Re-Re committed on
Commit 512adb5
1 parent: 3950fd4

Upload folder using huggingface_hub
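
The commit message matches `huggingface_hub`'s folder-upload flow. A minimal sketch of how a Sample Factory experiment directory is typically pushed this way (the repo id and local path are assumptions inferred from the push URL and paths in `sf_log.txt` below):

```python
# Sketch only: upload an experiment folder to the Hub with huggingface_hub.
# repo_id and folder_path are assumptions taken from the push URL and paths
# that appear in sf_log.txt below.
from huggingface_hub import upload_folder

upload_folder(
    repo_id="Re-Re/rl_course_vizdoom_health_gathering_supreme",
    folder_path="/content/train_dir/default_experiment",
    commit_message="Upload folder using huggingface_hub",
)
```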
.summary/0/events.out.tfevents.1725613561.4ed841473a2d ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdbec96bac0cb5fb2b9a35c75ada0d6a84615e0c5a3335f8f8ecd6d79c40c4cc
+ size 465016
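
The three added lines are a Git LFS pointer, not the event file itself: large binaries are stored out-of-band and referenced by SHA-256 and size. To materialize the actual ~465 KB TensorBoard file, one would typically download it through `huggingface_hub` (a sketch; the repo id is the same assumption as above):

```python
# Sketch: resolve the LFS pointer above to the real TensorBoard event file.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Re-Re/rl_course_vizdoom_health_gathering_supreme",  # assumed
    filename=".summary/0/events.out.tfevents.1725613561.4ed841473a2d",
)
print(local_path)  # cached local copy, ~465016 bytes per the pointer above
```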
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
   type: doom_health_gathering_supreme
   metrics:
   - type: mean_reward
- value: 11.86 +/- 6.87
+ value: 10.63 +/- 6.83
   name: mean_reward
   verified: false
   ---
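
`mean_reward` drops from 11.86 ± 6.87 to 10.63 ± 6.83; it corresponds to the "true rewards" figure the evaluation pass prints in `sf_log.txt` (11.856 for the previous run). A sketch of how that evaluation is typically rerun, assuming the `parse_vizdoom_cfg`/`register_vizdoom_components` helpers from `sf_examples.vizdoom.train_vizdoom` (or the equivalent Deep RL course notebook cells):

```python
# Sketch: re-run the evaluation that produces the model card's mean_reward.
# parse_vizdoom_cfg / register_vizdoom_components are assumed helpers
# (sf_examples / course notebook); flags mirror a typical eval run.
from sample_factory.enjoy import enjoy
from sf_examples.vizdoom.train_vizdoom import parse_vizdoom_cfg, register_vizdoom_components

register_vizdoom_components()
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",
        "--train_dir=/content/train_dir",
        "--experiment=default_experiment",
        "--num_workers=1",
        "--max_num_episodes=10",
        "--no_render",
    ],
    evaluation=True,
)
status = enjoy(cfg)  # logs "Avg episode rewards ... true rewards ..." as in sf_log.txt
```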
checkpoint_p0/best_000001717_7032832_reward_28.340.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:daa6b044cb522e5cb05c9082815a6e6a2c6fc67c28b98e30214ed3a0b4bd6027
+ size 34929243
checkpoint_p0/checkpoint_000001755_7188480.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31eb06ab5d211252ea83821851b29ad4d0597dd012bd58edd34077febd8e93e7
+ size 34929669
checkpoint_p0/checkpoint_000001833_7507968.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c64ef714f7762cad90ec6e52361f92dbadefb351528b2e9960891dd4f0bfee24
+ size 34929669
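
The two rolling checkpoints and the new best file follow Sample Factory's naming scheme, `checkpoint_<train_step>_<env_steps>.pth`, so `best_000001717_7032832_reward_28.340.pth` records the best-reward policy (the "Saving new best policy, reward=28.340!" line near the end of `sf_log.txt`). A sketch of inspecting one locally; the dict keys are assumptions consistent with the "Loaded experiment state at self.train_step=..., self.env_steps=..." log line:

```python
# Sketch: peek inside a downloaded Sample Factory checkpoint.
import torch

ckpt = torch.load(
    "checkpoint_p0/best_000001717_7032832_reward_28.340.pth",
    map_location="cpu",
)
# Key names are assumptions matching the resume log in sf_log.txt.
print(ckpt["train_step"], ckpt["env_steps"])  # e.g. 1717, 7032832
print(sorted(ckpt["model"].keys())[:5])       # actor-critic state-dict entries
```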
config.json CHANGED
@@ -65,7 +65,7 @@
   "summaries_use_frameskip": true,
   "heartbeat_interval": 20,
   "heartbeat_reporting_interval": 600,
- "train_for_env_steps": 5000000,
+ "train_for_env_steps": 7500000,
   "train_for_seconds": 10000000000,
   "save_every_sec": 120,
   "keep_checkpoints": 2,
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c49871602fc5a116f18f6ade3e48d8382390811cf358f940d7063b0c919ff633
- size 22845820
+ oid sha256:69c2e0d51b9dbf8963d4319ec5837c494a3bc0eb9889b1dbc46d480404b78135
+ size 20775831
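
`replay.mp4` is regenerated by the post-training evaluation: the new LFS pointer simply reflects a re-recorded video of the resumed policy. In the course workflow this is the same `enjoy` call that also pushes the folder back to the Hub ("Replay video saved to ..." followed by "The model has been pushed to ..." in the log). A sketch, with `--hf_repository` and the helpers again assumed:

```python
# Sketch: record a fresh replay and push the updated experiment to the Hub.
from sample_factory.enjoy import enjoy
from sf_examples.vizdoom.train_vizdoom import parse_vizdoom_cfg, register_vizdoom_components

register_vizdoom_components()
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_episodes=10",
        "--push_to_hub",
        "--hf_repository=Re-Re/rl_course_vizdoom_health_gathering_supreme",  # assumed flag usage
    ],
    evaluation=True,
)
status = enjoy(cfg)
```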
sf_log.txt CHANGED
@@ -1902,3 +1902,1047 @@ main_loop: 294.6944
  [2024-09-06 09:01:12,231][01070] Avg episode rewards: #0: 29.156, true rewards: #0: 11.856
  [2024-09-06 09:01:12,233][01070] Avg episode reward: 29.156, avg true_objective: 11.856
  [2024-09-06 09:02:25,893][01070] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2024-09-06 09:02:31,093][01070] The model has been pushed to https://huggingface.co/Re-Re/rl_course_vizdoom_health_gathering_supreme
+ [2024-09-06 09:06:01,283][01070] Environment doom_basic already registered, overwriting...
+ [2024-09-06 09:06:01,285][01070] Environment doom_two_colors_easy already registered, overwriting...
+ [2024-09-06 09:06:01,287][01070] Environment doom_two_colors_hard already registered, overwriting...
+ [2024-09-06 09:06:01,290][01070] Environment doom_dm already registered, overwriting...
+ [2024-09-06 09:06:01,293][01070] Environment doom_dwango5 already registered, overwriting...
+ [2024-09-06 09:06:01,295][01070] Environment doom_my_way_home_flat_actions already registered, overwriting...
+ [2024-09-06 09:06:01,296][01070] Environment doom_defend_the_center_flat_actions already registered, overwriting...
+ [2024-09-06 09:06:01,298][01070] Environment doom_my_way_home already registered, overwriting...
+ [2024-09-06 09:06:01,300][01070] Environment doom_deadly_corridor already registered, overwriting...
+ [2024-09-06 09:06:01,302][01070] Environment doom_defend_the_center already registered, overwriting...
+ [2024-09-06 09:06:01,304][01070] Environment doom_defend_the_line already registered, overwriting...
+ [2024-09-06 09:06:01,306][01070] Environment doom_health_gathering already registered, overwriting...
+ [2024-09-06 09:06:01,308][01070] Environment doom_health_gathering_supreme already registered, overwriting...
+ [2024-09-06 09:06:01,310][01070] Environment doom_battle already registered, overwriting...
+ [2024-09-06 09:06:01,313][01070] Environment doom_battle2 already registered, overwriting...
+ [2024-09-06 09:06:01,315][01070] Environment doom_duel_bots already registered, overwriting...
+ [2024-09-06 09:06:01,316][01070] Environment doom_deathmatch_bots already registered, overwriting...
+ [2024-09-06 09:06:01,318][01070] Environment doom_duel already registered, overwriting...
+ [2024-09-06 09:06:01,320][01070] Environment doom_deathmatch_full already registered, overwriting...
+ [2024-09-06 09:06:01,322][01070] Environment doom_benchmark already registered, overwriting...
+ [2024-09-06 09:06:01,323][01070] register_encoder_factory: <function make_vizdoom_encoder at 0x78dc5537e170>
+ [2024-09-06 09:06:01,341][01070] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2024-09-06 09:06:01,342][01070] Overriding arg 'train_for_env_steps' with value 7500000 passed from command line
+ [2024-09-06 09:06:01,349][01070] Experiment dir /content/train_dir/default_experiment already exists!
+ [2024-09-06 09:06:01,350][01070] Resuming existing experiment from /content/train_dir/default_experiment...
+ [2024-09-06 09:06:01,352][01070] Weights and Biases integration disabled
+ [2024-09-06 09:06:01,355][01070] Environment var CUDA_VISIBLE_DEVICES is 0
+
+ [2024-09-06 09:06:03,516][01070] Starting experiment with the following configuration:
+ help=False
+ algo=APPO
+ env=doom_health_gathering_supreme
+ experiment=default_experiment
+ train_dir=/content/train_dir
+ restart_behavior=resume
+ device=gpu
+ seed=None
+ num_policies=1
+ async_rl=True
+ serial_mode=False
+ batched_sampling=False
+ num_batches_to_accumulate=2
+ worker_num_splits=2
+ policy_workers_per_policy=1
+ max_policy_lag=1000
+ num_workers=8
+ num_envs_per_worker=4
+ batch_size=1024
+ num_batches_per_epoch=1
+ num_epochs=1
+ rollout=32
+ recurrence=32
+ shuffle_minibatches=False
+ gamma=0.99
+ reward_scale=1.0
+ reward_clip=1000.0
+ value_bootstrap=False
+ normalize_returns=True
+ exploration_loss_coeff=0.001
+ value_loss_coeff=0.5
+ kl_loss_coeff=0.0
+ exploration_loss=symmetric_kl
+ gae_lambda=0.95
+ ppo_clip_ratio=0.1
+ ppo_clip_value=0.2
+ with_vtrace=False
+ vtrace_rho=1.0
+ vtrace_c=1.0
+ optimizer=adam
+ adam_eps=1e-06
+ adam_beta1=0.9
+ adam_beta2=0.999
+ max_grad_norm=4.0
+ learning_rate=0.0001
+ lr_schedule=constant
+ lr_schedule_kl_threshold=0.008
+ lr_adaptive_min=1e-06
+ lr_adaptive_max=0.01
+ obs_subtract_mean=0.0
+ obs_scale=255.0
+ normalize_input=True
+ normalize_input_keys=None
+ decorrelate_experience_max_seconds=0
+ decorrelate_envs_on_one_worker=True
+ actor_worker_gpus=[]
+ set_workers_cpu_affinity=True
+ force_envs_single_thread=False
+ default_niceness=0
+ log_to_file=True
+ experiment_summaries_interval=10
+ flush_summaries_interval=30
+ stats_avg=100
+ summaries_use_frameskip=True
+ heartbeat_interval=20
+ heartbeat_reporting_interval=600
+ train_for_env_steps=7500000
+ train_for_seconds=10000000000
+ save_every_sec=120
+ keep_checkpoints=2
+ load_checkpoint_kind=latest
+ save_milestones_sec=-1
+ save_best_every_sec=5
+ save_best_metric=reward
+ save_best_after=100000
+ benchmark=False
+ encoder_mlp_layers=[512, 512]
+ encoder_conv_architecture=convnet_simple
+ encoder_conv_mlp_layers=[512]
+ use_rnn=True
+ rnn_size=512
+ rnn_type=gru
+ rnn_num_layers=1
+ decoder_mlp_layers=[]
+ nonlinearity=elu
+ policy_initialization=orthogonal
+ policy_init_gain=1.0
+ actor_critic_share_weights=True
+ adaptive_stddev=True
+ continuous_tanh_scale=0.0
+ initial_stddev=1.0
+ use_env_info_cache=False
+ env_gpu_actions=False
+ env_gpu_observations=True
+ env_frameskip=4
+ env_framestack=1
+ pixel_format=CHW
+ use_record_episode_statistics=False
+ with_wandb=False
+ wandb_user=None
+ wandb_project=sample_factory
+ wandb_group=None
+ wandb_job_type=SF
+ wandb_tags=[]
+ with_pbt=False
+ pbt_mix_policies_in_one_env=True
+ pbt_period_env_steps=5000000
+ pbt_start_mutation=20000000
+ pbt_replace_fraction=0.3
+ pbt_mutation_rate=0.15
+ pbt_replace_reward_gap=0.1
+ pbt_replace_reward_gap_absolute=1e-06
+ pbt_optimize_gamma=False
+ pbt_target_objective=true_objective
+ pbt_perturb_min=1.1
+ pbt_perturb_max=1.5
+ num_agents=-1
+ num_humans=0
+ num_bots=-1
+ start_bot_difficulty=None
+ timelimit=None
+ res_w=128
+ res_h=72
+ wide_aspect_ratio=False
+ eval_env_frameskip=1
+ fps=35
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
+ git_hash=unknown
+ git_repo_name=not a git repository
+ [2024-09-06 09:06:03,519][01070] Saving configuration to /content/train_dir/default_experiment/config.json...
+ [2024-09-06 09:06:03,524][01070] Rollout worker 0 uses device cpu
+ [2024-09-06 09:06:03,526][01070] Rollout worker 1 uses device cpu
+ [2024-09-06 09:06:03,527][01070] Rollout worker 2 uses device cpu
+ [2024-09-06 09:06:03,529][01070] Rollout worker 3 uses device cpu
+ [2024-09-06 09:06:03,530][01070] Rollout worker 4 uses device cpu
+ [2024-09-06 09:06:03,531][01070] Rollout worker 5 uses device cpu
+ [2024-09-06 09:06:03,532][01070] Rollout worker 6 uses device cpu
+ [2024-09-06 09:06:03,534][01070] Rollout worker 7 uses device cpu
+ [2024-09-06 09:06:03,608][01070] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-09-06 09:06:03,610][01070] InferenceWorker_p0-w0: min num requests: 2
+ [2024-09-06 09:06:03,643][01070] Starting all processes...
+ [2024-09-06 09:06:03,645][01070] Starting process learner_proc0
+ [2024-09-06 09:06:03,693][01070] Starting all processes...
+ [2024-09-06 09:06:03,699][01070] Starting process inference_proc0-0
+ [2024-09-06 09:06:03,700][01070] Starting process rollout_proc0
+ [2024-09-06 09:06:03,701][01070] Starting process rollout_proc1
+ [2024-09-06 09:06:03,701][01070] Starting process rollout_proc2
+ [2024-09-06 09:06:03,702][01070] Starting process rollout_proc3
+ [2024-09-06 09:06:03,702][01070] Starting process rollout_proc4
+ [2024-09-06 09:06:03,702][01070] Starting process rollout_proc5
+ [2024-09-06 09:06:03,702][01070] Starting process rollout_proc6
+ [2024-09-06 09:06:03,702][01070] Starting process rollout_proc7
+ [2024-09-06 09:06:20,401][26905] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-09-06 09:06:20,401][26905] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+ [2024-09-06 09:06:20,451][26905] Num visible devices: 1
+ [2024-09-06 09:06:20,487][26905] Starting seed is not provided
+ [2024-09-06 09:06:20,488][26905] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-09-06 09:06:20,489][26905] Initializing actor-critic model on device cuda:0
+ [2024-09-06 09:06:20,489][26905] RunningMeanStd input shape: (3, 72, 128)
+ [2024-09-06 09:06:20,491][26905] RunningMeanStd input shape: (1,)
+ [2024-09-06 09:06:20,545][26905] ConvEncoder: input_channels=3
+ [2024-09-06 09:06:20,881][26918] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-09-06 09:06:20,882][26918] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+ [2024-09-06 09:06:20,909][26926] Worker 7 uses CPU cores [1]
+ [2024-09-06 09:06:20,990][26918] Num visible devices: 1
+ [2024-09-06 09:06:21,196][26923] Worker 4 uses CPU cores [0]
+ [2024-09-06 09:06:21,222][26924] Worker 5 uses CPU cores [1]
+ [2024-09-06 09:06:21,235][26919] Worker 0 uses CPU cores [0]
+ [2024-09-06 09:06:21,331][26920] Worker 1 uses CPU cores [1]
+ [2024-09-06 09:06:21,331][26921] Worker 2 uses CPU cores [0]
+ [2024-09-06 09:06:21,344][26905] Conv encoder output size: 512
+ [2024-09-06 09:06:21,345][26905] Policy head output size: 512
+ [2024-09-06 09:06:21,393][26922] Worker 3 uses CPU cores [1]
+ [2024-09-06 09:06:21,395][26905] Created Actor Critic model with architecture:
+ [2024-09-06 09:06:21,395][26905] ActorCriticSharedWeights(
+ (obs_normalizer): ObservationNormalizer(
+ (running_mean_std): RunningMeanStdDictInPlace(
+ (running_mean_std): ModuleDict(
+ (obs): RunningMeanStdInPlace()
+ )
+ )
+ )
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+ (encoder): VizdoomEncoder(
+ (basic_encoder): ConvEncoder(
+ (enc): RecursiveScriptModule(
+ original_name=ConvEncoderImpl
+ (conv_head): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Conv2d)
+ (1): RecursiveScriptModule(original_name=ELU)
+ (2): RecursiveScriptModule(original_name=Conv2d)
+ (3): RecursiveScriptModule(original_name=ELU)
+ (4): RecursiveScriptModule(original_name=Conv2d)
+ (5): RecursiveScriptModule(original_name=ELU)
+ )
+ (mlp_layers): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Linear)
+ (1): RecursiveScriptModule(original_name=ELU)
+ )
+ )
+ )
+ )
+ (core): ModelCoreRNN(
+ (core): GRU(512, 512)
+ )
+ (decoder): MlpDecoder(
+ (mlp): Identity()
+ )
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+ (action_parameterization): ActionParameterizationDefault(
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+ )
+ )
+ [2024-09-06 09:06:21,429][26925] Worker 6 uses CPU cores [0]
+ [2024-09-06 09:06:21,538][26905] Using optimizer <class 'torch.optim.adam.Adam'>
+ [2024-09-06 09:06:22,136][26905] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
+ [2024-09-06 09:06:22,176][26905] Loading model from checkpoint
+ [2024-09-06 09:06:22,178][26905] Loaded experiment state at self.train_step=1222, self.env_steps=5005312
+ [2024-09-06 09:06:22,178][26905] Initialized policy 0 weights for model version 1222
+ [2024-09-06 09:06:22,183][26905] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-09-06 09:06:22,190][26905] LearnerWorker_p0 finished initialization!
+ [2024-09-06 09:06:22,275][26918] RunningMeanStd input shape: (3, 72, 128)
+ [2024-09-06 09:06:22,276][26918] RunningMeanStd input shape: (1,)
+ [2024-09-06 09:06:22,288][26918] ConvEncoder: input_channels=3
+ [2024-09-06 09:06:22,391][26918] Conv encoder output size: 512
+ [2024-09-06 09:06:22,391][26918] Policy head output size: 512
+ [2024-09-06 09:06:22,443][01070] Inference worker 0-0 is ready!
+ [2024-09-06 09:06:22,445][01070] All inference workers are ready! Signal rollout workers to start!
+ [2024-09-06 09:06:22,677][26920] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:22,689][26922] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:22,692][26924] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:22,683][26926] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:22,720][26919] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:22,725][26925] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:22,727][26921] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:22,715][26923] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-06 09:06:23,532][26919] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:23,600][01070] Heartbeat connected on Batcher_0
+ [2024-09-06 09:06:23,605][01070] Heartbeat connected on LearnerWorker_p0
+ [2024-09-06 09:06:23,634][01070] Heartbeat connected on InferenceWorker_p0-w0
+ [2024-09-06 09:06:23,927][26923] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:24,411][26922] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:24,421][26924] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:24,414][26920] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:24,428][26926] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:26,027][26919] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:26,059][26921] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:26,129][26922] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:26,130][26920] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:26,135][26924] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:26,142][26926] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:26,359][01070] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 5005312. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-09-06 09:06:27,542][26921] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:27,689][26923] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:27,841][26920] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:28,170][26919] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:29,208][26925] Decorrelating experience for 0 frames...
+ [2024-09-06 09:06:29,215][26922] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:29,356][26921] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:29,802][26924] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:29,918][26920] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:30,160][01070] Heartbeat connected on RolloutWorker_w1
+ [2024-09-06 09:06:30,247][26919] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:30,413][01070] Heartbeat connected on RolloutWorker_w0
+ [2024-09-06 09:06:30,668][26921] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:30,864][01070] Heartbeat connected on RolloutWorker_w2
+ [2024-09-06 09:06:30,935][26926] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:31,356][01070] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 5005312. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-09-06 09:06:31,362][01070] Avg episode reward: [(0, '0.380')]
+ [2024-09-06 09:06:31,506][26924] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:31,519][26923] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:31,875][01070] Heartbeat connected on RolloutWorker_w5
+ [2024-09-06 09:06:33,162][26926] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:33,550][01070] Heartbeat connected on RolloutWorker_w7
+ [2024-09-06 09:06:33,915][26922] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:34,378][01070] Heartbeat connected on RolloutWorker_w3
+ [2024-09-06 09:06:34,636][26905] Signal inference workers to stop experience collection...
+ [2024-09-06 09:06:34,665][26918] InferenceWorker_p0-w0: stopping experience collection
+ [2024-09-06 09:06:34,723][26923] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:34,840][01070] Heartbeat connected on RolloutWorker_w4
+ [2024-09-06 09:06:35,103][26925] Decorrelating experience for 32 frames...
+ [2024-09-06 09:06:35,634][26925] Decorrelating experience for 64 frames...
+ [2024-09-06 09:06:36,059][26925] Decorrelating experience for 96 frames...
+ [2024-09-06 09:06:36,142][01070] Heartbeat connected on RolloutWorker_w6
+ [2024-09-06 09:06:36,355][01070] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 5005312. Throughput: 0: 218.7. Samples: 2186. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-09-06 09:06:36,358][01070] Avg episode reward: [(0, '5.207')]
+ [2024-09-06 09:06:37,319][26905] Signal inference workers to resume experience collection...
+ [2024-09-06 09:06:37,321][26918] InferenceWorker_p0-w0: resuming experience collection
+ [2024-09-06 09:06:41,356][01070] Fps is (10 sec: 2048.0, 60 sec: 1365.6, 300 sec: 1365.6). Total num frames: 5025792. Throughput: 0: 364.1. Samples: 5460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:06:41,358][01070] Avg episode reward: [(0, '6.203')]
+ [2024-09-06 09:06:46,356][01070] Fps is (10 sec: 3276.8, 60 sec: 1638.6, 300 sec: 1638.6). Total num frames: 5038080. Throughput: 0: 379.1. Samples: 7580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+ [2024-09-06 09:06:46,366][01070] Avg episode reward: [(0, '11.223')]
+ [2024-09-06 09:06:47,757][26918] Updated weights for policy 0, policy_version 1232 (0.0226)
+ [2024-09-06 09:06:51,356][01070] Fps is (10 sec: 3686.3, 60 sec: 2294.0, 300 sec: 2294.0). Total num frames: 5062656. Throughput: 0: 512.5. Samples: 12812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:06:51,362][01070] Avg episode reward: [(0, '17.027')]
+ [2024-09-06 09:06:56,356][01070] Fps is (10 sec: 4096.0, 60 sec: 2457.8, 300 sec: 2457.8). Total num frames: 5079040. Throughput: 0: 637.0. Samples: 19108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:06:56,358][01070] Avg episode reward: [(0, '18.689')]
+ [2024-09-06 09:06:57,557][26918] Updated weights for policy 0, policy_version 1242 (0.0039)
+ [2024-09-06 09:07:01,360][01070] Fps is (10 sec: 3275.6, 60 sec: 2574.6, 300 sec: 2574.6). Total num frames: 5095424. Throughput: 0: 621.9. Samples: 21768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:07:01,363][01070] Avg episode reward: [(0, '20.324')]
+ [2024-09-06 09:07:06,356][01070] Fps is (10 sec: 3686.4, 60 sec: 2765.0, 300 sec: 2765.0). Total num frames: 5115904. Throughput: 0: 649.8. Samples: 25990. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:07:06,357][01070] Avg episode reward: [(0, '21.032')]
+ [2024-09-06 09:07:08,828][26918] Updated weights for policy 0, policy_version 1252 (0.0020)
+ [2024-09-06 09:07:11,356][01070] Fps is (10 sec: 4097.5, 60 sec: 2912.9, 300 sec: 2912.9). Total num frames: 5136384. Throughput: 0: 737.4. Samples: 33182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:07:11,364][01070] Avg episode reward: [(0, '25.508')]
+ [2024-09-06 09:07:16,356][01070] Fps is (10 sec: 4095.7, 60 sec: 3031.2, 300 sec: 3031.2). Total num frames: 5156864. Throughput: 0: 816.6. Samples: 36746. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:07:16,362][01070] Avg episode reward: [(0, '26.378')]
+ [2024-09-06 09:07:19,470][26918] Updated weights for policy 0, policy_version 1262 (0.0038)
+ [2024-09-06 09:07:21,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3053.5, 300 sec: 3053.5). Total num frames: 5173248. Throughput: 0: 874.2. Samples: 41524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:07:21,358][01070] Avg episode reward: [(0, '25.084')]
+ [2024-09-06 09:07:26,356][01070] Fps is (10 sec: 3686.6, 60 sec: 3140.4, 300 sec: 3140.4). Total num frames: 5193728. Throughput: 0: 937.4. Samples: 47642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:07:26,358][01070] Avg episode reward: [(0, '24.867')]
+ [2024-09-06 09:07:28,996][26918] Updated weights for policy 0, policy_version 1272 (0.0035)
+ [2024-09-06 09:07:31,356][01070] Fps is (10 sec: 4505.7, 60 sec: 3549.9, 300 sec: 3277.0). Total num frames: 5218304. Throughput: 0: 967.2. Samples: 51102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:07:31,358][01070] Avg episode reward: [(0, '24.314')]
+ [2024-09-06 09:07:36,356][01070] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3276.9). Total num frames: 5234688. Throughput: 0: 981.9. Samples: 56998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:07:36,360][01070] Avg episode reward: [(0, '22.614')]
+ [2024-09-06 09:07:40,320][26918] Updated weights for policy 0, policy_version 1282 (0.0028)
+ [2024-09-06 09:07:41,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3331.5). Total num frames: 5255168. Throughput: 0: 955.6. Samples: 62112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:07:41,358][01070] Avg episode reward: [(0, '21.784')]
+ [2024-09-06 09:07:46,356][01070] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3430.5). Total num frames: 5279744. Throughput: 0: 978.5. Samples: 65798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:07:46,358][01070] Avg episode reward: [(0, '22.907')]
+ [2024-09-06 09:07:48,978][26918] Updated weights for policy 0, policy_version 1292 (0.0023)
+ [2024-09-06 09:07:51,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3421.5). Total num frames: 5296128. Throughput: 0: 1038.5. Samples: 72724. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2024-09-06 09:07:51,363][01070] Avg episode reward: [(0, '23.940')]
+ [2024-09-06 09:07:56,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3413.4). Total num frames: 5312512. Throughput: 0: 974.1. Samples: 77018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:07:56,358][01070] Avg episode reward: [(0, '23.407')]
+ [2024-09-06 09:08:00,195][26918] Updated weights for policy 0, policy_version 1302 (0.0029)
+ [2024-09-06 09:08:01,356][01070] Fps is (10 sec: 4096.0, 60 sec: 4028.0, 300 sec: 3492.5). Total num frames: 5337088. Throughput: 0: 964.8. Samples: 80160. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+ [2024-09-06 09:08:01,362][01070] Avg episode reward: [(0, '23.887')]
+ [2024-09-06 09:08:01,369][26905] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001303_5337088.pth...
+ [2024-09-06 09:08:01,513][26905] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001172_4800512.pth
+ [2024-09-06 09:08:06,356][01070] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3563.6). Total num frames: 5361664. Throughput: 0: 1017.4. Samples: 87308. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:08:06,363][01070] Avg episode reward: [(0, '25.013')]
+ [2024-09-06 09:08:10,733][26918] Updated weights for policy 0, policy_version 1312 (0.0019)
+ [2024-09-06 09:08:11,358][01070] Fps is (10 sec: 3685.5, 60 sec: 3959.3, 300 sec: 3510.9). Total num frames: 5373952. Throughput: 0: 993.7. Samples: 92362. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+ [2024-09-06 09:08:11,364][01070] Avg episode reward: [(0, '24.217')]
+ [2024-09-06 09:08:16,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3537.5). Total num frames: 5394432. Throughput: 0: 965.2. Samples: 94536. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:08:16,363][01070] Avg episode reward: [(0, '23.685')]
+ [2024-09-06 09:08:20,626][26918] Updated weights for policy 0, policy_version 1322 (0.0015)
+ [2024-09-06 09:08:21,356][01070] Fps is (10 sec: 4096.9, 60 sec: 4027.8, 300 sec: 3561.8). Total num frames: 5414912. Throughput: 0: 991.2. Samples: 101604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:08:21,365][01070] Avg episode reward: [(0, '23.700')]
+ [2024-09-06 09:08:26,356][01070] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3584.1). Total num frames: 5435392. Throughput: 0: 1018.9. Samples: 107962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:08:26,358][01070] Avg episode reward: [(0, '22.939')]
+ [2024-09-06 09:08:31,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3571.8). Total num frames: 5451776. Throughput: 0: 982.6. Samples: 110016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:08:31,362][01070] Avg episode reward: [(0, '21.481')]
+ [2024-09-06 09:08:32,058][26918] Updated weights for policy 0, policy_version 1332 (0.0022)
+ [2024-09-06 09:08:36,356][01070] Fps is (10 sec: 4096.1, 60 sec: 4027.7, 300 sec: 3623.5). Total num frames: 5476352. Throughput: 0: 964.4. Samples: 116120. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:08:36,358][01070] Avg episode reward: [(0, '21.845')]
+ [2024-09-06 09:08:40,559][26918] Updated weights for policy 0, policy_version 1342 (0.0036)
+ [2024-09-06 09:08:41,357][01070] Fps is (10 sec: 4504.8, 60 sec: 4027.6, 300 sec: 3640.9). Total num frames: 5496832. Throughput: 0: 1029.1. Samples: 123328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:08:41,362][01070] Avg episode reward: [(0, '22.820')]
+ [2024-09-06 09:08:46,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3628.0). Total num frames: 5513216. Throughput: 0: 1012.0. Samples: 125700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:08:46,358][01070] Avg episode reward: [(0, '24.266')]
+ [2024-09-06 09:08:51,356][01070] Fps is (10 sec: 3687.1, 60 sec: 3959.5, 300 sec: 3644.1). Total num frames: 5533696. Throughput: 0: 961.3. Samples: 130566. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2024-09-06 09:08:51,359][01070] Avg episode reward: [(0, '26.457')]
+ [2024-09-06 09:08:52,046][26918] Updated weights for policy 0, policy_version 1352 (0.0038)
+ [2024-09-06 09:08:56,355][01070] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3686.5). Total num frames: 5558272. Throughput: 0: 1008.9. Samples: 137758. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:08:56,363][01070] Avg episode reward: [(0, '26.386')]
+ [2024-09-06 09:09:01,358][01070] Fps is (10 sec: 4095.1, 60 sec: 3959.3, 300 sec: 3673.2). Total num frames: 5574656. Throughput: 0: 1032.2. Samples: 140988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:09:01,363][01070] Avg episode reward: [(0, '26.784')]
+ [2024-09-06 09:09:01,373][26905] Saving new best policy, reward=26.784!
+ [2024-09-06 09:09:02,724][26918] Updated weights for policy 0, policy_version 1362 (0.0027)
+ [2024-09-06 09:09:06,356][01070] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3635.3). Total num frames: 5586944. Throughput: 0: 968.6. Samples: 145192. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:09:06,359][01070] Avg episode reward: [(0, '26.871')]
+ [2024-09-06 09:09:06,422][26905] Saving new best policy, reward=26.871!
+ [2024-09-06 09:09:11,356][01070] Fps is (10 sec: 3687.2, 60 sec: 3959.6, 300 sec: 3674.1). Total num frames: 5611520. Throughput: 0: 970.1. Samples: 151618. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:09:11,362][01070] Avg episode reward: [(0, '26.221')]
+ [2024-09-06 09:09:12,605][26918] Updated weights for policy 0, policy_version 1372 (0.0024)
+ [2024-09-06 09:09:16,361][01070] Fps is (10 sec: 4912.6, 60 sec: 4027.4, 300 sec: 3710.4). Total num frames: 5636096. Throughput: 0: 1003.0. Samples: 155156. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:09:16,368][01070] Avg episode reward: [(0, '25.283')]
+ [2024-09-06 09:09:21,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3674.8). Total num frames: 5648384. Throughput: 0: 987.0. Samples: 160534. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:09:21,361][01070] Avg episode reward: [(0, '25.122')]
+ [2024-09-06 09:09:24,141][26918] Updated weights for policy 0, policy_version 1382 (0.0031)
+ [2024-09-06 09:09:26,356][01070] Fps is (10 sec: 3278.5, 60 sec: 3891.2, 300 sec: 3686.5). Total num frames: 5668864. Throughput: 0: 951.7. Samples: 166152. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2024-09-06 09:09:26,358][01070] Avg episode reward: [(0, '25.810')]
+ [2024-09-06 09:09:31,356][01070] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3719.7). Total num frames: 5693440. Throughput: 0: 974.4. Samples: 169550. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:09:31,362][01070] Avg episode reward: [(0, '24.912')]
+ [2024-09-06 09:09:32,863][26918] Updated weights for policy 0, policy_version 1392 (0.0021)
+ [2024-09-06 09:09:36,357][01070] Fps is (10 sec: 4095.7, 60 sec: 3891.1, 300 sec: 3708.0). Total num frames: 5709824. Throughput: 0: 1008.7. Samples: 175958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:09:36,359][01070] Avg episode reward: [(0, '25.858')]
+ [2024-09-06 09:09:41,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3823.1, 300 sec: 3697.0). Total num frames: 5726208. Throughput: 0: 948.8. Samples: 180456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:09:41,361][01070] Avg episode reward: [(0, '25.156')]
+ [2024-09-06 09:09:44,121][26918] Updated weights for policy 0, policy_version 1402 (0.0019)
+ [2024-09-06 09:09:46,356][01070] Fps is (10 sec: 4096.4, 60 sec: 3959.4, 300 sec: 3727.4). Total num frames: 5750784. Throughput: 0: 955.1. Samples: 183966. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2024-09-06 09:09:46,362][01070] Avg episode reward: [(0, '24.774')]
+ [2024-09-06 09:09:51,356][01070] Fps is (10 sec: 4505.5, 60 sec: 3959.4, 300 sec: 3736.4). Total num frames: 5771264. Throughput: 0: 1022.3. Samples: 191196. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2024-09-06 09:09:51,358][01070] Avg episode reward: [(0, '25.942')]
+ [2024-09-06 09:09:54,349][26918] Updated weights for policy 0, policy_version 1412 (0.0027)
+ [2024-09-06 09:09:56,356][01070] Fps is (10 sec: 3686.2, 60 sec: 3822.9, 300 sec: 3725.4). Total num frames: 5787648. Throughput: 0: 982.8. Samples: 195844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:09:56,359][01070] Avg episode reward: [(0, '25.874')]
+ [2024-09-06 09:10:01,356][01070] Fps is (10 sec: 2867.3, 60 sec: 3754.8, 300 sec: 3696.0). Total num frames: 5799936. Throughput: 0: 943.3. Samples: 197598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:10:01,363][01070] Avg episode reward: [(0, '26.130')]
+ [2024-09-06 09:10:01,378][26905] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001416_5799936.pth...
+ [2024-09-06 09:10:01,573][26905] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth
+ [2024-09-06 09:10:06,356][01070] Fps is (10 sec: 2867.4, 60 sec: 3822.9, 300 sec: 3686.5). Total num frames: 5816320. Throughput: 0: 924.6. Samples: 202142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:10:06,357][01070] Avg episode reward: [(0, '25.358')]
+ [2024-09-06 09:10:07,253][26918] Updated weights for policy 0, policy_version 1422 (0.0021)
+ [2024-09-06 09:10:11,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3695.6). Total num frames: 5836800. Throughput: 0: 948.5. Samples: 208836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:10:11,366][01070] Avg episode reward: [(0, '26.176')]
+ [2024-09-06 09:10:16,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3618.5, 300 sec: 3686.4). Total num frames: 5853184. Throughput: 0: 920.9. Samples: 210992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:10:16,358][01070] Avg episode reward: [(0, '25.004')]
+ [2024-09-06 09:10:18,768][26918] Updated weights for policy 0, policy_version 1432 (0.0026)
+ [2024-09-06 09:10:21,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3712.6). Total num frames: 5877760. Throughput: 0: 901.2. Samples: 216510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:10:21,363][01070] Avg episode reward: [(0, '25.456')]
+ [2024-09-06 09:10:26,355][01070] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3737.6). Total num frames: 5902336. Throughput: 0: 961.4. Samples: 223718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:10:26,360][01070] Avg episode reward: [(0, '24.545')]
+ [2024-09-06 09:10:27,436][26918] Updated weights for policy 0, policy_version 1442 (0.0016)
+ [2024-09-06 09:10:31,360][01070] Fps is (10 sec: 3684.9, 60 sec: 3686.2, 300 sec: 3711.5). Total num frames: 5914624. Throughput: 0: 949.5. Samples: 226696. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:10:31,370][01070] Avg episode reward: [(0, '25.006')]
+ [2024-09-06 09:10:36,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3719.2). Total num frames: 5935104. Throughput: 0: 885.6. Samples: 231046. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:10:36,358][01070] Avg episode reward: [(0, '25.665')]
+ [2024-09-06 09:10:38,547][26918] Updated weights for policy 0, policy_version 1452 (0.0016)
+ [2024-09-06 09:10:41,356][01070] Fps is (10 sec: 4507.4, 60 sec: 3891.2, 300 sec: 3742.7). Total num frames: 5959680. Throughput: 0: 940.1. Samples: 238146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2024-09-06 09:10:41,357][01070] Avg episode reward: [(0, '25.912')]
+ [2024-09-06 09:10:46,356][01070] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3749.5). Total num frames: 5980160. Throughput: 0: 981.6. Samples: 241772. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
+ [2024-09-06 09:10:46,362][01070] Avg episode reward: [(0, '25.585')]
+ [2024-09-06 09:10:49,005][26918] Updated weights for policy 0, policy_version 1462 (0.0028)
+ [2024-09-06 09:10:51,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3725.1). Total num frames: 5992448. Throughput: 0: 986.2. Samples: 246522. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:10:51,364][01070] Avg episode reward: [(0, '25.632')]
+ [2024-09-06 09:10:56,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3747.1). Total num frames: 6017024. Throughput: 0: 971.8. Samples: 252568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:10:56,363][01070] Avg episode reward: [(0, '27.421')]
+ [2024-09-06 09:10:56,365][26905] Saving new best policy, reward=27.421!
+ [2024-09-06 09:10:59,150][26918] Updated weights for policy 0, policy_version 1472 (0.0039)
+ [2024-09-06 09:11:01,356][01070] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3753.5). Total num frames: 6037504. Throughput: 0: 998.2. Samples: 255912. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:11:01,358][01070] Avg episode reward: [(0, '26.535')]
+ [2024-09-06 09:11:06,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3745.0). Total num frames: 6053888. Throughput: 0: 997.8. Samples: 261410. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:11:06,359][01070] Avg episode reward: [(0, '25.905')]
+ [2024-09-06 09:11:10,964][26918] Updated weights for policy 0, policy_version 1482 (0.0037)
+ [2024-09-06 09:11:11,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3736.7). Total num frames: 6070272. Throughput: 0: 944.5. Samples: 266222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:11:11,358][01070] Avg episode reward: [(0, '26.962')]
+ [2024-09-06 09:11:16,356][01070] Fps is (10 sec: 4095.7, 60 sec: 4027.7, 300 sec: 3757.1). Total num frames: 6094848. Throughput: 0: 958.3. Samples: 269818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:11:16,360][01070] Avg episode reward: [(0, '26.222')]
+ [2024-09-06 09:11:19,722][26918] Updated weights for policy 0, policy_version 1492 (0.0021)
+ [2024-09-06 09:11:21,356][01070] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3762.8). Total num frames: 6115328. Throughput: 0: 1021.4. Samples: 277008. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2024-09-06 09:11:21,363][01070] Avg episode reward: [(0, '26.579')]
+ [2024-09-06 09:11:26,356][01070] Fps is (10 sec: 3277.0, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 6127616. Throughput: 0: 959.7. Samples: 281332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:11:26,360][01070] Avg episode reward: [(0, '26.111')]
+ [2024-09-06 09:11:30,986][26918] Updated weights for policy 0, policy_version 1502 (0.0029)
+ [2024-09-06 09:11:31,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3959.7, 300 sec: 3887.7). Total num frames: 6152192. Throughput: 0: 948.0. Samples: 284432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:11:31,358][01070] Avg episode reward: [(0, '27.333')]
+ [2024-09-06 09:11:36,356][01070] Fps is (10 sec: 4914.9, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 6176768. Throughput: 0: 998.4. Samples: 291450. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:11:36,358][01070] Avg episode reward: [(0, '27.316')]
+ [2024-09-06 09:11:41,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 6189056. Throughput: 0: 983.0. Samples: 296802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:11:41,362][01070] Avg episode reward: [(0, '27.328')]
+ [2024-09-06 09:11:41,444][26918] Updated weights for policy 0, policy_version 1512 (0.0034)
+ [2024-09-06 09:11:46,356][01070] Fps is (10 sec: 3277.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 6209536. Throughput: 0: 956.0. Samples: 298930. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2024-09-06 09:11:46,363][01070] Avg episode reward: [(0, '27.148')]
+ [2024-09-06 09:11:51,072][26918] Updated weights for policy 0, policy_version 1522 (0.0025)
+ [2024-09-06 09:11:51,356][01070] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 6234112. Throughput: 0: 990.5. Samples: 305984. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:11:51,358][01070] Avg episode reward: [(0, '27.216')]
+ [2024-09-06 09:11:56,356][01070] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 6254592. Throughput: 0: 1028.2. Samples: 312490. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2024-09-06 09:11:56,361][01070] Avg episode reward: [(0, '26.377')]
+ [2024-09-06 09:12:01,358][01070] Fps is (10 sec: 3276.0, 60 sec: 3822.8, 300 sec: 3901.6). Total num frames: 6266880. Throughput: 0: 995.6. Samples: 314624. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:12:01,361][01070] Avg episode reward: [(0, '26.319')]
+ [2024-09-06 09:12:01,378][26905] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001530_6266880.pth...
+ [2024-09-06 09:12:01,525][26905] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001303_5337088.pth
+ [2024-09-06 09:12:02,686][26918] Updated weights for policy 0, policy_version 1532 (0.0038)
+ [2024-09-06 09:12:06,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 6291456. Throughput: 0: 960.0. Samples: 320206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:12:06,363][01070] Avg episode reward: [(0, '26.331')]
+ [2024-09-06 09:12:11,270][26918] Updated weights for policy 0, policy_version 1542 (0.0038)
+ [2024-09-06 09:12:11,356][01070] Fps is (10 sec: 4916.4, 60 sec: 4096.0, 300 sec: 3929.4). Total num frames: 6316032. Throughput: 0: 1024.0. Samples: 327410. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:12:11,362][01070] Avg episode reward: [(0, '25.599')]
+ [2024-09-06 09:12:16,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 6328320. Throughput: 0: 1012.3. Samples: 329984. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2024-09-06 09:12:16,364][01070] Avg episode reward: [(0, '26.978')]
+ [2024-09-06 09:12:21,356][01070] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 6348800. Throughput: 0: 960.3. Samples: 334664. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:12:21,360][01070] Avg episode reward: [(0, '27.058')]
+ [2024-09-06 09:12:22,708][26918] Updated weights for policy 0, policy_version 1552 (0.0022)
+ [2024-09-06 09:12:26,356][01070] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3915.5). Total num frames: 6373376. Throughput: 0: 1002.0. Samples: 341894. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:12:26,360][01070] Avg episode reward: [(0, '26.297')]
+ [2024-09-06 09:12:31,356][01070] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 6389760. Throughput: 0: 1035.0. Samples: 345504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:12:31,362][01070] Avg episode reward: [(0, '25.448')]
+ [2024-09-06 09:12:32,950][26918] Updated weights for policy 0, policy_version 1562 (0.0029)
+ [2024-09-06 09:12:36,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3901.6). Total num frames: 6406144. Throughput: 0: 975.6. Samples: 349884. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:12:36,358][01070] Avg episode reward: [(0, '24.471')]
+ [2024-09-06 09:12:41,356][01070] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 6430720. Throughput: 0: 974.3. Samples: 356334. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:12:41,358][01070] Avg episode reward: [(0, '24.485')]
+ [2024-09-06 09:12:42,679][26918] Updated weights for policy 0, policy_version 1572 (0.0032)
+ [2024-09-06 09:12:46,359][01070] Fps is (10 sec: 4913.6, 60 sec: 4095.8, 300 sec: 3929.3). Total num frames: 6455296. Throughput: 0: 1007.1. Samples: 359942. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:12:46,361][01070] Avg episode reward: [(0, '23.821')]
+ [2024-09-06 09:12:51,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 6467584. Throughput: 0: 1006.8. Samples: 365512. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:12:51,359][01070] Avg episode reward: [(0, '24.404')]
+ [2024-09-06 09:12:54,080][26918] Updated weights for policy 0, policy_version 1582 (0.0035)
+ [2024-09-06 09:12:56,356][01070] Fps is (10 sec: 3277.8, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 6488064. Throughput: 0: 965.9. Samples: 370876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:12:56,358][01070] Avg episode reward: [(0, '25.540')]
+ [2024-09-06 09:13:01,356][01070] Fps is (10 sec: 4505.6, 60 sec: 4096.2, 300 sec: 3901.6). Total num frames: 6512640. Throughput: 0: 987.1. Samples: 374404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:13:01,360][01070] Avg episode reward: [(0, '25.883')]
+ [2024-09-06 09:13:02,760][26918] Updated weights for policy 0, policy_version 1592 (0.0021)
+ [2024-09-06 09:13:06,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 6529024. Throughput: 0: 1028.7. Samples: 380956. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-09-06 09:13:06,359][01070] Avg episode reward: [(0, '27.463')]
+ [2024-09-06 09:13:06,363][26905] Saving new best policy, reward=27.463!
+ [2024-09-06 09:13:11,356][01070] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3887.7). Total num frames: 6541312. Throughput: 0: 953.2. Samples: 384790. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:13:11,359][01070] Avg episode reward: [(0, '26.136')]
+ [2024-09-06 09:13:15,232][26918] Updated weights for policy 0, policy_version 1602 (0.0024)
+ [2024-09-06 09:13:16,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 6565888. Throughput: 0: 938.2. Samples: 387722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:13:16,358][01070] Avg episode reward: [(0, '26.648')]
+ [2024-09-06 09:13:21,356][01070] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 6586368. Throughput: 0: 990.0. Samples: 394432. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:13:21,362][01070] Avg episode reward: [(0, '26.222')]
+ [2024-09-06 09:13:26,258][26918] Updated weights for policy 0, policy_version 1612 (0.0029)
+ [2024-09-06 09:13:26,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 6602752. Throughput: 0: 956.8. Samples: 399388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:13:26,362][01070] Avg episode reward: [(0, '25.453')]
+ [2024-09-06 09:13:31,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 6619136. Throughput: 0: 922.8. Samples: 401464. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:13:31,363][01070] Avg episode reward: [(0, '25.128')]
+ [2024-09-06 09:13:36,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 6639616. Throughput: 0: 946.2. Samples: 408090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:13:36,362][01070] Avg episode reward: [(0, '24.206')]
+ [2024-09-06 09:13:36,380][26918] Updated weights for policy 0, policy_version 1622 (0.0021)
+ [2024-09-06 09:13:41,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 6660096. Throughput: 0: 955.2. Samples: 413862. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2024-09-06 09:13:41,358][01070] Avg episode reward: [(0, '26.013')]
+ [2024-09-06 09:13:46,359][01070] Fps is (10 sec: 2866.1, 60 sec: 3549.8, 300 sec: 3846.0). Total num frames: 6668288. Throughput: 0: 912.7. Samples: 415480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:13:46,363][01070] Avg episode reward: [(0, '26.088')]
+ [2024-09-06 09:13:51,356][01070] Fps is (10 sec: 2048.0, 60 sec: 3549.9, 300 sec: 3804.4). Total num frames: 6680576. Throughput: 0: 838.4. Samples: 418684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-09-06 09:13:51,364][01070] Avg episode reward: [(0, '26.938')]
+ [2024-09-06 09:13:51,474][26918] Updated weights for policy 0, policy_version 1632 (0.0031)
+ [2024-09-06 09:13:56,356][01070] Fps is (10 sec: 3687.8, 60 sec: 3618.1, 300 sec: 3832.2). Total num frames: 6705152. Throughput: 0: 892.4. Samples: 424946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:13:56,361][01070] Avg episode reward: [(0, '24.673')]
+ [2024-09-06 09:14:00,965][26918] Updated weights for policy 0, policy_version 1642 (0.0021)
+ [2024-09-06 09:14:01,356][01070] Fps is (10 sec: 4505.7, 60 sec: 3549.9, 300 sec: 3860.0). Total num frames: 6725632. Throughput: 0: 902.8. Samples: 428348. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:01,359][01070] Avg episode reward: [(0, '26.432')]
+ [2024-09-06 09:14:01,378][26905] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001642_6725632.pth...
+ [2024-09-06 09:14:01,547][26905] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001416_5799936.pth
+ [2024-09-06 09:14:06,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3818.3). Total num frames: 6737920. Throughput: 0: 856.3. Samples: 432966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:06,358][01070] Avg episode reward: [(0, '26.729')]
+ [2024-09-06 09:14:11,356][01070] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3804.5). Total num frames: 6758400. Throughput: 0: 871.9. Samples: 438622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:14:11,360][01070] Avg episode reward: [(0, '26.781')]
+ [2024-09-06 09:14:12,518][26918] Updated weights for policy 0, policy_version 1652 (0.0031)
+ [2024-09-06 09:14:16,356][01070] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3846.1). Total num frames: 6782976. Throughput: 0: 901.9. Samples: 442050. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:16,361][01070] Avg episode reward: [(0, '24.850')]
+ [2024-09-06 09:14:21,356][01070] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3832.2). Total num frames: 6799360. Throughput: 0: 893.2. Samples: 448282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:14:21,363][01070] Avg episode reward: [(0, '24.715')]
+ [2024-09-06 09:14:23,497][26918] Updated weights for policy 0, policy_version 1662 (0.0015)
+ [2024-09-06 09:14:26,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3804.4). Total num frames: 6815744. Throughput: 0: 869.5. Samples: 452988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:26,363][01070] Avg episode reward: [(0, '24.597')]
+ [2024-09-06 09:14:31,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 6840320. Throughput: 0: 910.8. Samples: 456464. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:14:31,362][01070] Avg episode reward: [(0, '25.878')]
+ [2024-09-06 09:14:32,781][26918] Updated weights for policy 0, policy_version 1672 (0.0018)
+ [2024-09-06 09:14:36,356][01070] Fps is (10 sec: 4505.4, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 6860800. Throughput: 0: 995.2. Samples: 463466. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:36,359][01070] Avg episode reward: [(0, '26.414')]
+ [2024-09-06 09:14:41,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3804.4). Total num frames: 6873088. Throughput: 0: 954.0. Samples: 467874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:41,360][01070] Avg episode reward: [(0, '24.276')]
+ [2024-09-06 09:14:44,286][26918] Updated weights for policy 0, policy_version 1682 (0.0023)
+ [2024-09-06 09:14:46,356][01070] Fps is (10 sec: 3686.5, 60 sec: 3823.2, 300 sec: 3818.3). Total num frames: 6897664. Throughput: 0: 940.6. Samples: 470674. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:46,358][01070] Avg episode reward: [(0, '24.187')]
+ [2024-09-06 09:14:51,356][01070] Fps is (10 sec: 4915.2, 60 sec: 4027.7, 300 sec: 3846.1). Total num frames: 6922240. Throughput: 0: 997.2. Samples: 477840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:14:51,358][01070] Avg episode reward: [(0, '24.909')]
+ [2024-09-06 09:14:53,391][26918] Updated weights for policy 0, policy_version 1692 (0.0019)
+ [2024-09-06 09:14:56,356][01070] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6938624. Throughput: 0: 992.7. Samples: 483292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:14:56,363][01070] Avg episode reward: [(0, '25.437')]
+ [2024-09-06 09:15:01,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 6955008. Throughput: 0: 962.7. Samples: 485372. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-09-06 09:15:01,362][01070] Avg episode reward: [(0, '24.558')]
+ [2024-09-06 09:15:04,835][26918] Updated weights for policy 0, policy_version 1702 (0.0034)
+ [2024-09-06 09:15:06,356][01070] Fps is (10 sec: 3686.5, 60 sec: 3959.4, 300 sec: 3860.0). Total num frames: 6975488. Throughput: 0: 964.8. Samples: 491700. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:15:06,359][01070] Avg episode reward: [(0, '25.932')]
+ [2024-09-06 09:15:11,359][01070] Fps is (10 sec: 4094.7, 60 sec: 3959.3, 300 sec: 3873.8). Total num frames: 6995968. Throughput: 0: 999.5. Samples: 497968. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:15:11,361][01070] Avg episode reward: [(0, '26.598')]
+ [2024-09-06 09:15:16,356][01070] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 7008256. Throughput: 0: 966.7. Samples: 499964. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2024-09-06 09:15:16,360][01070] Avg episode reward: [(0, '26.669')]
+ [2024-09-06 09:15:16,629][26918] Updated weights for policy 0, policy_version 1712 (0.0029)
+ [2024-09-06 09:15:21,356][01070] Fps is (10 sec: 3687.5, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 7032832. Throughput: 0: 934.1. Samples: 505502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-09-06 09:15:21,361][01070] Avg episode reward: [(0, '28.340')]
+ [2024-09-06 09:15:21,369][26905] Saving new best policy, reward=28.340!
+ [2024-09-06 09:15:25,736][26918] Updated weights for policy 0, policy_version 1722 (0.0017)
+ [2024-09-06 09:15:26,356][01070] Fps is (10 sec: 4505.5, 60 sec: 3959.4, 300 sec: 3860.0). Total num frames: 7053312. Throughput: 0: 988.7. Samples: 512366. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-09-06 09:15:26,361][01070] Avg episode reward: [(0, '26.164')]
+ [2024-09-06 09:15:31,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7069696. Throughput: 0: 981.7. Samples: 514852. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2502
+ [2024-09-06 09:15:31,358][01070] Avg episode reward: [(0, '25.977')]
2503
+ [2024-09-06 09:15:36,356][01070] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 7086080. Throughput: 0: 917.2. Samples: 519116. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
2504
+ [2024-09-06 09:15:36,358][01070] Avg episode reward: [(0, '24.462')]
2505
+ [2024-09-06 09:15:37,677][26918] Updated weights for policy 0, policy_version 1732 (0.0020)
2506
+ [2024-09-06 09:15:41,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 7110656. Throughput: 0: 950.7. Samples: 526072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2507
+ [2024-09-06 09:15:41,363][01070] Avg episode reward: [(0, '26.157')]
2508
+ [2024-09-06 09:15:46,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7127040. Throughput: 0: 981.0. Samples: 529516. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2509
+ [2024-09-06 09:15:46,359][01070] Avg episode reward: [(0, '26.246')]
2510
+ [2024-09-06 09:15:48,160][26918] Updated weights for policy 0, policy_version 1742 (0.0027)
2511
+ [2024-09-06 09:15:51,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3818.3). Total num frames: 7143424. Throughput: 0: 937.5. Samples: 533888. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2512
+ [2024-09-06 09:15:51,361][01070] Avg episode reward: [(0, '26.209')]
2513
+ [2024-09-06 09:15:56,355][01070] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3832.2). Total num frames: 7168000. Throughput: 0: 938.6. Samples: 540204. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2514
+ [2024-09-06 09:15:56,361][01070] Avg episode reward: [(0, '25.364')]
2515
+ [2024-09-06 09:15:58,022][26918] Updated weights for policy 0, policy_version 1752 (0.0021)
2516
+ [2024-09-06 09:16:01,362][01070] Fps is (10 sec: 4502.7, 60 sec: 3890.8, 300 sec: 3846.0). Total num frames: 7188480. Throughput: 0: 974.7. Samples: 543832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2517
+ [2024-09-06 09:16:01,368][01070] Avg episode reward: [(0, '25.809')]
2518
+ [2024-09-06 09:16:01,384][26905] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001755_7188480.pth...
2519
+ [2024-09-06 09:16:01,576][26905] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001530_6266880.pth
2520
+ [2024-09-06 09:16:06,356][01070] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7204864. Throughput: 0: 972.9. Samples: 549282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2521
+ [2024-09-06 09:16:06,361][01070] Avg episode reward: [(0, '26.285')]
2522
+ [2024-09-06 09:16:09,912][26918] Updated weights for policy 0, policy_version 1762 (0.0018)
2523
+ [2024-09-06 09:16:11,356][01070] Fps is (10 sec: 3278.9, 60 sec: 3754.9, 300 sec: 3818.3). Total num frames: 7221248. Throughput: 0: 929.5. Samples: 554194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2524
+ [2024-09-06 09:16:11,362][01070] Avg episode reward: [(0, '25.498')]
2525
+ [2024-09-06 09:16:16,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 7245824. Throughput: 0: 949.5. Samples: 557580. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2526
+ [2024-09-06 09:16:16,358][01070] Avg episode reward: [(0, '25.721')]
2527
+ [2024-09-06 09:16:19,014][26918] Updated weights for policy 0, policy_version 1772 (0.0031)
2528
+ [2024-09-06 09:16:21,356][01070] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7262208. Throughput: 0: 997.7. Samples: 564012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2529
+ [2024-09-06 09:16:21,363][01070] Avg episode reward: [(0, '26.446')]
2530
+ [2024-09-06 09:16:26,356][01070] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 7274496. Throughput: 0: 934.9. Samples: 568142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2531
+ [2024-09-06 09:16:26,361][01070] Avg episode reward: [(0, '26.027')]
2532
+ [2024-09-06 09:16:30,972][26918] Updated weights for policy 0, policy_version 1782 (0.0023)
2533
+ [2024-09-06 09:16:31,357][01070] Fps is (10 sec: 3686.2, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 7299072. Throughput: 0: 925.0. Samples: 571140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2534
+ [2024-09-06 09:16:31,361][01070] Avg episode reward: [(0, '27.410')]
2535
+ [2024-09-06 09:16:36,357][01070] Fps is (10 sec: 4914.6, 60 sec: 3959.4, 300 sec: 3846.1). Total num frames: 7323648. Throughput: 0: 976.6. Samples: 577838. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2536
+ [2024-09-06 09:16:36,365][01070] Avg episode reward: [(0, '26.279')]
2537
+ [2024-09-06 09:16:41,356][01070] Fps is (10 sec: 3686.8, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 7335936. Throughput: 0: 945.0. Samples: 582728. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2538
+ [2024-09-06 09:16:41,361][01070] Avg episode reward: [(0, '26.079')]
2539
+ [2024-09-06 09:16:42,668][26918] Updated weights for policy 0, policy_version 1792 (0.0023)
2540
+ [2024-09-06 09:16:46,356][01070] Fps is (10 sec: 2867.6, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 7352320. Throughput: 0: 909.9. Samples: 584772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2541
+ [2024-09-06 09:16:46,362][01070] Avg episode reward: [(0, '24.573')]
2542
+ [2024-09-06 09:16:51,356][01070] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 7376896. Throughput: 0: 937.6. Samples: 591472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2543
+ [2024-09-06 09:16:51,358][01070] Avg episode reward: [(0, '24.745')]
2544
+ [2024-09-06 09:16:51,968][26918] Updated weights for policy 0, policy_version 1802 (0.0024)
2545
+ [2024-09-06 09:16:56,356][01070] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 7393280. Throughput: 0: 963.2. Samples: 597536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2546
+ [2024-09-06 09:16:56,363][01070] Avg episode reward: [(0, '24.638')]
2547
+ [2024-09-06 09:17:01,355][01070] Fps is (10 sec: 3276.9, 60 sec: 3686.8, 300 sec: 3790.5). Total num frames: 7409664. Throughput: 0: 931.6. Samples: 599500. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2548
+ [2024-09-06 09:17:01,361][01070] Avg episode reward: [(0, '23.608')]
2549
+ [2024-09-06 09:17:04,174][26918] Updated weights for policy 0, policy_version 1812 (0.0038)
2550
+ [2024-09-06 09:17:06,356][01070] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 7430144. Throughput: 0: 909.0. Samples: 604916. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2551
+ [2024-09-06 09:17:06,361][01070] Avg episode reward: [(0, '24.117')]
2552
+ [2024-09-06 09:17:11,356][01070] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 7450624. Throughput: 0: 967.2. Samples: 611664. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2553
+ [2024-09-06 09:17:11,360][01070] Avg episode reward: [(0, '25.639')]
2554
+ [2024-09-06 09:17:15,155][26918] Updated weights for policy 0, policy_version 1822 (0.0022)
2555
+ [2024-09-06 09:17:16,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3776.7). Total num frames: 7462912. Throughput: 0: 950.2. Samples: 613900. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2556
+ [2024-09-06 09:17:16,360][01070] Avg episode reward: [(0, '24.625')]
2557
+ [2024-09-06 09:17:21,356][01070] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3762.8). Total num frames: 7483392. Throughput: 0: 902.2. Samples: 618438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2558
+ [2024-09-06 09:17:21,363][01070] Avg episode reward: [(0, '23.896')]
2559
+ [2024-09-06 09:17:26,357][01070] Fps is (10 sec: 3685.9, 60 sec: 3754.6, 300 sec: 3762.7). Total num frames: 7499776. Throughput: 0: 928.5. Samples: 624510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2560
+ [2024-09-06 09:17:26,362][01070] Avg episode reward: [(0, '23.156')]
2561
+ [2024-09-06 09:17:26,513][26918] Updated weights for policy 0, policy_version 1832 (0.0033)
2562
+ [2024-09-06 09:17:28,093][26905] Stopping Batcher_0...
2563
+ [2024-09-06 09:17:28,095][26905] Loop batcher_evt_loop terminating...
2564
+ [2024-09-06 09:17:28,094][01070] Component Batcher_0 stopped!
2565
+ [2024-09-06 09:17:28,112][26905] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001833_7507968.pth...
2566
+ [2024-09-06 09:17:28,214][26918] Weights refcount: 2 0
2567
+ [2024-09-06 09:17:28,227][01070] Component InferenceWorker_p0-w0 stopped!
2568
+ [2024-09-06 09:17:28,231][26918] Stopping InferenceWorker_p0-w0...
2569
+ [2024-09-06 09:17:28,232][26918] Loop inference_proc0-0_evt_loop terminating...
2570
+ [2024-09-06 09:17:28,257][26905] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001642_6725632.pth
2571
+ [2024-09-06 09:17:28,275][26905] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001833_7507968.pth...
2572
+ [2024-09-06 09:17:28,542][01070] Component LearnerWorker_p0 stopped!
2573
+ [2024-09-06 09:17:28,549][26905] Stopping LearnerWorker_p0...
2574
+ [2024-09-06 09:17:28,550][26905] Loop learner_proc0_evt_loop terminating...
2575
+ [2024-09-06 09:17:28,936][26924] Stopping RolloutWorker_w5...
2576
+ [2024-09-06 09:17:28,931][01070] Component RolloutWorker_w5 stopped!
2577
+ [2024-09-06 09:17:28,944][01070] Component RolloutWorker_w2 stopped!
2578
+ [2024-09-06 09:17:28,946][26921] Stopping RolloutWorker_w2...
2579
+ [2024-09-06 09:17:28,954][01070] Component RolloutWorker_w0 stopped!
2580
+ [2024-09-06 09:17:28,958][26919] Stopping RolloutWorker_w0...
2581
+ [2024-09-06 09:17:28,958][26919] Loop rollout_proc0_evt_loop terminating...
2582
+ [2024-09-06 09:17:28,963][26922] Stopping RolloutWorker_w3...
2583
+ [2024-09-06 09:17:28,963][26922] Loop rollout_proc3_evt_loop terminating...
2584
+ [2024-09-06 09:17:28,963][01070] Component RolloutWorker_w3 stopped!
2585
+ [2024-09-06 09:17:28,947][26921] Loop rollout_proc2_evt_loop terminating...
2586
+ [2024-09-06 09:17:28,976][26924] Loop rollout_proc5_evt_loop terminating...
2587
+ [2024-09-06 09:17:28,980][01070] Component RolloutWorker_w4 stopped!
2588
+ [2024-09-06 09:17:28,982][26923] Stopping RolloutWorker_w4...
2589
+ [2024-09-06 09:17:28,983][26923] Loop rollout_proc4_evt_loop terminating...
2590
+ [2024-09-06 09:17:28,999][26920] Stopping RolloutWorker_w1...
2591
+ [2024-09-06 09:17:29,000][26920] Loop rollout_proc1_evt_loop terminating...
2592
+ [2024-09-06 09:17:28,999][01070] Component RolloutWorker_w1 stopped!
2593
+ [2024-09-06 09:17:29,023][01070] Component RolloutWorker_w6 stopped!
2594
+ [2024-09-06 09:17:29,025][26925] Stopping RolloutWorker_w6...
2595
+ [2024-09-06 09:17:29,026][26925] Loop rollout_proc6_evt_loop terminating...
2596
+ [2024-09-06 09:17:29,026][26926] Stopping RolloutWorker_w7...
2597
+ [2024-09-06 09:17:29,034][26926] Loop rollout_proc7_evt_loop terminating...
2598
+ [2024-09-06 09:17:29,035][01070] Component RolloutWorker_w7 stopped!
2599
+ [2024-09-06 09:17:29,037][01070] Waiting for process learner_proc0 to stop...
2600
+ [2024-09-06 09:17:31,285][01070] Waiting for process inference_proc0-0 to join...
2601
+ [2024-09-06 09:17:31,293][01070] Waiting for process rollout_proc0 to join...
2602
+ [2024-09-06 09:17:34,584][01070] Waiting for process rollout_proc1 to join...
2603
+ [2024-09-06 09:17:34,588][01070] Waiting for process rollout_proc2 to join...
2604
+ [2024-09-06 09:17:34,592][01070] Waiting for process rollout_proc3 to join...
2605
+ [2024-09-06 09:17:34,598][01070] Waiting for process rollout_proc4 to join...
2606
+ [2024-09-06 09:17:34,600][01070] Waiting for process rollout_proc5 to join...
2607
+ [2024-09-06 09:17:34,604][01070] Waiting for process rollout_proc6 to join...
2608
+ [2024-09-06 09:17:34,609][01070] Waiting for process rollout_proc7 to join...
2609
+ [2024-09-06 09:17:34,612][01070] Batcher 0 profile tree view:
2610
+ batching: 17.9925, releasing_batches: 0.0162
2611
+ [2024-09-06 09:17:34,615][01070] InferenceWorker_p0-w0 profile tree view:
2612
+ wait_policy: 0.0010
2613
+ wait_policy_total: 248.3928
2614
+ update_model: 5.8404
2615
+ weight_update: 0.0033
2616
+ one_step: 0.0117
2617
+ handle_policy_step: 381.6419
2618
+ deserialize: 9.7064, stack: 1.8896, obs_to_device_normalize: 77.3906, forward: 201.8424, send_messages: 18.9870
2619
+ prepare_outputs: 52.8040
2620
+ to_cpu: 30.3389
2621
+ [2024-09-06 09:17:34,617][01070] Learner 0 profile tree view:
2622
+ misc: 0.0038, prepare_batch: 8.6835
2623
+ train: 47.1106
2624
+ epoch_init: 0.0036, minibatch_init: 0.0064, losses_postprocess: 0.4114, kl_divergence: 0.4119, after_optimizer: 2.1648
2625
+ calculate_losses: 16.5354
2626
+ losses_init: 0.0030, forward_head: 1.0620, bptt_initial: 10.6952, tail: 0.7308, advantages_returns: 0.2059, losses: 2.4252
2627
+ bptt: 1.2532
2628
+ bptt_forward_core: 1.1527
2629
+ update: 27.2086
2630
+ clip: 0.5411
2631
+ [2024-09-06 09:17:34,618][01070] RolloutWorker_w0 profile tree view:
2632
+ wait_for_trajectories: 0.1890, enqueue_policy_requests: 59.3218, env_step: 514.9058, overhead: 8.3718, complete_rollouts: 4.3760
2633
+ save_policy_outputs: 13.4348
2634
+ split_output_tensors: 5.2857
2635
+ [2024-09-06 09:17:34,623][01070] RolloutWorker_w7 profile tree view:
2636
+ wait_for_trajectories: 0.2244, enqueue_policy_requests: 61.7752, env_step: 509.9034, overhead: 8.3385, complete_rollouts: 4.7589
2637
+ save_policy_outputs: 12.9599
2638
+ split_output_tensors: 5.3185
2639
+ [2024-09-06 09:17:34,624][01070] Loop Runner_EvtLoop terminating...
2640
+ [2024-09-06 09:17:34,626][01070] Runner profile tree view:
2641
+ main_loop: 690.9829
2642
+ [2024-09-06 09:17:34,629][01070] Collected {0: 7507968}, FPS: 3621.9
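The shutdown sequence above closes the Sample Factory training run: eight rollout workers, one inference worker, and one learner stopped cleanly after 7,507,968 env frames. For context, a run with this shape is typically launched through Sample Factory's Python API roughly as follows. This is a minimal sketch, not the exact command behind this log; the sf_examples import paths and the parse_vizdoom_cfg wrapper are assumptions modeled on the standard VizDoom example setup.

from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults  # assumed path
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components  # assumed path

def parse_vizdoom_cfg(argv, evaluation=False):
    # Assumed helper mirroring the sf_examples VizDoom entry points:
    # build the base Sample Factory parser, add Doom-specific args, then parse.
    parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    return parse_full_cfg(parser, argv)

register_vizdoom_components()  # make the doom_* env names resolvable
cfg = parse_vizdoom_cfg([
    "--env=doom_health_gathering_supreme",
    "--num_workers=8",                # matches rollout_proc0..rollout_proc7 above
    "--train_for_env_steps=7500000",  # this run stopped at 7,507,968 frames
])
status = run_rl(cfg)  # checkpoints land under train_dir/default_experiment/checkpoint_p0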
+ [2024-09-06 09:17:45,779][01070] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2024-09-06 09:17:45,780][01070] Overriding arg 'num_workers' with value 1 passed from command line
+ [2024-09-06 09:17:45,781][01070] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2024-09-06 09:17:45,782][01070] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2024-09-06 09:17:45,784][01070] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2024-09-06 09:17:45,785][01070] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2024-09-06 09:17:45,786][01070] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+ [2024-09-06 09:17:45,787][01070] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2024-09-06 09:17:45,788][01070] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+ [2024-09-06 09:17:45,789][01070] Adding new argument 'hf_repository'=None that is not in the saved config file!
+ [2024-09-06 09:17:45,790][01070] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2024-09-06 09:17:45,791][01070] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2024-09-06 09:17:45,793][01070] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2024-09-06 09:17:45,794][01070] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2024-09-06 09:17:45,795][01070] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2024-09-06 09:17:45,830][01070] RunningMeanStd input shape: (3, 72, 128)
+ [2024-09-06 09:17:45,831][01070] RunningMeanStd input shape: (1,)
+ [2024-09-06 09:17:45,845][01070] ConvEncoder: input_channels=3
+ [2024-09-06 09:17:45,883][01070] Conv encoder output size: 512
+ [2024-09-06 09:17:45,885][01070] Policy head output size: 512
+ [2024-09-06 09:17:45,905][01070] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001833_7507968.pth...
+ [2024-09-06 09:17:46,321][01070] Num frames 100...
+ [2024-09-06 09:17:46,441][01070] Num frames 200...
+ [2024-09-06 09:17:46,582][01070] Num frames 300...
+ [2024-09-06 09:17:46,701][01070] Num frames 400...
+ [2024-09-06 09:17:46,820][01070] Num frames 500...
+ [2024-09-06 09:17:46,938][01070] Num frames 600...
+ [2024-09-06 09:17:47,062][01070] Num frames 700...
+ [2024-09-06 09:17:47,185][01070] Num frames 800...
+ [2024-09-06 09:17:47,317][01070] Num frames 900...
+ [2024-09-06 09:17:47,445][01070] Avg episode rewards: #0: 19.600, true rewards: #0: 9.600
+ [2024-09-06 09:17:47,447][01070] Avg episode reward: 19.600, avg true_objective: 9.600
+ [2024-09-06 09:17:47,505][01070] Num frames 1000...
+ [2024-09-06 09:17:47,628][01070] Num frames 1100...
+ [2024-09-06 09:17:47,749][01070] Num frames 1200...
+ [2024-09-06 09:17:47,870][01070] Num frames 1300...
+ [2024-09-06 09:17:47,994][01070] Num frames 1400...
+ [2024-09-06 09:17:48,115][01070] Num frames 1500...
+ [2024-09-06 09:17:48,237][01070] Num frames 1600...
+ [2024-09-06 09:17:48,367][01070] Num frames 1700...
+ [2024-09-06 09:17:48,500][01070] Num frames 1800...
+ [2024-09-06 09:17:48,628][01070] Num frames 1900...
+ [2024-09-06 09:17:48,753][01070] Num frames 2000...
+ [2024-09-06 09:17:48,873][01070] Num frames 2100...
+ [2024-09-06 09:17:48,996][01070] Num frames 2200...
+ [2024-09-06 09:17:49,120][01070] Num frames 2300...
+ [2024-09-06 09:17:49,242][01070] Num frames 2400...
+ [2024-09-06 09:17:49,375][01070] Num frames 2500...
+ [2024-09-06 09:17:49,505][01070] Num frames 2600...
+ [2024-09-06 09:17:49,630][01070] Num frames 2700...
+ [2024-09-06 09:17:49,753][01070] Num frames 2800...
+ [2024-09-06 09:17:49,829][01070] Avg episode rewards: #0: 33.080, true rewards: #0: 14.080
+ [2024-09-06 09:17:49,832][01070] Avg episode reward: 33.080, avg true_objective: 14.080
+ [2024-09-06 09:17:49,934][01070] Num frames 2900...
+ [2024-09-06 09:17:50,057][01070] Num frames 3000...
+ [2024-09-06 09:17:50,185][01070] Num frames 3100...
+ [2024-09-06 09:17:50,309][01070] Num frames 3200...
+ [2024-09-06 09:17:50,439][01070] Num frames 3300...
+ [2024-09-06 09:17:50,572][01070] Num frames 3400...
+ [2024-09-06 09:17:50,696][01070] Num frames 3500...
+ [2024-09-06 09:17:50,818][01070] Num frames 3600...
+ [2024-09-06 09:17:50,943][01070] Num frames 3700...
+ [2024-09-06 09:17:51,066][01070] Num frames 3800...
+ [2024-09-06 09:17:51,193][01070] Num frames 3900...
+ [2024-09-06 09:17:51,319][01070] Num frames 4000...
+ [2024-09-06 09:17:51,454][01070] Num frames 4100...
+ [2024-09-06 09:17:51,590][01070] Num frames 4200...
+ [2024-09-06 09:17:51,712][01070] Num frames 4300...
+ [2024-09-06 09:17:51,829][01070] Avg episode rewards: #0: 34.170, true rewards: #0: 14.503
+ [2024-09-06 09:17:51,831][01070] Avg episode reward: 34.170, avg true_objective: 14.503
+ [2024-09-06 09:17:51,892][01070] Num frames 4400...
+ [2024-09-06 09:17:52,013][01070] Num frames 4500...
+ [2024-09-06 09:17:52,134][01070] Num frames 4600...
+ [2024-09-06 09:17:52,254][01070] Num frames 4700...
+ [2024-09-06 09:17:52,392][01070] Num frames 4800...
+ [2024-09-06 09:17:52,519][01070] Num frames 4900...
+ [2024-09-06 09:17:52,643][01070] Num frames 5000...
+ [2024-09-06 09:17:52,767][01070] Num frames 5100...
+ [2024-09-06 09:17:52,887][01070] Num frames 5200...
+ [2024-09-06 09:17:53,047][01070] Avg episode rewards: #0: 31.220, true rewards: #0: 13.220
+ [2024-09-06 09:17:53,050][01070] Avg episode reward: 31.220, avg true_objective: 13.220
+ [2024-09-06 09:17:53,067][01070] Num frames 5300...
+ [2024-09-06 09:17:53,188][01070] Num frames 5400...
+ [2024-09-06 09:17:53,311][01070] Num frames 5500...
+ [2024-09-06 09:17:53,439][01070] Num frames 5600...
+ [2024-09-06 09:17:53,569][01070] Num frames 5700...
+ [2024-09-06 09:17:53,688][01070] Num frames 5800...
+ [2024-09-06 09:17:53,820][01070] Num frames 5900...
+ [2024-09-06 09:17:53,942][01070] Num frames 6000...
+ [2024-09-06 09:17:54,064][01070] Num frames 6100...
+ [2024-09-06 09:17:54,186][01070] Num frames 6200...
+ [2024-09-06 09:17:54,305][01070] Num frames 6300...
+ [2024-09-06 09:17:54,429][01070] Num frames 6400...
+ [2024-09-06 09:17:54,564][01070] Num frames 6500...
+ [2024-09-06 09:17:54,684][01070] Num frames 6600...
+ [2024-09-06 09:17:54,803][01070] Num frames 6700...
+ [2024-09-06 09:17:54,921][01070] Num frames 6800...
+ [2024-09-06 09:17:55,042][01070] Num frames 6900...
+ [2024-09-06 09:17:55,161][01070] Num frames 7000...
+ [2024-09-06 09:17:55,283][01070] Num frames 7100...
+ [2024-09-06 09:17:55,406][01070] Num frames 7200...
+ [2024-09-06 09:17:55,568][01070] Avg episode rewards: #0: 36.544, true rewards: #0: 14.544
+ [2024-09-06 09:17:55,570][01070] Avg episode reward: 36.544, avg true_objective: 14.544
+ [2024-09-06 09:17:55,620][01070] Num frames 7300...
+ [2024-09-06 09:17:55,785][01070] Num frames 7400...
+ [2024-09-06 09:17:55,948][01070] Num frames 7500...
+ [2024-09-06 09:17:56,111][01070] Num frames 7600...
+ [2024-09-06 09:17:56,313][01070] Avg episode rewards: #0: 31.980, true rewards: #0: 12.813
+ [2024-09-06 09:17:56,316][01070] Avg episode reward: 31.980, avg true_objective: 12.813
+ [2024-09-06 09:17:56,341][01070] Num frames 7700...
+ [2024-09-06 09:17:56,516][01070] Num frames 7800...
+ [2024-09-06 09:17:56,682][01070] Num frames 7900...
+ [2024-09-06 09:17:56,852][01070] Num frames 8000...
+ [2024-09-06 09:17:57,025][01070] Num frames 8100...
+ [2024-09-06 09:17:57,187][01070] Num frames 8200...
+ [2024-09-06 09:17:57,363][01070] Num frames 8300...
+ [2024-09-06 09:17:57,543][01070] Num frames 8400...
+ [2024-09-06 09:17:57,725][01070] Num frames 8500...
+ [2024-09-06 09:17:57,897][01070] Num frames 8600...
+ [2024-09-06 09:17:58,074][01070] Num frames 8700...
+ [2024-09-06 09:17:58,236][01070] Avg episode rewards: #0: 30.823, true rewards: #0: 12.537
+ [2024-09-06 09:17:58,239][01070] Avg episode reward: 30.823, avg true_objective: 12.537
+ [2024-09-06 09:17:58,269][01070] Num frames 8800...
+ [2024-09-06 09:17:58,395][01070] Num frames 8900...
+ [2024-09-06 09:17:58,527][01070] Num frames 9000...
+ [2024-09-06 09:17:58,658][01070] Num frames 9100...
+ [2024-09-06 09:17:58,782][01070] Num frames 9200...
+ [2024-09-06 09:17:58,903][01070] Num frames 9300...
+ [2024-09-06 09:17:59,024][01070] Num frames 9400...
+ [2024-09-06 09:17:59,146][01070] Num frames 9500...
+ [2024-09-06 09:17:59,272][01070] Num frames 9600...
+ [2024-09-06 09:17:59,395][01070] Num frames 9700...
+ [2024-09-06 09:17:59,524][01070] Num frames 9800...
+ [2024-09-06 09:17:59,647][01070] Num frames 9900...
+ [2024-09-06 09:17:59,745][01070] Avg episode rewards: #0: 30.285, true rewards: #0: 12.410
+ [2024-09-06 09:17:59,746][01070] Avg episode reward: 30.285, avg true_objective: 12.410
+ [2024-09-06 09:17:59,839][01070] Num frames 10000...
+ [2024-09-06 09:17:59,962][01070] Num frames 10100...
+ [2024-09-06 09:18:00,084][01070] Num frames 10200...
+ [2024-09-06 09:18:00,206][01070] Num frames 10300...
+ [2024-09-06 09:18:00,365][01070] Avg episode rewards: #0: 27.871, true rewards: #0: 11.538
+ [2024-09-06 09:18:00,366][01070] Avg episode reward: 27.871, avg true_objective: 11.538
+ [2024-09-06 09:18:00,389][01070] Num frames 10400...
+ [2024-09-06 09:18:00,522][01070] Num frames 10500...
+ [2024-09-06 09:18:00,660][01070] Num frames 10600...
+ [2024-09-06 09:18:00,798][01070] Num frames 10700...
+ [2024-09-06 09:18:00,918][01070] Num frames 10800...
+ [2024-09-06 09:18:01,038][01070] Num frames 10900...
+ [2024-09-06 09:18:01,163][01070] Num frames 11000...
+ [2024-09-06 09:18:01,284][01070] Num frames 11100...
+ [2024-09-06 09:18:01,405][01070] Num frames 11200...
+ [2024-09-06 09:18:01,533][01070] Num frames 11300...
+ [2024-09-06 09:18:01,658][01070] Num frames 11400...
+ [2024-09-06 09:18:01,790][01070] Num frames 11500...
+ [2024-09-06 09:18:01,915][01070] Num frames 11600...
+ [2024-09-06 09:18:02,004][01070] Avg episode rewards: #0: 27.925, true rewards: #0: 11.625
+ [2024-09-06 09:18:02,006][01070] Avg episode reward: 27.925, avg true_objective: 11.625
+ [2024-09-06 09:19:12,012][01070] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2024-09-06 09:19:36,347][01070] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2024-09-06 09:19:36,349][01070] Overriding arg 'num_workers' with value 1 passed from command line
+ [2024-09-06 09:19:36,350][01070] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2024-09-06 09:19:36,351][01070] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2024-09-06 09:19:36,355][01070] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2024-09-06 09:19:36,356][01070] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2024-09-06 09:19:36,359][01070] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+ [2024-09-06 09:19:36,359][01070] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2024-09-06 09:19:36,361][01070] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+ [2024-09-06 09:19:36,362][01070] Adding new argument 'hf_repository'='Re-Re/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+ [2024-09-06 09:19:36,364][01070] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2024-09-06 09:19:36,365][01070] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2024-09-06 09:19:36,367][01070] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2024-09-06 09:19:36,368][01070] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2024-09-06 09:19:36,369][01070] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2024-09-06 09:19:36,399][01070] RunningMeanStd input shape: (3, 72, 128)
+ [2024-09-06 09:19:36,400][01070] RunningMeanStd input shape: (1,)
+ [2024-09-06 09:19:36,414][01070] ConvEncoder: input_channels=3
+ [2024-09-06 09:19:36,451][01070] Conv encoder output size: 512
+ [2024-09-06 09:19:36,452][01070] Policy head output size: 512
+ [2024-09-06 09:19:36,477][01070] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001833_7507968.pth...
+ [2024-09-06 09:19:36,955][01070] Num frames 100...
+ [2024-09-06 09:19:37,079][01070] Num frames 200...
+ [2024-09-06 09:19:37,209][01070] Num frames 300...
+ [2024-09-06 09:19:37,385][01070] Num frames 400...
+ [2024-09-06 09:19:37,559][01070] Num frames 500...
+ [2024-09-06 09:19:37,729][01070] Num frames 600...
+ [2024-09-06 09:19:37,920][01070] Num frames 700...
+ [2024-09-06 09:19:38,094][01070] Num frames 800...
+ [2024-09-06 09:19:38,257][01070] Num frames 900...
+ [2024-09-06 09:19:38,425][01070] Num frames 1000...
+ [2024-09-06 09:19:38,607][01070] Num frames 1100...
+ [2024-09-06 09:19:38,779][01070] Num frames 1200...
+ [2024-09-06 09:19:38,970][01070] Num frames 1300...
+ [2024-09-06 09:19:39,151][01070] Num frames 1400...
+ [2024-09-06 09:19:39,333][01070] Num frames 1500...
+ [2024-09-06 09:19:39,514][01070] Num frames 1600...
+ [2024-09-06 09:19:39,697][01070] Num frames 1700...
+ [2024-09-06 09:19:39,863][01070] Num frames 1800...
+ [2024-09-06 09:19:39,995][01070] Num frames 1900...
+ [2024-09-06 09:19:40,121][01070] Num frames 2000...
+ [2024-09-06 09:19:40,238][01070] Avg episode rewards: #0: 51.479, true rewards: #0: 20.480
+ [2024-09-06 09:19:40,240][01070] Avg episode reward: 51.479, avg true_objective: 20.480
+ [2024-09-06 09:19:40,307][01070] Num frames 2100...
+ [2024-09-06 09:19:40,432][01070] Num frames 2200...
+ [2024-09-06 09:19:40,565][01070] Num frames 2300...
+ [2024-09-06 09:19:40,689][01070] Num frames 2400...
+ [2024-09-06 09:19:40,851][01070] Avg episode rewards: #0: 28.429, true rewards: #0: 12.430
+ [2024-09-06 09:19:40,853][01070] Avg episode reward: 28.429, avg true_objective: 12.430
+ [2024-09-06 09:19:40,874][01070] Num frames 2500...
+ [2024-09-06 09:19:41,006][01070] Num frames 2600...
+ [2024-09-06 09:19:41,130][01070] Num frames 2700...
+ [2024-09-06 09:19:41,253][01070] Num frames 2800...
+ [2024-09-06 09:19:41,420][01070] Avg episode rewards: #0: 21.636, true rewards: #0: 9.637
+ [2024-09-06 09:19:41,422][01070] Avg episode reward: 21.636, avg true_objective: 9.637
+ [2024-09-06 09:19:41,438][01070] Num frames 2900...
+ [2024-09-06 09:19:41,568][01070] Num frames 3000...
+ [2024-09-06 09:19:41,692][01070] Num frames 3100...
+ [2024-09-06 09:19:41,815][01070] Num frames 3200...
+ [2024-09-06 09:19:41,942][01070] Num frames 3300...
+ [2024-09-06 09:19:42,072][01070] Num frames 3400...
+ [2024-09-06 09:19:42,225][01070] Avg episode rewards: #0: 19.450, true rewards: #0: 8.700
+ [2024-09-06 09:19:42,228][01070] Avg episode reward: 19.450, avg true_objective: 8.700
+ [2024-09-06 09:19:42,256][01070] Num frames 3500...
+ [2024-09-06 09:19:42,378][01070] Num frames 3600...
+ [2024-09-06 09:19:42,511][01070] Num frames 3700...
+ [2024-09-06 09:19:42,635][01070] Num frames 3800...
+ [2024-09-06 09:19:42,758][01070] Num frames 3900...
+ [2024-09-06 09:19:42,882][01070] Num frames 4000...
+ [2024-09-06 09:19:43,006][01070] Num frames 4100...
+ [2024-09-06 09:19:43,138][01070] Num frames 4200...
+ [2024-09-06 09:19:43,263][01070] Num frames 4300...
+ [2024-09-06 09:19:43,384][01070] Num frames 4400...
+ [2024-09-06 09:19:43,510][01070] Num frames 4500...
+ [2024-09-06 09:19:43,637][01070] Num frames 4600...
+ [2024-09-06 09:19:43,807][01070] Avg episode rewards: #0: 20.992, true rewards: #0: 9.392
+ [2024-09-06 09:19:43,808][01070] Avg episode reward: 20.992, avg true_objective: 9.392
+ [2024-09-06 09:19:43,817][01070] Num frames 4700...
+ [2024-09-06 09:19:43,941][01070] Num frames 4800...
+ [2024-09-06 09:19:44,073][01070] Num frames 4900...
+ [2024-09-06 09:19:44,194][01070] Num frames 5000...
+ [2024-09-06 09:19:44,313][01070] Num frames 5100...
+ [2024-09-06 09:19:44,458][01070] Avg episode rewards: #0: 19.460, true rewards: #0: 8.627
+ [2024-09-06 09:19:44,461][01070] Avg episode reward: 19.460, avg true_objective: 8.627
+ [2024-09-06 09:19:44,499][01070] Num frames 5200...
+ [2024-09-06 09:19:44,619][01070] Num frames 5300...
+ [2024-09-06 09:19:44,747][01070] Num frames 5400...
+ [2024-09-06 09:19:44,868][01070] Num frames 5500...
+ [2024-09-06 09:19:44,985][01070] Num frames 5600...
+ [2024-09-06 09:19:45,160][01070] Avg episode rewards: #0: 18.424, true rewards: #0: 8.139
+ [2024-09-06 09:19:45,162][01070] Avg episode reward: 18.424, avg true_objective: 8.139
+ [2024-09-06 09:19:45,169][01070] Num frames 5700...
+ [2024-09-06 09:19:45,291][01070] Num frames 5800...
+ [2024-09-06 09:19:45,409][01070] Num frames 5900...
+ [2024-09-06 09:19:45,542][01070] Num frames 6000...
+ [2024-09-06 09:19:45,659][01070] Num frames 6100...
+ [2024-09-06 09:19:45,778][01070] Num frames 6200...
+ [2024-09-06 09:19:45,899][01070] Num frames 6300...
+ [2024-09-06 09:19:46,016][01070] Num frames 6400...
+ [2024-09-06 09:19:46,141][01070] Num frames 6500...
+ [2024-09-06 09:19:46,260][01070] Num frames 6600...
+ [2024-09-06 09:19:46,384][01070] Num frames 6700...
+ [2024-09-06 09:19:46,514][01070] Num frames 6800...
+ [2024-09-06 09:19:46,639][01070] Num frames 6900...
+ [2024-09-06 09:19:46,763][01070] Num frames 7000...
+ [2024-09-06 09:19:46,887][01070] Num frames 7100...
+ [2024-09-06 09:19:47,010][01070] Num frames 7200...
+ [2024-09-06 09:19:47,139][01070] Num frames 7300...
+ [2024-09-06 09:19:47,260][01070] Num frames 7400...
+ [2024-09-06 09:19:47,388][01070] Num frames 7500...
+ [2024-09-06 09:19:47,517][01070] Num frames 7600...
+ [2024-09-06 09:19:47,639][01070] Num frames 7700...
+ [2024-09-06 09:19:47,813][01070] Avg episode rewards: #0: 23.371, true rewards: #0: 9.746
+ [2024-09-06 09:19:47,815][01070] Avg episode reward: 23.371, avg true_objective: 9.746
+ [2024-09-06 09:19:47,821][01070] Num frames 7800...
+ [2024-09-06 09:19:47,942][01070] Num frames 7900...
+ [2024-09-06 09:19:48,066][01070] Num frames 8000...
+ [2024-09-06 09:19:48,202][01070] Num frames 8100...
+ [2024-09-06 09:19:48,326][01070] Num frames 8200...
+ [2024-09-06 09:19:48,450][01070] Num frames 8300...
+ [2024-09-06 09:19:48,583][01070] Num frames 8400...
+ [2024-09-06 09:19:48,708][01070] Num frames 8500...
+ [2024-09-06 09:19:48,838][01070] Num frames 8600...
+ [2024-09-06 09:19:48,963][01070] Num frames 8700...
+ [2024-09-06 09:19:49,088][01070] Num frames 8800...
+ [2024-09-06 09:19:49,227][01070] Num frames 8900...
+ [2024-09-06 09:19:49,353][01070] Num frames 9000...
+ [2024-09-06 09:19:49,485][01070] Num frames 9100...
+ [2024-09-06 09:19:49,612][01070] Num frames 9200...
+ [2024-09-06 09:19:49,738][01070] Num frames 9300...
+ [2024-09-06 09:19:49,893][01070] Num frames 9400...
+ [2024-09-06 09:19:50,071][01070] Num frames 9500...
+ [2024-09-06 09:19:50,251][01070] Num frames 9600...
+ [2024-09-06 09:19:50,422][01070] Num frames 9700...
+ [2024-09-06 09:19:50,629][01070] Avg episode rewards: #0: 26.986, true rewards: #0: 10.876
+ [2024-09-06 09:19:50,631][01070] Avg episode reward: 26.986, avg true_objective: 10.876
+ [2024-09-06 09:19:50,658][01070] Num frames 9800...
+ [2024-09-06 09:19:50,826][01070] Num frames 9900...
+ [2024-09-06 09:19:50,989][01070] Num frames 10000...
+ [2024-09-06 09:19:51,162][01070] Num frames 10100...
+ [2024-09-06 09:19:51,348][01070] Num frames 10200...
+ [2024-09-06 09:19:51,531][01070] Num frames 10300...
+ [2024-09-06 09:19:51,711][01070] Num frames 10400...
+ [2024-09-06 09:19:51,893][01070] Num frames 10500...
+ [2024-09-06 09:19:52,069][01070] Num frames 10600...
+ [2024-09-06 09:19:52,177][01070] Avg episode rewards: #0: 26.327, true rewards: #0: 10.627
+ [2024-09-06 09:19:52,179][01070] Avg episode reward: 26.327, avg true_objective: 10.627
+ [2024-09-06 09:20:57,694][01070] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
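This second evaluation pass is the push-to-hub run: the 'Adding new argument ...' lines record the command-line overrides, the checkpoint at policy_version 1833 is replayed for 10 episodes, and replay.mp4 plus the experiment folder are uploaded to Re-Re/rl_course_vizdoom_health_gathering_supreme. A minimal sketch of such a pass follows, reusing the assumed parse_vizdoom_cfg helper and registration call from the training sketch earlier; the flags are taken directly from the log lines above, while the env name is inferred from the repository name.

from sample_factory.enjoy import enjoy

# Flags mirror the "Adding new argument ..." overrides logged above.
# parse_vizdoom_cfg is the assumed helper sketched after the training run.
cfg = parse_vizdoom_cfg(
    [
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--no_render",
        "--save_video",
        "--max_num_episodes=10",
        "--max_num_frames=100000",
        "--push_to_hub",
        "--hf_repository=Re-Re/rl_course_vizdoom_health_gathering_supreme",
    ],
    evaluation=True,
)
status = enjoy(cfg)  # loads checkpoint_000001833_7507968.pth, saves replay.mp4, pushes the folder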