[2023-07-04 15:18:01,007][00468] Saving configuration to /content/train_dir/default_experiment/cfg.json...
[2023-07-04 15:18:01,012][00468] Rollout worker 0 uses device cpu
[2023-07-04 15:18:01,022][00468] Rollout worker 1 uses device cpu
[2023-07-04 15:18:01,023][00468] Rollout worker 2 uses device cpu
[2023-07-04 15:18:01,025][00468] Rollout worker 3 uses device cpu
[2023-07-04 15:18:01,030][00468] Rollout worker 4 uses device cpu
[2023-07-04 15:18:01,031][00468] Rollout worker 5 uses device cpu
[2023-07-04 15:18:01,036][00468] Rollout worker 6 uses device cpu
[2023-07-04 15:18:01,037][00468] Rollout worker 7 uses device cpu
[2023-07-04 15:18:01,231][00468] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-04 15:18:01,237][00468] InferenceWorker_p0-w0: min num requests: 2
[2023-07-04 15:18:01,287][00468] Starting all processes...
[2023-07-04 15:18:01,289][00468] Starting process learner_proc0
[2023-07-04 15:18:01,295][00468] EvtLoop [Runner_EvtLoop, process=main process 468] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=()
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start
    self._start_processes()
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes
    p.start()
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start
    self._process.start()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'TLSBuffer' object
[2023-07-04 15:18:01,303][00468] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop
[2023-07-04 15:18:01,304][00468] Uncaught exception in Runner evt loop
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run
    evt_loop_status = self.event_loop.exec()
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 403, in exec
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 399, in exec
    while self._loop_iteration():
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration
    self._process_signal(s)
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start
    self._start_processes()
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes
    p.start()
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start
    self._process.start()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'TLSBuffer' object
[2023-07-04 15:18:01,307][00468] Runner profile tree view:
main_loop: 0.0203
[2023-07-04 15:18:01,310][00468] Collected {}, FPS: 0.0
[2023-07-04 15:18:20,039][00468] Environment doom_basic already registered, overwriting...
[2023-07-04 15:18:20,041][00468] Environment doom_two_colors_easy already registered, overwriting...
[2023-07-04 15:18:20,043][00468] Environment doom_two_colors_hard already registered, overwriting...
[2023-07-04 15:18:20,044][00468] Environment doom_dm already registered, overwriting...
[2023-07-04 15:18:20,046][00468] Environment doom_dwango5 already registered, overwriting...
[2023-07-04 15:18:20,047][00468] Environment doom_my_way_home_flat_actions already registered, overwriting...
[2023-07-04 15:18:20,049][00468] Environment doom_defend_the_center_flat_actions already registered, overwriting...
[2023-07-04 15:18:20,050][00468] Environment doom_my_way_home already registered, overwriting...
[2023-07-04 15:18:20,051][00468] Environment doom_deadly_corridor already registered, overwriting...
[2023-07-04 15:18:20,052][00468] Environment doom_defend_the_center already registered, overwriting...
[2023-07-04 15:18:20,053][00468] Environment doom_defend_the_line already registered, overwriting...
[2023-07-04 15:18:20,054][00468] Environment doom_health_gathering already registered, overwriting...
[2023-07-04 15:18:20,056][00468] Environment doom_health_gathering_supreme already registered, overwriting...
[2023-07-04 15:18:20,057][00468] Environment doom_battle already registered, overwriting...
[2023-07-04 15:18:20,058][00468] Environment doom_battle2 already registered, overwriting...
[2023-07-04 15:18:20,059][00468] Environment doom_duel_bots already registered, overwriting...
[2023-07-04 15:18:20,060][00468] Environment doom_deathmatch_bots already registered, overwriting...
[2023-07-04 15:18:20,061][00468] Environment doom_duel already registered, overwriting...
[2023-07-04 15:18:20,062][00468] Environment doom_deathmatch_full already registered, overwriting...
[2023-07-04 15:18:20,064][00468] Environment doom_benchmark already registered, overwriting...
[2023-07-04 15:18:20,065][00468] register_encoder_factory:
[2023-07-04 15:18:20,087][00468] Loading existing experiment configuration from /content/train_dir/default_experiment/cfg.json
[2023-07-04 15:18:20,099][00468] Experiment dir /content/train_dir/default_experiment already exists!
[2023-07-04 15:18:20,100][00468] Resuming existing experiment from /content/train_dir/default_experiment...
[2023-07-04 15:18:20,101][00468] Weights and Biases integration disabled [2023-07-04 15:18:20,104][00468] Environment var CUDA_VISIBLE_DEVICES is 0 [2023-07-04 15:18:21,502][00468] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=8 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=4000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository train_script=.usr.local.lib.python3.10.dist-packages.ipykernel_launcher [2023-07-04 15:18:21,505][00468] Saving configuration to /content/train_dir/default_experiment/cfg.json... 
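The configuration dump above also records the exact CLI arguments the run was started with (`command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000`, launched from `ipykernel`). A rough standalone equivalent is sketched below; the import paths and helper names are assumptions based on the Sample Factory 2.x VizDoom examples, not something this log confirms.

```python
# Hypothetical standalone launcher reproducing the logged command_line.
# The sf_examples.vizdoom import paths are assumptions; adjust to the installed version.
import sys

from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults
from sf_examples.vizdoom.doom.doom_utils import register_vizdoom_envs  # assumed helper


def main() -> int:
    register_vizdoom_envs()  # registers doom_health_gathering_supreme and friends
    argv = [
        "--env=doom_health_gathering_supreme",
        "--num_workers=8",
        "--num_envs_per_worker=4",
        "--train_for_env_steps=4000000",
    ]
    parser, _ = parse_sf_args(argv=argv, evaluation=False)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    cfg = parse_full_cfg(parser, argv)
    return run_rl(cfg)


if __name__ == "__main__":
    sys.exit(main())
```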
[2023-07-04 15:18:21,507][00468] Rollout worker 0 uses device cpu [2023-07-04 15:18:21,510][00468] Rollout worker 1 uses device cpu [2023-07-04 15:18:21,512][00468] Rollout worker 2 uses device cpu [2023-07-04 15:18:21,514][00468] Rollout worker 3 uses device cpu [2023-07-04 15:18:21,515][00468] Rollout worker 4 uses device cpu [2023-07-04 15:18:21,517][00468] Rollout worker 5 uses device cpu [2023-07-04 15:18:21,518][00468] Rollout worker 6 uses device cpu [2023-07-04 15:18:21,519][00468] Rollout worker 7 uses device cpu [2023-07-04 15:18:21,647][00468] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:18:21,649][00468] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:18:21,683][00468] Starting all processes... [2023-07-04 15:18:21,684][00468] Starting process learner_proc0 [2023-07-04 15:18:21,690][00468] EvtLoop [Runner_EvtLoop, process=main process 468] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=() Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:18:21,692][00468] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop [2023-07-04 15:18:21,696][00468] Uncaught exception in Runner evt loop Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run evt_loop_status = self.event_loop.exec() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 403, in exec raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 399, in exec while self._loop_iteration(): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration self._process_signal(s) File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File 
"/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:18:21,698][00468] Runner profile tree view: main_loop: 0.0147 [2023-07-04 15:18:21,699][00468] Collected {}, FPS: 0.0 [2023-07-04 15:18:34,241][00468] Environment doom_basic already registered, overwriting... [2023-07-04 15:18:34,244][00468] Environment doom_two_colors_easy already registered, overwriting... [2023-07-04 15:18:34,245][00468] Environment doom_two_colors_hard already registered, overwriting... [2023-07-04 15:18:34,251][00468] Environment doom_dm already registered, overwriting... [2023-07-04 15:18:34,252][00468] Environment doom_dwango5 already registered, overwriting... [2023-07-04 15:18:34,253][00468] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-07-04 15:18:34,255][00468] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-07-04 15:18:34,258][00468] Environment doom_my_way_home already registered, overwriting... [2023-07-04 15:18:34,259][00468] Environment doom_deadly_corridor already registered, overwriting... [2023-07-04 15:18:34,260][00468] Environment doom_defend_the_center already registered, overwriting... [2023-07-04 15:18:34,261][00468] Environment doom_defend_the_line already registered, overwriting... [2023-07-04 15:18:34,263][00468] Environment doom_health_gathering already registered, overwriting... [2023-07-04 15:18:34,264][00468] Environment doom_health_gathering_supreme already registered, overwriting... [2023-07-04 15:18:34,265][00468] Environment doom_battle already registered, overwriting... [2023-07-04 15:18:34,266][00468] Environment doom_battle2 already registered, overwriting... [2023-07-04 15:18:34,267][00468] Environment doom_duel_bots already registered, overwriting... [2023-07-04 15:18:34,268][00468] Environment doom_deathmatch_bots already registered, overwriting... [2023-07-04 15:18:34,270][00468] Environment doom_duel already registered, overwriting... [2023-07-04 15:18:34,271][00468] Environment doom_deathmatch_full already registered, overwriting... [2023-07-04 15:18:34,272][00468] Environment doom_benchmark already registered, overwriting... [2023-07-04 15:18:34,273][00468] register_encoder_factory: [2023-07-04 15:18:34,300][00468] Loading existing experiment configuration from /content/train_dir/default_experiment/cfg.json [2023-07-04 15:18:34,305][00468] Experiment dir /content/train_dir/default_experiment already exists! [2023-07-04 15:18:34,309][00468] Resuming existing experiment from /content/train_dir/default_experiment... 
[2023-07-04 15:18:34,310][00468] Weights and Biases integration disabled [2023-07-04 15:18:34,313][00468] Environment var CUDA_VISIBLE_DEVICES is 0 [2023-07-04 15:18:35,720][00468] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=8 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=4000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository train_script=.usr.local.lib.python3.10.dist-packages.ipykernel_launcher [2023-07-04 15:18:35,721][00468] Saving configuration to /content/train_dir/default_experiment/cfg.json... 
[2023-07-04 15:18:35,728][00468] Rollout worker 0 uses device cpu [2023-07-04 15:18:35,729][00468] Rollout worker 1 uses device cpu [2023-07-04 15:18:35,731][00468] Rollout worker 2 uses device cpu [2023-07-04 15:18:35,732][00468] Rollout worker 3 uses device cpu [2023-07-04 15:18:35,734][00468] Rollout worker 4 uses device cpu [2023-07-04 15:18:35,736][00468] Rollout worker 5 uses device cpu [2023-07-04 15:18:35,737][00468] Rollout worker 6 uses device cpu [2023-07-04 15:18:35,739][00468] Rollout worker 7 uses device cpu [2023-07-04 15:18:35,866][00468] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:18:35,868][00468] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:18:35,898][00468] Starting all processes... [2023-07-04 15:18:35,901][00468] Starting process learner_proc0 [2023-07-04 15:18:35,905][00468] EvtLoop [Runner_EvtLoop, process=main process 468] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=() Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:18:35,907][00468] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop [2023-07-04 15:18:35,910][00468] Uncaught exception in Runner evt loop Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run evt_loop_status = self.event_loop.exec() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 403, in exec raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 399, in exec while self._loop_iteration(): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration self._process_signal(s) File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File 
"/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:18:35,915][00468] Runner profile tree view: main_loop: 0.0170 [2023-07-04 15:18:35,916][00468] Collected {}, FPS: 0.0 [2023-07-04 15:18:39,760][00468] Environment doom_basic already registered, overwriting... [2023-07-04 15:18:39,763][00468] Environment doom_two_colors_easy already registered, overwriting... [2023-07-04 15:18:39,765][00468] Environment doom_two_colors_hard already registered, overwriting... [2023-07-04 15:18:39,769][00468] Environment doom_dm already registered, overwriting... [2023-07-04 15:18:39,770][00468] Environment doom_dwango5 already registered, overwriting... [2023-07-04 15:18:39,772][00468] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-07-04 15:18:39,773][00468] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-07-04 15:18:39,774][00468] Environment doom_my_way_home already registered, overwriting... [2023-07-04 15:18:39,775][00468] Environment doom_deadly_corridor already registered, overwriting... [2023-07-04 15:18:39,776][00468] Environment doom_defend_the_center already registered, overwriting... [2023-07-04 15:18:39,778][00468] Environment doom_defend_the_line already registered, overwriting... [2023-07-04 15:18:39,778][00468] Environment doom_health_gathering already registered, overwriting... [2023-07-04 15:18:39,779][00468] Environment doom_health_gathering_supreme already registered, overwriting... [2023-07-04 15:18:39,780][00468] Environment doom_battle already registered, overwriting... [2023-07-04 15:18:39,781][00468] Environment doom_battle2 already registered, overwriting... [2023-07-04 15:18:39,782][00468] Environment doom_duel_bots already registered, overwriting... [2023-07-04 15:18:39,783][00468] Environment doom_deathmatch_bots already registered, overwriting... [2023-07-04 15:18:39,785][00468] Environment doom_duel already registered, overwriting... [2023-07-04 15:18:39,786][00468] Environment doom_deathmatch_full already registered, overwriting... [2023-07-04 15:18:39,787][00468] Environment doom_benchmark already registered, overwriting... [2023-07-04 15:18:39,788][00468] register_encoder_factory: [2023-07-04 15:18:39,818][00468] Loading existing experiment configuration from /content/train_dir/default_experiment/cfg.json [2023-07-04 15:18:39,828][00468] Experiment dir /content/train_dir/default_experiment already exists! [2023-07-04 15:18:39,835][00468] Resuming existing experiment from /content/train_dir/default_experiment... 
[2023-07-04 15:18:39,837][00468] Weights and Biases integration disabled [2023-07-04 15:18:39,841][00468] Environment var CUDA_VISIBLE_DEVICES is 0 [2023-07-04 15:18:41,653][00468] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=8 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=4000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository train_script=.usr.local.lib.python3.10.dist-packages.ipykernel_launcher [2023-07-04 15:18:41,654][00468] Saving configuration to /content/train_dir/default_experiment/cfg.json... 
[2023-07-04 15:18:41,661][00468] Rollout worker 0 uses device cpu [2023-07-04 15:18:41,662][00468] Rollout worker 1 uses device cpu [2023-07-04 15:18:41,664][00468] Rollout worker 2 uses device cpu [2023-07-04 15:18:41,666][00468] Rollout worker 3 uses device cpu [2023-07-04 15:18:41,671][00468] Rollout worker 4 uses device cpu [2023-07-04 15:18:41,673][00468] Rollout worker 5 uses device cpu [2023-07-04 15:18:41,675][00468] Rollout worker 6 uses device cpu [2023-07-04 15:18:41,676][00468] Rollout worker 7 uses device cpu [2023-07-04 15:18:41,799][00468] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:18:41,801][00468] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:18:41,830][00468] Starting all processes... [2023-07-04 15:18:41,831][00468] Starting process learner_proc0 [2023-07-04 15:18:41,838][00468] EvtLoop [Runner_EvtLoop, process=main process 468] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=() Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:18:41,840][00468] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop [2023-07-04 15:18:41,842][00468] Uncaught exception in Runner evt loop Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run evt_loop_status = self.event_loop.exec() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 403, in exec raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 399, in exec while self._loop_iteration(): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration self._process_signal(s) File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File 
"/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:18:41,845][00468] Runner profile tree view: main_loop: 0.0146 [2023-07-04 15:18:41,847][00468] Collected {}, FPS: 0.0 [2023-07-04 15:19:10,892][00468] Environment doom_basic already registered, overwriting... [2023-07-04 15:19:10,905][00468] Environment doom_two_colors_easy already registered, overwriting... [2023-07-04 15:19:10,910][00468] Environment doom_two_colors_hard already registered, overwriting... [2023-07-04 15:19:10,917][00468] Environment doom_dm already registered, overwriting... [2023-07-04 15:19:10,920][00468] Environment doom_dwango5 already registered, overwriting... [2023-07-04 15:19:10,921][00468] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-07-04 15:19:10,924][00468] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-07-04 15:19:10,927][00468] Environment doom_my_way_home already registered, overwriting... [2023-07-04 15:19:10,929][00468] Environment doom_deadly_corridor already registered, overwriting... [2023-07-04 15:19:10,932][00468] Environment doom_defend_the_center already registered, overwriting... [2023-07-04 15:19:10,935][00468] Environment doom_defend_the_line already registered, overwriting... [2023-07-04 15:19:10,944][00468] Environment doom_health_gathering already registered, overwriting... [2023-07-04 15:19:10,945][00468] Environment doom_health_gathering_supreme already registered, overwriting... [2023-07-04 15:19:10,949][00468] Environment doom_battle already registered, overwriting... [2023-07-04 15:19:10,950][00468] Environment doom_battle2 already registered, overwriting... [2023-07-04 15:19:10,957][00468] Environment doom_duel_bots already registered, overwriting... [2023-07-04 15:19:10,958][00468] Environment doom_deathmatch_bots already registered, overwriting... [2023-07-04 15:19:10,959][00468] Environment doom_duel already registered, overwriting... [2023-07-04 15:19:10,965][00468] Environment doom_deathmatch_full already registered, overwriting... [2023-07-04 15:19:10,966][00468] Environment doom_benchmark already registered, overwriting... [2023-07-04 15:19:10,968][00468] register_encoder_factory: [2023-07-04 15:19:11,035][00468] Loading existing experiment configuration from /content/train_dir/default_experiment/cfg.json [2023-07-04 15:19:11,041][00468] Experiment dir /content/train_dir/default_experiment already exists! [2023-07-04 15:19:11,045][00468] Resuming existing experiment from /content/train_dir/default_experiment... 
[2023-07-04 15:19:11,050][00468] Weights and Biases integration disabled [2023-07-04 15:19:11,056][00468] Environment var CUDA_VISIBLE_DEVICES is 0 [2023-07-04 15:19:13,557][00468] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=8 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=4000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository train_script=.usr.local.lib.python3.10.dist-packages.ipykernel_launcher [2023-07-04 15:19:13,564][00468] Saving configuration to /content/train_dir/default_experiment/cfg.json... 
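Before the run finally gets past process startup, it is worth restating what the key sampling settings in the configuration above imply; the block below is plain arithmetic over the logged values, not Sample Factory code.

```python
# Arithmetic implied by the logged configuration values.
num_workers = 8            # rollout worker processes
num_envs_per_worker = 4    # Doom instances per rollout worker
rollout = 32               # env steps per trajectory sent to the learner
batch_size = 1024          # transitions per training batch
env_frameskip = 4          # game frames advanced per env step

total_envs = num_workers * num_envs_per_worker   # 32 parallel environments
trajectories_per_batch = batch_size // rollout   # 32 trajectories form one batch
frames_per_batch = batch_size * env_frameskip    # 4096 game frames of experience per batch
print(total_envs, trajectories_per_batch, frames_per_batch)  # 32 32 4096
```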
[2023-07-04 15:19:13,569][00468] Rollout worker 0 uses device cpu [2023-07-04 15:19:13,572][00468] Rollout worker 1 uses device cpu [2023-07-04 15:19:13,574][00468] Rollout worker 2 uses device cpu [2023-07-04 15:19:13,576][00468] Rollout worker 3 uses device cpu [2023-07-04 15:19:13,581][00468] Rollout worker 4 uses device cpu [2023-07-04 15:19:13,584][00468] Rollout worker 5 uses device cpu [2023-07-04 15:19:13,586][00468] Rollout worker 6 uses device cpu [2023-07-04 15:19:13,591][00468] Rollout worker 7 uses device cpu [2023-07-04 15:19:13,873][00468] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:19:13,879][00468] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:19:13,918][00468] Starting all processes... [2023-07-04 15:19:13,922][00468] Starting process learner_proc0 [2023-07-04 15:19:13,929][00468] EvtLoop [Runner_EvtLoop, process=main process 468] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=() Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:19:13,933][00468] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop [2023-07-04 15:19:13,937][00468] Uncaught exception in Runner evt loop Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run evt_loop_status = self.event_loop.exec() File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 403, in exec raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 399, in exec while self._loop_iteration(): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration self._process_signal(s) File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal raise exc File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start self._start_processes() File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes p.start() File 
"/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 515, in start self._process.start() File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen return Popen(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TLSBuffer' object [2023-07-04 15:19:13,942][00468] Runner profile tree view: main_loop: 0.0244 [2023-07-04 15:19:13,947][00468] Collected {}, FPS: 0.0 [2023-07-04 15:19:44,624][11762] Saving configuration to /content/train_dir/default_experiment/cfg.json... [2023-07-04 15:19:44,628][11762] Rollout worker 0 uses device cpu [2023-07-04 15:19:44,630][11762] Rollout worker 1 uses device cpu [2023-07-04 15:19:44,635][11762] Rollout worker 2 uses device cpu [2023-07-04 15:19:44,636][11762] Rollout worker 3 uses device cpu [2023-07-04 15:19:44,637][11762] Rollout worker 4 uses device cpu [2023-07-04 15:19:44,640][11762] Rollout worker 5 uses device cpu [2023-07-04 15:19:44,644][11762] Rollout worker 6 uses device cpu [2023-07-04 15:19:44,645][11762] Rollout worker 7 uses device cpu [2023-07-04 15:19:44,773][11762] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:19:44,774][11762] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:19:44,811][11762] Starting all processes... [2023-07-04 15:19:44,814][11762] Starting process learner_proc0 [2023-07-04 15:19:44,861][11762] Starting all processes... 
[2023-07-04 15:19:44,868][11762] Starting process inference_proc0-0 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc0 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc1 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc2 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc3 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc4 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc5 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc6 [2023-07-04 15:19:44,870][11762] Starting process rollout_proc7 [2023-07-04 15:19:55,959][11911] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:19:55,960][11911] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-07-04 15:19:56,006][11911] Num visible devices: 1 [2023-07-04 15:19:56,050][11911] Starting seed is not provided [2023-07-04 15:19:56,050][11911] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:19:56,050][11911] Initializing actor-critic model on device cuda:0 [2023-07-04 15:19:56,051][11911] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:19:56,052][11911] RunningMeanStd input shape: (1,) [2023-07-04 15:19:56,189][11911] ConvEncoder: input_channels=3 [2023-07-04 15:19:57,005][11930] Worker 5 uses CPU cores [1] [2023-07-04 15:19:57,040][11929] Worker 4 uses CPU cores [0] [2023-07-04 15:19:57,054][11931] Worker 6 uses CPU cores [0] [2023-07-04 15:19:57,159][11924] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:19:57,163][11924] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-07-04 15:19:57,180][11927] Worker 2 uses CPU cores [0] [2023-07-04 15:19:57,205][11928] Worker 3 uses CPU cores [1] [2023-07-04 15:19:57,232][11924] Num visible devices: 1 [2023-07-04 15:19:57,246][11932] Worker 7 uses CPU cores [1] [2023-07-04 15:19:57,272][11911] Conv encoder output size: 512 [2023-07-04 15:19:57,272][11911] Policy head output size: 512 [2023-07-04 15:19:57,290][11926] Worker 1 uses CPU cores [1] [2023-07-04 15:19:57,298][11911] Created Actor Critic model with architecture: [2023-07-04 15:19:57,298][11911] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) [2023-07-04 15:19:57,431][11925] Worker 0 uses CPU cores [0] [2023-07-04 15:20:02,307][11911] Using optimizer [2023-07-04 15:20:02,307][11911] No 
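For orientation, the actor-critic model printed above can be approximated in plain PyTorch as below. This is only a sketch: the conv filter counts, kernel sizes and strides are assumptions (the log states only encoder_conv_architecture=convnet_simple with 512-unit encoder and policy heads), and observation/return normalization, TorchScript wrapping and the action parameterization wrapper are omitted.

```python
# Rough PyTorch approximation of the ActorCriticSharedWeights module printed above.
# Conv layer shapes are assumptions; the 512-unit MLP, GRU(512, 512) core, 1-unit
# critic head and 5-logit action head follow the logged architecture and config.
import torch
from torch import nn


class DoomActorCriticSketch(nn.Module):
    def __init__(self, num_actions: int = 5):
        super().__init__()
        self.conv_head = nn.Sequential(  # input: (3, 72, 128) CHW observations
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer the flattened conv output size
            conv_out = self.conv_head(torch.zeros(1, 3, 72, 128)).flatten(1).shape[1]
        self.mlp = nn.Sequential(nn.Linear(conv_out, 512), nn.ELU())
        self.core = nn.GRU(512, 512)               # rnn_size=512, rnn_num_layers=1
        self.critic_linear = nn.Linear(512, 1)     # value head
        self.action_logits = nn.Linear(512, num_actions)

    def forward(self, obs, rnn_state=None):
        x = self.mlp(self.conv_head(obs / 255.0).flatten(1))  # obs_scale=255.0
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)   # single-step sequence
        x = x.squeeze(0)
        return self.action_logits(x), self.critic_linear(x), rnn_state


# Example: logits, values, new_state = DoomActorCriticSketch()(torch.zeros(2, 3, 72, 128))
```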
[2023-07-04 15:20:02,307][11911] No checkpoints found
[2023-07-04 15:20:02,308][11911] Did not load from checkpoint, starting from scratch!
[2023-07-04 15:20:02,308][11911] Initialized policy 0 weights for model version 0
[2023-07-04 15:20:02,312][11911] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-04 15:20:02,332][11911] LearnerWorker_p0 finished initialization!
[2023-07-04 15:20:02,522][11924] RunningMeanStd input shape: (3, 72, 128)
[2023-07-04 15:20:02,523][11924] RunningMeanStd input shape: (1,)
[2023-07-04 15:20:02,541][11924] ConvEncoder: input_channels=3
[2023-07-04 15:20:02,638][11924] Conv encoder output size: 512
[2023-07-04 15:20:02,638][11924] Policy head output size: 512
[2023-07-04 15:20:03,841][11762] Inference worker 0-0 is ready!
[2023-07-04 15:20:03,845][11762] All inference workers are ready! Signal rollout workers to start!
[2023-07-04 15:20:03,979][11926] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,008][11927] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,009][11930] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,011][11932] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,015][11928] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,024][11931] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,028][11929] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,057][11925] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 15:20:04,766][11762] Heartbeat connected on Batcher_0
[2023-07-04 15:20:04,772][11762] Heartbeat connected on LearnerWorker_p0
[2023-07-04 15:20:04,808][11762] Heartbeat connected on InferenceWorker_p0-w0
[2023-07-04 15:20:05,044][11930] Decorrelating experience for 0 frames...
[2023-07-04 15:20:05,041][11926] Decorrelating experience for 0 frames...
[2023-07-04 15:20:05,279][11925] Decorrelating experience for 0 frames...
[2023-07-04 15:20:05,281][11929] Decorrelating experience for 0 frames...
[2023-07-04 15:20:05,288][11927] Decorrelating experience for 0 frames...
[2023-07-04 15:20:05,602][11930] Decorrelating experience for 32 frames...
[2023-07-04 15:20:06,056][11929] Decorrelating experience for 32 frames...
[2023-07-04 15:20:06,063][11927] Decorrelating experience for 32 frames...
[2023-07-04 15:20:06,344][11932] Decorrelating experience for 0 frames...
[2023-07-04 15:20:06,782][11926] Decorrelating experience for 32 frames...
[2023-07-04 15:20:06,972][11930] Decorrelating experience for 64 frames...
[2023-07-04 15:20:07,067][11762] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-07-04 15:20:07,798][11930] Decorrelating experience for 96 frames...
[2023-07-04 15:20:08,081][11929] Decorrelating experience for 64 frames...
[2023-07-04 15:20:08,083][11927] Decorrelating experience for 64 frames...
[2023-07-04 15:20:08,246][11762] Heartbeat connected on RolloutWorker_w5
[2023-07-04 15:20:08,392][11931] Decorrelating experience for 0 frames...
[2023-07-04 15:20:08,958][11925] Decorrelating experience for 32 frames...
[2023-07-04 15:20:09,704][11926] Decorrelating experience for 64 frames...
[2023-07-04 15:20:10,057][11928] Decorrelating experience for 0 frames...
[2023-07-04 15:20:10,626][11929] Decorrelating experience for 96 frames...
[2023-07-04 15:20:10,631][11927] Decorrelating experience for 96 frames...
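The "Decorrelating experience for N frames" lines above show each rollout worker warming its environments up for a different number of steps before regular collection; the logged counts are consistent with each of the 4 envs on a worker being stepped for a different multiple of the rollout length so trajectories do not arrive in lockstep. A one-line restatement of that pattern (an interpretation of the log, not Sample Factory code):

```python
# Offsets matching the logged "Decorrelating experience for 0/32/64/96 frames" messages.
rollout, envs_per_worker = 32, 4
print([i * rollout for i in range(envs_per_worker)])  # [0, 32, 64, 96]
```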
[2023-07-04 15:20:10,807][11931] Decorrelating experience for 32 frames...
[2023-07-04 15:20:11,179][11762] Heartbeat connected on RolloutWorker_w4
[2023-07-04 15:20:11,188][11762] Heartbeat connected on RolloutWorker_w2
[2023-07-04 15:20:12,067][11762] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 3.2. Samples: 16. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-07-04 15:20:12,337][11926] Decorrelating experience for 96 frames...
[2023-07-04 15:20:12,665][11932] Decorrelating experience for 32 frames...
[2023-07-04 15:20:12,675][11928] Decorrelating experience for 32 frames...
[2023-07-04 15:20:12,751][11762] Heartbeat connected on RolloutWorker_w1
[2023-07-04 15:20:14,110][11925] Decorrelating experience for 64 frames...
[2023-07-04 15:20:17,070][11762] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 158.9. Samples: 1590. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-07-04 15:20:17,074][11762] Avg episode reward: [(0, '2.976')]
[2023-07-04 15:20:17,667][11932] Decorrelating experience for 64 frames...
[2023-07-04 15:20:17,670][11928] Decorrelating experience for 64 frames...
[2023-07-04 15:20:17,892][11911] Signal inference workers to stop experience collection...
[2023-07-04 15:20:17,915][11924] InferenceWorker_p0-w0: stopping experience collection
[2023-07-04 15:20:18,587][11931] Decorrelating experience for 64 frames...
[2023-07-04 15:20:18,666][11925] Decorrelating experience for 96 frames...
[2023-07-04 15:20:18,824][11762] Heartbeat connected on RolloutWorker_w0
[2023-07-04 15:20:19,243][11931] Decorrelating experience for 96 frames...
[2023-07-04 15:20:19,343][11762] Heartbeat connected on RolloutWorker_w6
[2023-07-04 15:20:19,357][11928] Decorrelating experience for 96 frames...
[2023-07-04 15:20:19,371][11932] Decorrelating experience for 96 frames...
[2023-07-04 15:20:19,485][11762] Heartbeat connected on RolloutWorker_w3
[2023-07-04 15:20:19,503][11762] Heartbeat connected on RolloutWorker_w7
[2023-07-04 15:20:19,609][11911] Signal inference workers to resume experience collection...
[2023-07-04 15:20:19,610][11924] InferenceWorker_p0-w0: resuming experience collection
[2023-07-04 15:20:22,067][11762] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 12288. Throughput: 0: 256.1. Samples: 3842. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2023-07-04 15:20:22,074][11762] Avg episode reward: [(0, '3.005')]
[2023-07-04 15:20:27,067][11762] Fps is (10 sec: 3278.0, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 32768. Throughput: 0: 357.4. Samples: 7148. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-07-04 15:20:27,069][11762] Avg episode reward: [(0, '3.857')]
[2023-07-04 15:20:28,255][11924] Updated weights for policy 0, policy_version 10 (0.0024)
[2023-07-04 15:20:32,067][11762] Fps is (10 sec: 3686.4, 60 sec: 1966.1, 300 sec: 1966.1). Total num frames: 49152. Throughput: 0: 520.9. Samples: 13022. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 15:20:32,069][11762] Avg episode reward: [(0, '4.392')]
[2023-07-04 15:20:37,067][11762] Fps is (10 sec: 3276.8, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 65536. Throughput: 0: 578.5. Samples: 17354. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 15:20:37,073][11762] Avg episode reward: [(0, '4.397')]
[2023-07-04 15:20:41,647][11924] Updated weights for policy 0, policy_version 20 (0.0035)
[2023-07-04 15:20:42,069][11762] Fps is (10 sec: 3275.9, 60 sec: 2340.4, 300 sec: 2340.4). Total num frames: 81920. Throughput: 0: 556.9. Samples: 19492. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 15:20:42,075][11762] Avg episode reward: [(0, '4.379')]
[2023-07-04 15:20:47,067][11762] Fps is (10 sec: 3686.4, 60 sec: 2560.0, 300 sec: 2560.0). Total num frames: 102400. Throughput: 0: 639.8. Samples: 25592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 15:20:47,073][11762] Avg episode reward: [(0, '4.485')]
[2023-07-04 15:20:47,078][11911] Saving new best policy, reward=4.485!
[2023-07-04 15:20:50,974][11924] Updated weights for policy 0, policy_version 30 (0.0015)
[2023-07-04 15:20:52,067][11762] Fps is (10 sec: 4097.1, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 122880. Throughput: 0: 707.1. Samples: 31818. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 15:20:52,069][11762] Avg episode reward: [(0, '4.684')]
[2023-07-04 15:20:52,081][11911] Saving new best policy, reward=4.684!
[2023-07-04 15:20:57,067][11762] Fps is (10 sec: 3276.8, 60 sec: 2703.4, 300 sec: 2703.4). Total num frames: 135168. Throughput: 0: 743.4. Samples: 33470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 15:20:57,072][11762] Avg episode reward: [(0, '4.698')]
[2023-07-04 15:20:57,082][11911] Saving new best policy, reward=4.698!
[2023-07-04 15:21:02,067][11762] Fps is (10 sec: 2867.2, 60 sec: 2755.5, 300 sec: 2755.5). Total num frames: 151552. Throughput: 0: 802.8. Samples: 37712. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 15:21:02,072][11762] Avg episode reward: [(0, '4.795')]
[2023-07-04 15:21:02,085][11911] Saving new best policy, reward=4.795!
[2023-07-04 15:21:05,105][11924] Updated weights for policy 0, policy_version 40 (0.0018)
[2023-07-04 15:21:07,067][11762] Fps is (10 sec: 3686.4, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 172032. Throughput: 0: 880.7. Samples: 43474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 15:21:07,069][11762] Avg episode reward: [(0, '4.614')]
[2023-07-04 15:21:12,067][11762] Fps is (10 sec: 4096.0, 60 sec: 3208.5, 300 sec: 2961.7). Total num frames: 192512. Throughput: 0: 881.9. Samples: 46832. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 15:21:12,068][11762] Avg episode reward: [(0, '4.623')]
[2023-07-04 15:21:14,527][11924] Updated weights for policy 0, policy_version 50 (0.0017)
[2023-07-04 15:21:17,068][11762] Fps is (10 sec: 3685.8, 60 sec: 3481.7, 300 sec: 2984.2). Total num frames: 208896. Throughput: 0: 882.9. Samples: 52752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 15:21:17,071][11762] Avg episode reward: [(0, '4.688')]
[2023-07-04 15:21:22,067][11762] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3003.7). Total num frames: 225280. Throughput: 0: 879.9. Samples: 56952. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-07-04 15:21:22,071][11762] Avg episode reward: [(0, '4.777')]
[2023-07-04 15:21:27,071][11762] Fps is (10 sec: 3275.9, 60 sec: 3481.3, 300 sec: 3020.6). Total num frames: 241664. Throughput: 0: 878.5. Samples: 59024. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 15:21:27,079][11762] Avg episode reward: [(0, '4.760')]
[2023-07-04 15:21:27,866][11924] Updated weights for policy 0, policy_version 60 (0.0018)
[2023-07-04 15:21:32,067][11762] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3084.0). Total num frames: 262144. Throughput: 0: 886.1. Samples: 65466. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 15:21:32,073][11762] Avg episode reward: [(0, '5.215')]
[2023-07-04 15:21:32,083][11911] Saving new best policy, reward=5.215!
[2023-07-04 15:21:37,067][11762] Fps is (10 sec: 4097.8, 60 sec: 3618.1, 300 sec: 3140.3). Total num frames: 282624. Throughput: 0: 885.1. Samples: 71648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 15:21:37,075][11762] Avg episode reward: [(0, '5.326')]
[2023-07-04 15:21:37,079][11911] Saving new best policy, reward=5.326!
[2023-07-04 15:21:37,783][11924] Updated weights for policy 0, policy_version 70 (0.0016)
[2023-07-04 15:21:42,070][11762] Fps is (10 sec: 3275.8, 60 sec: 3549.8, 300 sec: 3104.2). Total num frames: 294912. Throughput: 0: 892.0. Samples: 73612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 15:21:42,072][11762] Avg episode reward: [(0, '4.979')]
[2023-07-04 15:21:42,081][11911] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth...
[2023-07-04 15:21:47,067][11762] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3113.0). Total num frames: 311296. Throughput: 0: 895.1. Samples: 77992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 15:21:47,070][11762] Avg episode reward: [(0, '4.955')]
[2023-07-04 15:21:50,537][11924] Updated weights for policy 0, policy_version 80 (0.0020)
[2023-07-04 15:21:52,067][11762] Fps is (10 sec: 3687.5, 60 sec: 3481.6, 300 sec: 3159.8). Total num frames: 331776. Throughput: 0: 900.9. Samples: 84016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 15:21:52,075][11762] Avg episode reward: [(0, '4.940')]
[2023-07-04 15:21:57,067][11762] Fps is (10 sec: 4505.7, 60 sec: 3686.4, 300 sec: 3239.6). Total num frames: 356352. Throughput: 0: 902.0. Samples: 87422. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 15:21:57,074][11762] Avg episode reward: [(0, '4.783')]
[2023-07-04 15:22:00,514][11924] Updated weights for policy 0, policy_version 90 (0.0013)
[2023-07-04 15:22:02,067][11762] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3205.6). Total num frames: 368640. Throughput: 0: 893.8. Samples: 92970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 15:22:02,072][11762] Avg episode reward: [(0, '4.892')]
[2023-07-04 15:22:07,067][11762] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3208.5). Total num frames: 385024. Throughput: 0: 896.2. Samples: 97280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 15:22:07,072][11762] Avg episode reward: [(0, '5.158')]
[2023-07-04 15:22:12,067][11762] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3211.3). Total num frames: 401408. Throughput: 0: 899.7. Samples: 99506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 15:22:12,069][11762] Avg episode reward: [(0, '5.560')]
[2023-07-04 15:22:12,152][11911] Saving new best policy, reward=5.560!
[2023-07-04 15:22:13,078][11924] Updated weights for policy 0, policy_version 100 (0.0018)
[2023-07-04 15:22:17,067][11762] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3276.8). Total num frames: 425984. Throughput: 0: 898.8. Samples: 105912. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 15:22:17,076][11762] Avg episode reward: [(0, '5.886')]
[2023-07-04 15:22:17,081][11911] Saving new best policy, reward=5.886!
[2023-07-04 15:22:22,067][11762] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3307.1). Total num frames: 446464. Throughput: 0: 898.5. Samples: 112082.
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:22:22,073][11762] Avg episode reward: [(0, '5.793')] [2023-07-04 15:22:23,609][11924] Updated weights for policy 0, policy_version 110 (0.0014) [2023-07-04 15:22:27,070][11762] Fps is (10 sec: 3275.7, 60 sec: 3618.2, 300 sec: 3276.7). Total num frames: 458752. Throughput: 0: 901.9. Samples: 114198. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:22:27,079][11762] Avg episode reward: [(0, '5.643')] [2023-07-04 15:22:32,067][11762] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 475136. Throughput: 0: 900.9. Samples: 118532. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:22:32,074][11762] Avg episode reward: [(0, '5.685')] [2023-07-04 15:22:35,764][11924] Updated weights for policy 0, policy_version 120 (0.0017) [2023-07-04 15:22:37,067][11762] Fps is (10 sec: 3687.7, 60 sec: 3549.9, 300 sec: 3304.1). Total num frames: 495616. Throughput: 0: 903.1. Samples: 124656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:22:37,068][11762] Avg episode reward: [(0, '5.803')] [2023-07-04 15:22:42,067][11762] Fps is (10 sec: 4096.0, 60 sec: 3686.6, 300 sec: 3329.7). Total num frames: 516096. Throughput: 0: 903.3. Samples: 128070. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:22:42,069][11762] Avg episode reward: [(0, '6.133')] [2023-07-04 15:22:42,127][11911] Saving new best policy, reward=6.133! [2023-07-04 15:22:46,005][11924] Updated weights for policy 0, policy_version 130 (0.0013) [2023-07-04 15:22:47,068][11762] Fps is (10 sec: 3685.8, 60 sec: 3686.3, 300 sec: 3328.0). Total num frames: 532480. Throughput: 0: 903.2. Samples: 133614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:22:47,070][11762] Avg episode reward: [(0, '6.713')] [2023-07-04 15:22:47,080][11911] Saving new best policy, reward=6.713! [2023-07-04 15:22:52,067][11762] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3326.4). Total num frames: 548864. Throughput: 0: 901.5. Samples: 137848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:22:52,077][11762] Avg episode reward: [(0, '7.141')] [2023-07-04 15:22:52,089][11911] Saving new best policy, reward=7.141! [2023-07-04 15:22:57,067][11762] Fps is (10 sec: 3277.3, 60 sec: 3481.6, 300 sec: 3325.0). Total num frames: 565248. Throughput: 0: 901.6. Samples: 140076. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:22:57,074][11762] Avg episode reward: [(0, '7.291')] [2023-07-04 15:22:57,079][11911] Saving new best policy, reward=7.291! [2023-07-04 15:22:58,304][11924] Updated weights for policy 0, policy_version 140 (0.0018) [2023-07-04 15:23:02,067][11762] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3370.4). Total num frames: 589824. Throughput: 0: 910.4. Samples: 146882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:23:02,073][11762] Avg episode reward: [(0, '7.595')] [2023-07-04 15:23:02,083][11911] Saving new best policy, reward=7.595! [2023-07-04 15:23:07,067][11762] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3390.6). Total num frames: 610304. Throughput: 0: 910.0. Samples: 153030. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:23:07,073][11762] Avg episode reward: [(0, '7.373')] [2023-07-04 15:23:08,406][11924] Updated weights for policy 0, policy_version 150 (0.0019) [2023-07-04 15:23:12,068][11762] Fps is (10 sec: 3276.3, 60 sec: 3686.3, 300 sec: 3365.3). Total num frames: 622592. Throughput: 0: 909.6. Samples: 155130. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:23:12,078][11762] Avg episode reward: [(0, '7.173')] [2023-07-04 15:23:17,067][11762] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3363.0). Total num frames: 638976. Throughput: 0: 911.6. Samples: 159552. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:23:17,075][11762] Avg episode reward: [(0, '6.405')] [2023-07-04 15:23:20,851][11924] Updated weights for policy 0, policy_version 160 (0.0015) [2023-07-04 15:23:22,067][11762] Fps is (10 sec: 3687.0, 60 sec: 3549.9, 300 sec: 3381.8). Total num frames: 659456. Throughput: 0: 909.0. Samples: 165562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:23:22,072][11762] Avg episode reward: [(0, '6.512')] [2023-07-04 15:23:27,067][11762] Fps is (10 sec: 4096.0, 60 sec: 3686.6, 300 sec: 3399.7). Total num frames: 679936. Throughput: 0: 907.8. Samples: 168920. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:23:27,072][11762] Avg episode reward: [(0, '7.027')] [2023-07-04 15:23:31,154][11924] Updated weights for policy 0, policy_version 170 (0.0022) [2023-07-04 15:23:32,072][11762] Fps is (10 sec: 3684.6, 60 sec: 3686.1, 300 sec: 3396.6). Total num frames: 696320. Throughput: 0: 906.5. Samples: 174410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:23:32,089][11762] Avg episode reward: [(0, '7.720')] [2023-07-04 15:23:32,100][11911] Saving new best policy, reward=7.720! [2023-07-04 15:23:37,067][11762] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3393.8). Total num frames: 712704. Throughput: 0: 907.9. Samples: 178704. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:23:37,074][11762] Avg episode reward: [(0, '7.966')] [2023-07-04 15:23:37,076][11911] Saving new best policy, reward=7.966! [2023-07-04 15:23:42,067][11762] Fps is (10 sec: 3278.4, 60 sec: 3549.9, 300 sec: 3391.1). Total num frames: 729088. Throughput: 0: 905.2. Samples: 180810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:23:42,071][11762] Avg episode reward: [(0, '8.377')] [2023-07-04 15:23:42,083][11911] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000178_729088.pth... [2023-07-04 15:23:42,197][11911] Saving new best policy, reward=8.377! [2023-07-04 15:23:43,776][11924] Updated weights for policy 0, policy_version 180 (0.0014) [2023-07-04 15:23:47,067][11762] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3407.1). Total num frames: 749568. Throughput: 0: 897.4. Samples: 187266. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:23:47,069][11762] Avg episode reward: [(0, '8.528')] [2023-07-04 15:23:47,074][11911] Saving new best policy, reward=8.528! [2023-07-04 15:23:52,067][11762] Fps is (10 sec: 4095.7, 60 sec: 3686.4, 300 sec: 3422.4). Total num frames: 770048. Throughput: 0: 898.7. Samples: 193470. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:23:52,070][11762] Avg episode reward: [(0, '8.678')] [2023-07-04 15:23:52,076][11911] Saving new best policy, reward=8.678! [2023-07-04 15:23:54,221][11924] Updated weights for policy 0, policy_version 190 (0.0012) [2023-07-04 15:23:57,067][11762] Fps is (10 sec: 3686.2, 60 sec: 3686.4, 300 sec: 3419.3). Total num frames: 786432. Throughput: 0: 900.2. Samples: 195636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:23:57,070][11762] Avg episode reward: [(0, '8.520')] [2023-07-04 15:24:02,067][11762] Fps is (10 sec: 2867.4, 60 sec: 3481.6, 300 sec: 3398.8). Total num frames: 798720. Throughput: 0: 901.2. Samples: 200106. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:24:02,069][11762] Avg episode reward: [(0, '8.505')] [2023-07-04 15:24:06,175][11924] Updated weights for policy 0, policy_version 200 (0.0027) [2023-07-04 15:24:07,067][11762] Fps is (10 sec: 3277.0, 60 sec: 3481.6, 300 sec: 3413.3). Total num frames: 819200. Throughput: 0: 900.4. Samples: 206082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:24:07,072][11762] Avg episode reward: [(0, '8.831')] [2023-07-04 15:24:07,110][11911] Saving new best policy, reward=8.831! [2023-07-04 15:24:12,067][11762] Fps is (10 sec: 4505.6, 60 sec: 3686.5, 300 sec: 3444.0). Total num frames: 843776. Throughput: 0: 899.1. Samples: 209378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:24:12,069][11762] Avg episode reward: [(0, '9.321')] [2023-07-04 15:24:12,079][11911] Saving new best policy, reward=9.321! [2023-07-04 15:24:16,563][11924] Updated weights for policy 0, policy_version 210 (0.0018) [2023-07-04 15:24:17,067][11762] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3440.6). Total num frames: 860160. Throughput: 0: 901.1. Samples: 214954. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:24:17,072][11762] Avg episode reward: [(0, '9.657')] [2023-07-04 15:24:17,076][11911] Saving new best policy, reward=9.657! [2023-07-04 15:24:22,067][11762] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3421.4). Total num frames: 872448. Throughput: 0: 900.2. Samples: 219214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:24:22,073][11762] Avg episode reward: [(0, '9.752')] [2023-07-04 15:24:22,089][11911] Saving new best policy, reward=9.752! [2023-07-04 15:24:27,067][11762] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3418.6). Total num frames: 888832. Throughput: 0: 893.6. Samples: 221022. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:24:27,072][11762] Avg episode reward: [(0, '10.147')] [2023-07-04 15:24:27,075][11911] Saving new best policy, reward=10.147! [2023-07-04 15:24:30,142][11924] Updated weights for policy 0, policy_version 220 (0.0024) [2023-07-04 15:24:32,067][11762] Fps is (10 sec: 3686.4, 60 sec: 3550.2, 300 sec: 3431.4). Total num frames: 909312. Throughput: 0: 876.0. Samples: 226684. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:24:32,068][11762] Avg episode reward: [(0, '9.975')] [2023-07-04 15:24:37,067][11762] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3428.5). Total num frames: 925696. Throughput: 0: 875.3. Samples: 232860. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:24:37,069][11762] Avg episode reward: [(0, '11.119')] [2023-07-04 15:24:37,161][11911] Saving new best policy, reward=11.119! [2023-07-04 15:24:41,643][11924] Updated weights for policy 0, policy_version 230 (0.0024) [2023-07-04 15:24:42,067][11762] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3425.7). Total num frames: 942080. Throughput: 0: 871.5. Samples: 234854. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:24:42,073][11762] Avg episode reward: [(0, '11.602')] [2023-07-04 15:24:42,081][11911] Saving new best policy, reward=11.602! [2023-07-04 15:24:42,461][11762] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 11762], exiting... [2023-07-04 15:24:42,468][11911] Stopping Batcher_0... [2023-07-04 15:24:42,470][11911] Loop batcher_evt_loop terminating... 
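The recurring "Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)" entries above report throughput over sliding windows of the global frame counter. A small sketch of how such windowed FPS figures can be derived from timestamped frame totals follows; this is a generic illustration, not the library's own stats code, and the names are made up.

import time
from collections import deque

class FpsTracker:
    """Computes frames-per-second over several sliding windows (illustrative sketch)."""

    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque()  # (timestamp, total_frames) pairs

    def record(self, total_frames):
        now = time.time()
        self.samples.append((now, total_frames))
        # Keep only the history needed for the largest window.
        while self.samples and now - self.samples[0][0] > max(self.windows):
            self.samples.popleft()

    def fps(self):
        now, frames_now = self.samples[-1]
        result = {}
        for w in self.windows:
            in_window = [(t, f) for t, f in self.samples if now - t <= w]
            t0, f0 = in_window[0]
            # With a single sample the rate is undefined, which matches the "nan" at startup.
            result[w] = (frames_now - f0) / (now - t0) if now > t0 else float("nan")
        return result

tracker = FpsTracker()
tracker.record(0)
time.sleep(0.1)
tracker.record(400)   # pretend 400 frames were collected in ~0.1 s
print(tracker.fps())  # roughly 4000 FPS reported for every window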
[2023-07-04 15:24:42,467][11762] Runner profile tree view: main_loop: 297.6563 [2023-07-04 15:24:42,471][11911] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000230_942080.pth... [2023-07-04 15:24:42,470][11762] Collected {0: 942080}, FPS: 3165.0 [2023-07-04 15:24:42,542][11931] EvtLoop [rollout_proc6_evt_loop, process=rollout_proc6] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance6'), args=(0, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:24:42,562][11931] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc6_evt_loop [2023-07-04 15:24:42,538][11932] EvtLoop [rollout_proc7_evt_loop, process=rollout_proc7] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance7'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. 
[2023-07-04 15:24:42,499][11930] EvtLoop [rollout_proc5_evt_loop, process=rollout_proc5] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance5'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. 
[2023-07-04 15:24:42,610][11924] Weights refcount: 2 0 [2023-07-04 15:24:42,504][11926] EvtLoop [rollout_proc1_evt_loop, process=rollout_proc1] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance1'), args=(0, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:24:42,619][11926] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc1_evt_loop [2023-07-04 15:24:42,565][11932] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc7_evt_loop [2023-07-04 15:24:42,645][11924] Stopping InferenceWorker_p0-w0... [2023-07-04 15:24:42,645][11924] Loop inference_proc0-0_evt_loop terminating... [2023-07-04 15:24:42,591][11930] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc5_evt_loop [2023-07-04 15:24:42,557][11928] EvtLoop [rollout_proc3_evt_loop, process=rollout_proc3] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance3'), args=(0, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:24:42,653][11928] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc3_evt_loop [2023-07-04 15:24:42,603][11927] EvtLoop [rollout_proc2_evt_loop, process=rollout_proc2] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance2'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:24:42,659][11927] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc2_evt_loop [2023-07-04 15:24:42,650][11929] EvtLoop [rollout_proc4_evt_loop, process=rollout_proc4] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance4'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. 
[2023-07-04 15:24:42,684][11925] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance0'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 632, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 384, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gym/core.py", line 319, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:24:42,796][11925] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc0_evt_loop [2023-07-04 15:24:42,748][11929] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc4_evt_loop [2023-07-04 15:24:42,992][11911] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth [2023-07-04 15:24:43,027][11911] Stopping LearnerWorker_p0... [2023-07-04 15:24:43,032][11911] Loop learner_proc0_evt_loop terminating... [2023-07-04 15:25:30,346][17091] Saving configuration to /content/train_dir/default_experiment/config.json... 
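Just before shutting down, the learner saves checkpoint_000000230_942080.pth and removes the older checkpoint_000000072_294912.pth, consistent with keeping only the newest few checkpoints (the configuration dump further down shows keep_checkpoints=2). A rough sketch of that rotation logic, with illustrative names and assuming keep >= 1; it is not the library's own checkpoint code.

import os
import re

def rotate_checkpoints(checkpoint_dir, keep=2):
    """Delete all but the `keep` newest checkpoints, ordered by policy version (illustrative sketch)."""
    pattern = re.compile(r"checkpoint_(\d+)_(\d+)\.pth")
    found = []
    for name in os.listdir(checkpoint_dir):
        m = pattern.fullmatch(name)
        if m:
            found.append((int(m.group(1)), name))  # sort key: policy version in the file name
    found.sort()
    for _, name in found[:-keep]:
        os.remove(os.path.join(checkpoint_dir, name))

# Example: with keep=2, once checkpoint_000000230_942080.pth exists alongside two older
# files, the oldest one (here checkpoint_000000072_294912.pth) would be removed.
# rotate_checkpoints("/content/train_dir/default_experiment/checkpoint_p0", keep=2)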
[2023-07-04 15:25:30,349][17091] Rollout worker 0 uses device cpu [2023-07-04 15:25:30,350][17091] Rollout worker 1 uses device cpu [2023-07-04 15:25:30,351][17091] Rollout worker 2 uses device cpu [2023-07-04 15:25:30,352][17091] Rollout worker 3 uses device cpu [2023-07-04 15:25:30,355][17091] Rollout worker 4 uses device cpu [2023-07-04 15:25:30,356][17091] Rollout worker 5 uses device cpu [2023-07-04 15:25:30,357][17091] Rollout worker 6 uses device cpu [2023-07-04 15:25:30,358][17091] Rollout worker 7 uses device cpu [2023-07-04 15:25:30,541][17091] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:25:30,546][17091] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:25:30,587][17091] Starting all processes... [2023-07-04 15:25:30,591][17091] Starting process learner_proc0 [2023-07-04 15:25:30,659][17091] Starting all processes... [2023-07-04 15:25:30,675][17091] Starting process inference_proc0-0 [2023-07-04 15:25:30,676][17091] Starting process rollout_proc0 [2023-07-04 15:25:30,680][17091] Starting process rollout_proc1 [2023-07-04 15:25:30,680][17091] Starting process rollout_proc2 [2023-07-04 15:25:30,689][17091] Starting process rollout_proc3 [2023-07-04 15:25:30,689][17091] Starting process rollout_proc4 [2023-07-04 15:25:30,689][17091] Starting process rollout_proc5 [2023-07-04 15:25:30,689][17091] Starting process rollout_proc6 [2023-07-04 15:25:30,689][17091] Starting process rollout_proc7 [2023-07-04 15:25:41,907][17310] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:25:41,908][17310] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-07-04 15:25:41,966][17310] Num visible devices: 1 [2023-07-04 15:25:41,985][17326] Worker 2 uses CPU cores [0] [2023-07-04 15:25:42,001][17310] Starting seed is not provided [2023-07-04 15:25:42,002][17310] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:25:42,003][17310] Initializing actor-critic model on device cuda:0 [2023-07-04 15:25:42,004][17310] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:25:42,005][17310] RunningMeanStd input shape: (1,) [2023-07-04 15:25:42,038][17331] Worker 7 uses CPU cores [1] [2023-07-04 15:25:42,094][17310] ConvEncoder: input_channels=3 [2023-07-04 15:25:42,254][17324] Worker 0 uses CPU cores [0] [2023-07-04 15:25:42,344][17327] Worker 3 uses CPU cores [1] [2023-07-04 15:25:42,373][17325] Worker 1 uses CPU cores [1] [2023-07-04 15:25:42,379][17330] Worker 6 uses CPU cores [0] [2023-07-04 15:25:42,400][17323] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:25:42,401][17323] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-07-04 15:25:42,411][17329] Worker 5 uses CPU cores [1] [2023-07-04 15:25:42,431][17323] Num visible devices: 1 [2023-07-04 15:25:42,448][17328] Worker 4 uses CPU cores [0] [2023-07-04 15:25:42,499][17310] Conv encoder output size: 512 [2023-07-04 15:25:42,499][17310] Policy head output size: 512 [2023-07-04 15:25:42,513][17310] Created Actor Critic model with architecture: [2023-07-04 15:25:42,513][17310] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): 
RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) [2023-07-04 15:25:44,870][17310] Using optimizer [2023-07-04 15:25:44,871][17310] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000230_942080.pth... [2023-07-04 15:25:44,903][17310] Loading model from checkpoint [2023-07-04 15:25:44,907][17310] Loaded experiment state at self.train_step=230, self.env_steps=942080 [2023-07-04 15:25:44,908][17310] Initialized policy 0 weights for model version 230 [2023-07-04 15:25:44,912][17310] LearnerWorker_p0 finished initialization! [2023-07-04 15:25:44,912][17310] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:25:45,101][17323] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:25:45,102][17323] RunningMeanStd input shape: (1,) [2023-07-04 15:25:45,114][17323] ConvEncoder: input_channels=3 [2023-07-04 15:25:45,215][17323] Conv encoder output size: 512 [2023-07-04 15:25:45,215][17323] Policy head output size: 512 [2023-07-04 15:25:46,687][17091] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 17091], exiting... [2023-07-04 15:25:46,690][17328] Stopping RolloutWorker_w4... [2023-07-04 15:25:46,690][17328] Loop rollout_proc4_evt_loop terminating... [2023-07-04 15:25:46,690][17310] Stopping Batcher_0... [2023-07-04 15:25:46,691][17310] Loop batcher_evt_loop terminating... [2023-07-04 15:25:46,692][17310] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000230_942080.pth... [2023-07-04 15:25:46,692][17324] Stopping RolloutWorker_w0... [2023-07-04 15:25:46,692][17324] Loop rollout_proc0_evt_loop terminating... [2023-07-04 15:25:46,693][17330] Stopping RolloutWorker_w6... [2023-07-04 15:25:46,694][17330] Loop rollout_proc6_evt_loop terminating... [2023-07-04 15:25:46,695][17326] Stopping RolloutWorker_w2... [2023-07-04 15:25:46,696][17326] Loop rollout_proc2_evt_loop terminating... [2023-07-04 15:25:46,690][17091] Runner profile tree view: main_loop: 16.1026 [2023-07-04 15:25:46,704][17091] Collected {0: 942080}, FPS: 0.0 [2023-07-04 15:25:46,712][17329] Stopping RolloutWorker_w5... [2023-07-04 15:25:46,704][17325] Stopping RolloutWorker_w1... [2023-07-04 15:25:46,706][17327] Stopping RolloutWorker_w3... [2023-07-04 15:25:46,718][17331] Stopping RolloutWorker_w7... [2023-07-04 15:25:46,720][17325] Loop rollout_proc1_evt_loop terminating... [2023-07-04 15:25:46,736][17329] Loop rollout_proc5_evt_loop terminating... [2023-07-04 15:25:46,734][17327] Loop rollout_proc3_evt_loop terminating... [2023-07-04 15:25:46,736][17331] Loop rollout_proc7_evt_loop terminating... 
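The architecture dump above (ActorCriticSharedWeights with a three-layer conv head interleaved with ELU, a 512-unit MLP, a GRU(512, 512) core, a 1-unit critic head and a 5-unit action head) can be re-created in plain PyTorch as below. The kernel sizes, strides and channel counts are not printed in the log, so the values used here (32/64/128 channels with 8/4/3 kernels and 4/2/2 strides, a common "convnet_simple" layout) are assumptions, and the class is a simplified stand-in rather than Sample Factory's own implementation.

import torch
from torch import nn

class SharedWeightsActorCritic(nn.Module):
    """Simplified re-creation of the architecture printed in the log (assumed conv parameters)."""

    def __init__(self, num_actions=5, rnn_size=512):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # For a (3, 72, 128) input these assumed conv layers yield a 128 x 3 x 6 feature map.
        self.mlp = nn.Sequential(nn.Flatten(), nn.Linear(128 * 3 * 6, 512), nn.ELU())
        self.core = nn.GRU(512, rnn_size)
        self.critic_linear = nn.Linear(rnn_size, 1)
        self.action_logits = nn.Linear(rnn_size, num_actions)

    def forward(self, obs, rnn_state):
        x = self.mlp(self.conv_head(obs))              # (B, 512), matching "Conv encoder output size: 512"
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.action_logits(x), self.critic_linear(x), rnn_state

model = SharedWeightsActorCritic()
obs = torch.rand(4, 3, 72, 128)
h0 = torch.zeros(1, 4, 512)
logits, value, h1 = model(obs, h0)
print(logits.shape, value.shape)  # torch.Size([4, 5]) torch.Size([4, 1])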
[2023-07-04 15:25:46,778][17323] Weights refcount: 2 0 [2023-07-04 15:25:46,775][17091] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-07-04 15:25:46,781][17091] Overriding arg 'num_workers' with value 1 passed from command line [2023-07-04 15:25:46,784][17091] Adding new argument 'no_render'=True that is not in the saved config file! [2023-07-04 15:25:46,786][17091] Adding new argument 'save_video'=True that is not in the saved config file! [2023-07-04 15:25:46,788][17091] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-07-04 15:25:46,789][17091] Adding new argument 'video_name'=None that is not in the saved config file! [2023-07-04 15:25:46,790][17323] Stopping InferenceWorker_p0-w0... [2023-07-04 15:25:46,793][17323] Loop inference_proc0-0_evt_loop terminating... [2023-07-04 15:25:46,791][17091] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! [2023-07-04 15:25:46,795][17091] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-07-04 15:25:46,797][17091] Adding new argument 'push_to_hub'=False that is not in the saved config file! [2023-07-04 15:25:46,799][17091] Adding new argument 'hf_repository'=None that is not in the saved config file! [2023-07-04 15:25:46,804][17091] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-07-04 15:25:46,805][17091] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-07-04 15:25:46,810][17091] Adding new argument 'train_script'=None that is not in the saved config file! [2023-07-04 15:25:46,812][17091] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-07-04 15:25:46,813][17091] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-07-04 15:25:46,875][17091] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:25:46,883][17091] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:25:46,892][17091] RunningMeanStd input shape: (1,) [2023-07-04 15:25:46,973][17091] ConvEncoder: input_channels=3 [2023-07-04 15:25:46,991][17310] Stopping LearnerWorker_p0... [2023-07-04 15:25:46,999][17310] Loop learner_proc0_evt_loop terminating... [2023-07-04 15:25:47,450][17091] Conv encoder output size: 512 [2023-07-04 15:25:47,458][17091] Policy head output size: 512 [2023-07-04 15:25:53,319][17091] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000230_942080.pth... [2023-07-04 15:25:54,502][17091] Num frames 100... [2023-07-04 15:25:54,622][17091] Num frames 200... [2023-07-04 15:25:54,747][17091] Num frames 300... [2023-07-04 15:25:54,891][17091] Num frames 400... [2023-07-04 15:25:55,016][17091] Num frames 500... [2023-07-04 15:25:55,138][17091] Num frames 600... [2023-07-04 15:25:55,205][17091] Avg episode rewards: #0: 9.080, true rewards: #0: 6.080 [2023-07-04 15:25:55,208][17091] Avg episode reward: 9.080, avg true_objective: 6.080 [2023-07-04 15:25:55,327][17091] Num frames 700... [2023-07-04 15:25:55,456][17091] Num frames 800... [2023-07-04 15:25:55,578][17091] Num frames 900... [2023-07-04 15:25:55,712][17091] Num frames 1000... [2023-07-04 15:25:55,837][17091] Num frames 1100... [2023-07-04 15:25:55,972][17091] Num frames 1200... [2023-07-04 15:25:56,106][17091] Num frames 1300... [2023-07-04 15:25:56,242][17091] Num frames 1400... 
[2023-07-04 15:25:56,357][17091] Avg episode rewards: #0: 12.710, true rewards: #0: 7.210 [2023-07-04 15:25:56,359][17091] Avg episode reward: 12.710, avg true_objective: 7.210 [2023-07-04 15:25:56,447][17091] Num frames 1500... [2023-07-04 15:25:56,584][17091] Num frames 1600... [2023-07-04 15:25:56,708][17091] Num frames 1700... [2023-07-04 15:25:56,839][17091] Num frames 1800... [2023-07-04 15:25:56,970][17091] Num frames 1900... [2023-07-04 15:25:57,103][17091] Num frames 2000... [2023-07-04 15:25:57,236][17091] Num frames 2100... [2023-07-04 15:25:57,357][17091] Num frames 2200... [2023-07-04 15:25:57,483][17091] Num frames 2300... [2023-07-04 15:25:57,610][17091] Num frames 2400... [2023-07-04 15:25:57,730][17091] Num frames 2500... [2023-07-04 15:25:57,902][17091] Avg episode rewards: #0: 16.647, true rewards: #0: 8.647 [2023-07-04 15:25:57,905][17091] Avg episode reward: 16.647, avg true_objective: 8.647 [2023-07-04 15:25:57,917][17091] Num frames 2600... [2023-07-04 15:25:58,039][17091] Num frames 2700... [2023-07-04 15:25:58,162][17091] Num frames 2800... [2023-07-04 15:25:58,300][17091] Num frames 2900... [2023-07-04 15:25:58,423][17091] Num frames 3000... [2023-07-04 15:25:58,550][17091] Num frames 3100... [2023-07-04 15:25:58,670][17091] Num frames 3200... [2023-07-04 15:25:58,798][17091] Num frames 3300... [2023-07-04 15:25:58,930][17091] Num frames 3400... [2023-07-04 15:25:59,048][17091] Num frames 3500... [2023-07-04 15:25:59,173][17091] Num frames 3600... [2023-07-04 15:25:59,295][17091] Num frames 3700... [2023-07-04 15:25:59,417][17091] Num frames 3800... [2023-07-04 15:25:59,535][17091] Num frames 3900... [2023-07-04 15:25:59,600][17091] Avg episode rewards: #0: 19.765, true rewards: #0: 9.765 [2023-07-04 15:25:59,601][17091] Avg episode reward: 19.765, avg true_objective: 9.765 [2023-07-04 15:25:59,714][17091] Num frames 4000... [2023-07-04 15:25:59,841][17091] Num frames 4100... [2023-07-04 15:25:59,964][17091] Num frames 4200... [2023-07-04 15:26:00,135][17091] Avg episode rewards: #0: 16.580, true rewards: #0: 8.580 [2023-07-04 15:26:00,137][17091] Avg episode reward: 16.580, avg true_objective: 8.580 [2023-07-04 15:26:00,152][17091] Num frames 4300... [2023-07-04 15:26:00,280][17091] Num frames 4400... [2023-07-04 15:26:00,407][17091] Num frames 4500... [2023-07-04 15:26:00,534][17091] Num frames 4600... [2023-07-04 15:26:00,655][17091] Num frames 4700... [2023-07-04 15:26:00,756][17091] Avg episode rewards: #0: 14.730, true rewards: #0: 7.897 [2023-07-04 15:26:00,758][17091] Avg episode reward: 14.730, avg true_objective: 7.897 [2023-07-04 15:26:00,834][17091] Num frames 4800... [2023-07-04 15:26:00,964][17091] Num frames 4900... [2023-07-04 15:26:01,087][17091] Num frames 5000... [2023-07-04 15:26:01,213][17091] Num frames 5100... [2023-07-04 15:26:01,336][17091] Num frames 5200... [2023-07-04 15:26:01,464][17091] Num frames 5300... [2023-07-04 15:26:01,583][17091] Num frames 5400... [2023-07-04 15:26:01,705][17091] Num frames 5500... [2023-07-04 15:26:01,833][17091] Num frames 5600... [2023-07-04 15:26:01,932][17091] Avg episode rewards: #0: 15.334, true rewards: #0: 8.049 [2023-07-04 15:26:01,934][17091] Avg episode reward: 15.334, avg true_objective: 8.049 [2023-07-04 15:26:02,019][17091] Num frames 5700... [2023-07-04 15:26:02,142][17091] Num frames 5800... [2023-07-04 15:26:02,264][17091] Num frames 5900... [2023-07-04 15:26:02,394][17091] Num frames 6000... [2023-07-04 15:26:02,512][17091] Num frames 6100... 
[2023-07-04 15:26:02,642][17091] Num frames 6200... [2023-07-04 15:26:02,768][17091] Num frames 6300... [2023-07-04 15:26:02,909][17091] Num frames 6400... [2023-07-04 15:26:02,970][17091] Avg episode rewards: #0: 15.253, true rewards: #0: 8.002 [2023-07-04 15:26:02,972][17091] Avg episode reward: 15.253, avg true_objective: 8.002 [2023-07-04 15:26:03,113][17091] Num frames 6500... [2023-07-04 15:26:03,236][17091] Num frames 6600... [2023-07-04 15:26:03,360][17091] Num frames 6700... [2023-07-04 15:26:03,462][17091] Avg episode rewards: #0: 14.153, true rewards: #0: 7.487 [2023-07-04 15:26:03,463][17091] Avg episode reward: 14.153, avg true_objective: 7.487 [2023-07-04 15:26:03,541][17091] Num frames 6800... [2023-07-04 15:26:03,717][17091] Num frames 6900... [2023-07-04 15:26:03,899][17091] Num frames 7000... [2023-07-04 15:26:04,077][17091] Num frames 7100... [2023-07-04 15:26:04,250][17091] Num frames 7200... [2023-07-04 15:26:04,422][17091] Num frames 7300... [2023-07-04 15:26:04,595][17091] Num frames 7400... [2023-07-04 15:26:04,771][17091] Num frames 7500... [2023-07-04 15:26:04,949][17091] Num frames 7600... [2023-07-04 15:26:05,135][17091] Num frames 7700... [2023-07-04 15:26:05,319][17091] Num frames 7800... [2023-07-04 15:26:05,491][17091] Num frames 7900... [2023-07-04 15:26:05,667][17091] Num frames 8000... [2023-07-04 15:26:05,757][17091] Avg episode rewards: #0: 15.318, true rewards: #0: 8.018 [2023-07-04 15:26:05,760][17091] Avg episode reward: 15.318, avg true_objective: 8.018 [2023-07-04 15:26:55,950][17091] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-07-04 15:26:55,980][17091] Environment doom_basic already registered, overwriting... [2023-07-04 15:26:55,986][17091] Environment doom_two_colors_easy already registered, overwriting... [2023-07-04 15:26:55,988][17091] Environment doom_two_colors_hard already registered, overwriting... [2023-07-04 15:26:55,991][17091] Environment doom_dm already registered, overwriting... [2023-07-04 15:26:55,992][17091] Environment doom_dwango5 already registered, overwriting... [2023-07-04 15:26:55,993][17091] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-07-04 15:26:55,997][17091] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-07-04 15:26:55,998][17091] Environment doom_my_way_home already registered, overwriting... [2023-07-04 15:26:55,999][17091] Environment doom_deadly_corridor already registered, overwriting... [2023-07-04 15:26:56,000][17091] Environment doom_defend_the_center already registered, overwriting... [2023-07-04 15:26:56,001][17091] Environment doom_defend_the_line already registered, overwriting... [2023-07-04 15:26:56,003][17091] Environment doom_health_gathering already registered, overwriting... [2023-07-04 15:26:56,004][17091] Environment doom_health_gathering_supreme already registered, overwriting... [2023-07-04 15:26:56,006][17091] Environment doom_battle already registered, overwriting... [2023-07-04 15:26:56,008][17091] Environment doom_battle2 already registered, overwriting... [2023-07-04 15:26:56,009][17091] Environment doom_duel_bots already registered, overwriting... [2023-07-04 15:26:56,011][17091] Environment doom_deathmatch_bots already registered, overwriting... [2023-07-04 15:26:56,012][17091] Environment doom_duel already registered, overwriting... [2023-07-04 15:26:56,013][17091] Environment doom_deathmatch_full already registered, overwriting... 
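The "Num frames ..." and "Avg episode rewards" entries above come from the evaluation run that plays max_num_episodes=10 episodes with the loaded policy, averages the episode rewards as it goes, and finally writes replay.mp4. A generic sketch of such an evaluation loop follows; it is not the enjoy script itself, and env and policy are placeholders following the usual five-tuple step API.

def evaluate(env, policy, num_episodes=10):
    """Play full episodes and report the running average reward after each one (illustrative sketch)."""
    episode_rewards = []
    total_frames = 0
    for _ in range(num_episodes):
        obs, info = env.reset()
        done, episode_reward = False, 0.0
        while not done:
            action = policy(obs)
            obs, reward, terminated, truncated, info = env.step(action)
            episode_reward += reward
            total_frames += 1
            done = terminated or truncated
        episode_rewards.append(episode_reward)
        avg = sum(episode_rewards) / len(episode_rewards)
        print(f"Num frames {total_frames}... Avg episode reward: {avg:.3f}")
    return episode_rewards

# Usage would look like: evaluate(make_doom_env(), trained_policy, num_episodes=10),
# where make_doom_env and trained_policy are whatever env factory and policy you have on hand.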
[2023-07-04 15:26:56,015][17091] Environment doom_benchmark already registered, overwriting... [2023-07-04 15:26:56,016][17091] register_encoder_factory: [2023-07-04 15:26:56,040][17091] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-07-04 15:26:56,053][17091] Experiment dir /content/train_dir/default_experiment already exists! [2023-07-04 15:26:56,055][17091] Resuming existing experiment from /content/train_dir/default_experiment... [2023-07-04 15:26:56,056][17091] Weights and Biases integration disabled [2023-07-04 15:26:56,061][17091] Environment var CUDA_VISIBLE_DEVICES is 0 [2023-07-04 15:26:57,668][17091] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=8 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 lr_adaptive_min=1e-06 lr_adaptive_max=0.01 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=4000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository [2023-07-04 15:26:57,670][17091] Saving configuration to /content/train_dir/default_experiment/config.json... [2023-07-04 15:26:57,673][17091] Rollout worker 0 uses device cpu [2023-07-04 15:26:57,675][17091] Rollout worker 1 uses device cpu [2023-07-04 15:26:57,677][17091] Rollout worker 2 uses device cpu [2023-07-04 15:26:57,678][17091] Rollout worker 3 uses device cpu [2023-07-04 15:26:57,679][17091] Rollout worker 4 uses device cpu [2023-07-04 15:26:57,681][17091] Rollout worker 5 uses device cpu [2023-07-04 15:26:57,683][17091] Rollout worker 6 uses device cpu [2023-07-04 15:26:57,684][17091] Rollout worker 7 uses device cpu [2023-07-04 15:26:57,792][17091] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:26:57,793][17091] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:26:57,824][17091] Starting all processes... [2023-07-04 15:26:57,825][17091] Starting process learner_proc0 [2023-07-04 15:26:57,878][17091] Starting all processes... [2023-07-04 15:26:57,883][17091] Starting process inference_proc0-0 [2023-07-04 15:26:57,884][17091] Starting process rollout_proc0 [2023-07-04 15:26:57,885][17091] Starting process rollout_proc1 [2023-07-04 15:26:57,885][17091] Starting process rollout_proc2 [2023-07-04 15:26:57,885][17091] Starting process rollout_proc3 [2023-07-04 15:26:57,885][17091] Starting process rollout_proc4 [2023-07-04 15:26:57,885][17091] Starting process rollout_proc5 [2023-07-04 15:26:57,885][17091] Starting process rollout_proc6 [2023-07-04 15:26:57,885][17091] Starting process rollout_proc7 [2023-07-04 15:27:07,723][17772] Worker 7 uses CPU cores [1] [2023-07-04 15:27:07,994][17752] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:27:07,995][17752] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-07-04 15:27:08,047][17769] Worker 1 uses CPU cores [1] [2023-07-04 15:27:08,077][17752] Num visible devices: 1 [2023-07-04 15:27:08,099][17752] Starting seed is not provided [2023-07-04 15:27:08,100][17752] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:27:08,103][17752] Initializing actor-critic model on device cuda:0 [2023-07-04 15:27:08,103][17752] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:27:08,104][17752] RunningMeanStd input shape: (1,) [2023-07-04 15:27:08,245][17752] ConvEncoder: input_channels=3 [2023-07-04 15:27:08,494][17765] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:27:08,496][17771] Worker 6 uses CPU cores [0] [2023-07-04 15:27:08,497][17765] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-07-04 15:27:08,553][17765] Num visible devices: 1 [2023-07-04 15:27:08,605][17768] Worker 3 uses CPU cores [1] [2023-07-04 15:27:08,639][17766] Worker 0 uses CPU cores [0] [2023-07-04 15:27:08,641][17770] Worker 4 uses CPU cores [0] [2023-07-04 15:27:08,647][17767] Worker 2 uses CPU cores [0] [2023-07-04 15:27:08,690][17773] Worker 5 uses CPU cores [1] [2023-07-04 15:27:08,762][17752] Conv encoder output size: 512 [2023-07-04 15:27:08,763][17752] Policy head output size: 512 [2023-07-04 15:27:08,787][17752] Created Actor Critic model with architecture: [2023-07-04 15:27:08,788][17752] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): 
RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) [2023-07-04 15:27:10,523][17752] Using optimizer [2023-07-04 15:27:10,525][17752] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000230_942080.pth... [2023-07-04 15:27:10,566][17752] Loading model from checkpoint [2023-07-04 15:27:10,572][17752] Loaded experiment state at self.train_step=230, self.env_steps=942080 [2023-07-04 15:27:10,573][17752] Initialized policy 0 weights for model version 230 [2023-07-04 15:27:10,583][17752] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:27:10,590][17752] LearnerWorker_p0 finished initialization! [2023-07-04 15:27:10,791][17765] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:27:10,792][17765] RunningMeanStd input shape: (1,) [2023-07-04 15:27:10,824][17765] ConvEncoder: input_channels=3 [2023-07-04 15:27:10,989][17765] Conv encoder output size: 512 [2023-07-04 15:27:10,990][17765] Policy head output size: 512 [2023-07-04 15:27:11,061][17091] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 942080. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:27:12,899][17091] Inference worker 0-0 is ready! [2023-07-04 15:27:12,902][17091] All inference workers are ready! Signal rollout workers to start! [2023-07-04 15:27:13,041][17768] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:13,039][17773] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:13,051][17769] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:13,054][17772] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:13,107][17767] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:13,096][17766] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:13,116][17770] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:13,121][17771] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:27:14,009][17767] Decorrelating experience for 0 frames... [2023-07-04 15:27:14,012][17766] Decorrelating experience for 0 frames... [2023-07-04 15:27:14,426][17770] Decorrelating experience for 0 frames... [2023-07-04 15:27:14,724][17773] Decorrelating experience for 0 frames... [2023-07-04 15:27:14,748][17772] Decorrelating experience for 0 frames... [2023-07-04 15:27:14,754][17768] Decorrelating experience for 0 frames... 
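The ActorCriticSharedWeights summary printed above (three Conv2d+ELU stages, a Linear+ELU projection to 512 features, a GRU(512, 512) core, a 1-unit critic head and a 5-logit action head) can be reproduced in isolation with the sketch below. The convolution kernel sizes and channel counts are assumptions in the spirit of the convnet_simple encoder named in the config; only the layer types and the 512/1/5 widths come from the log.

    import torch
    from torch import nn

    class DoomActorCriticSketch(nn.Module):
        """Stand-alone approximation of the logged ActorCriticSharedWeights model."""

        def __init__(self, num_actions: int = 5, rnn_size: int = 512):
            super().__init__()
            # conv_head: three Conv2d + ELU stages, as in the printed architecture.
            # Kernel/stride/channel values are assumed, not taken from the log.
            self.conv_head = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
            )
            with torch.no_grad():
                conv_out = self.conv_head(torch.zeros(1, 3, 72, 128)).flatten(1).shape[1]
            # mlp_layers: Linear -> ELU down to the 512-dim encoder output from the log.
            self.mlp = nn.Sequential(nn.Linear(conv_out, 512), nn.ELU())
            self.core = nn.GRU(512, rnn_size)                       # core: GRU(512, 512)
            self.critic_linear = nn.Linear(rnn_size, 1)             # value head
            self.action_logits = nn.Linear(rnn_size, num_actions)   # 5 discrete actions

        def forward(self, obs, rnn_state=None):
            x = self.mlp(self.conv_head(obs).flatten(1))
            x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
            x = x.squeeze(0)
            return self.action_logits(x), self.critic_linear(x), rnn_state

    if __name__ == "__main__":
        model = DoomActorCriticSketch()
        logits, value, _ = model(torch.zeros(4, 3, 72, 128))  # logged obs shape (3, 72, 128)
        print(logits.shape, value.shape)  # torch.Size([4, 5]) torch.Size([4, 1])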
[2023-07-04 15:27:14,751][17769] Decorrelating experience for 0 frames... [2023-07-04 15:27:14,926][17770] Decorrelating experience for 32 frames... [2023-07-04 15:27:16,047][17771] Decorrelating experience for 0 frames... [2023-07-04 15:27:16,061][17091] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 942080. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:27:16,087][17767] Decorrelating experience for 32 frames... [2023-07-04 15:27:16,085][17766] Decorrelating experience for 32 frames... [2023-07-04 15:27:16,290][17769] Decorrelating experience for 32 frames... [2023-07-04 15:27:16,299][17768] Decorrelating experience for 32 frames... [2023-07-04 15:27:16,302][17772] Decorrelating experience for 32 frames... [2023-07-04 15:27:16,609][17773] Decorrelating experience for 32 frames... [2023-07-04 15:27:17,269][17771] Decorrelating experience for 32 frames... [2023-07-04 15:27:17,477][17772] Decorrelating experience for 64 frames... [2023-07-04 15:27:17,481][17768] Decorrelating experience for 64 frames... [2023-07-04 15:27:17,490][17767] Decorrelating experience for 64 frames... [2023-07-04 15:27:17,503][17766] Decorrelating experience for 64 frames... [2023-07-04 15:27:17,785][17091] Heartbeat connected on Batcher_0 [2023-07-04 15:27:17,790][17091] Heartbeat connected on LearnerWorker_p0 [2023-07-04 15:27:17,838][17091] Heartbeat connected on InferenceWorker_p0-w0 [2023-07-04 15:27:18,404][17771] Decorrelating experience for 64 frames... [2023-07-04 15:27:18,423][17773] Decorrelating experience for 64 frames... [2023-07-04 15:27:18,428][17769] Decorrelating experience for 64 frames... [2023-07-04 15:27:18,458][17767] Decorrelating experience for 96 frames... [2023-07-04 15:27:18,669][17091] Heartbeat connected on RolloutWorker_w2 [2023-07-04 15:27:19,771][17772] Decorrelating experience for 96 frames... [2023-07-04 15:27:19,776][17768] Decorrelating experience for 96 frames... [2023-07-04 15:27:19,988][17769] Decorrelating experience for 96 frames... [2023-07-04 15:27:20,168][17091] Heartbeat connected on RolloutWorker_w7 [2023-07-04 15:27:20,206][17091] Heartbeat connected on RolloutWorker_w3 [2023-07-04 15:27:20,580][17091] Heartbeat connected on RolloutWorker_w1 [2023-07-04 15:27:21,061][17091] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 942080. Throughput: 0: 1.2. Samples: 12. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:27:21,063][17091] Avg episode reward: [(0, '0.640')] [2023-07-04 15:27:21,111][17770] Decorrelating experience for 64 frames... [2023-07-04 15:27:21,245][17771] Decorrelating experience for 96 frames... [2023-07-04 15:27:21,585][17091] Heartbeat connected on RolloutWorker_w6 [2023-07-04 15:27:22,697][17773] Decorrelating experience for 96 frames... [2023-07-04 15:27:23,487][17091] Heartbeat connected on RolloutWorker_w5 [2023-07-04 15:27:24,202][17752] Signal inference workers to stop experience collection... [2023-07-04 15:27:24,224][17765] InferenceWorker_p0-w0: stopping experience collection [2023-07-04 15:27:24,569][17770] Decorrelating experience for 96 frames... [2023-07-04 15:27:24,775][17091] Heartbeat connected on RolloutWorker_w4 [2023-07-04 15:27:24,918][17766] Decorrelating experience for 96 frames... [2023-07-04 15:27:25,019][17752] Signal inference workers to resume experience collection... 
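The "Decorrelating experience for 0/32/64/96 frames" messages above correspond to a warm-up phase in which each environment is stepped a different number of frames before regular collection starts, so that episodes across workers do not stay in lockstep. A minimal illustration of that idea, assuming Gymnasium-style environments (this is not Sample Factory's own implementation; the offsets simply mirror the multiples of the 32-step rollout seen in the log):

    def decorrelate(envs, rollout_len=32):
        """Warm up each env with a different number of random-action steps
        (0, 32, 64, 96, ... as in the log) before synchronized rollout collection."""
        for env_index, env in enumerate(envs):
            env.reset()
            for _ in range(env_index * rollout_len):
                obs, rew, terminated, truncated, info = env.step(env.action_space.sample())
                if terminated or truncated:
                    env.reset()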
[2023-07-04 15:27:25,020][17765] InferenceWorker_p0-w0: resuming experience collection [2023-07-04 15:27:25,290][17091] Heartbeat connected on RolloutWorker_w0 [2023-07-04 15:27:26,061][17091] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 946176. Throughput: 0: 147.1. Samples: 2206. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) [2023-07-04 15:27:26,066][17091] Avg episode reward: [(0, '3.416')] [2023-07-04 15:27:31,061][17091] Fps is (10 sec: 2048.0, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 962560. Throughput: 0: 273.1. Samples: 5462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:27:31,068][17091] Avg episode reward: [(0, '6.261')] [2023-07-04 15:27:36,061][17091] Fps is (10 sec: 2867.2, 60 sec: 1310.7, 300 sec: 1310.7). Total num frames: 974848. Throughput: 0: 303.0. Samples: 7576. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:27:36,071][17091] Avg episode reward: [(0, '10.258')] [2023-07-04 15:27:37,196][17765] Updated weights for policy 0, policy_version 240 (0.0036) [2023-07-04 15:27:41,061][17091] Fps is (10 sec: 3686.4, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 999424. Throughput: 0: 460.5. Samples: 13816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:27:41,069][17091] Avg episode reward: [(0, '10.577')] [2023-07-04 15:27:46,061][17091] Fps is (10 sec: 4096.0, 60 sec: 2106.5, 300 sec: 2106.5). Total num frames: 1015808. Throughput: 0: 558.4. Samples: 19544. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:27:46,063][17091] Avg episode reward: [(0, '11.373')] [2023-07-04 15:27:47,738][17765] Updated weights for policy 0, policy_version 250 (0.0015) [2023-07-04 15:27:51,061][17091] Fps is (10 sec: 3276.8, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 1032192. Throughput: 0: 541.8. Samples: 21670. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:27:51,063][17091] Avg episode reward: [(0, '12.366')] [2023-07-04 15:27:51,071][17752] Saving new best policy, reward=12.366! [2023-07-04 15:27:56,061][17091] Fps is (10 sec: 2867.2, 60 sec: 2275.6, 300 sec: 2275.6). Total num frames: 1044480. Throughput: 0: 577.3. Samples: 25978. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:27:56,068][17091] Avg episode reward: [(0, '11.984')] [2023-07-04 15:28:01,063][17091] Fps is (10 sec: 2866.5, 60 sec: 2375.6, 300 sec: 2375.6). Total num frames: 1060864. Throughput: 0: 687.3. Samples: 30928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:28:01,070][17091] Avg episode reward: [(0, '12.085')] [2023-07-04 15:28:01,385][17765] Updated weights for policy 0, policy_version 260 (0.0013) [2023-07-04 15:28:06,061][17091] Fps is (10 sec: 3686.4, 60 sec: 2532.1, 300 sec: 2532.1). Total num frames: 1081344. Throughput: 0: 758.7. Samples: 34154. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:28:06,063][17091] Avg episode reward: [(0, '12.578')] [2023-07-04 15:28:06,071][17752] Saving new best policy, reward=12.578! [2023-07-04 15:28:11,061][17091] Fps is (10 sec: 3277.5, 60 sec: 2525.9, 300 sec: 2525.9). Total num frames: 1093632. Throughput: 0: 790.4. Samples: 37776. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:28:11,063][17091] Avg episode reward: [(0, '12.210')] [2023-07-04 15:28:14,024][17091] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 17091], exiting... [2023-07-04 15:28:14,026][17091] Runner profile tree view: main_loop: 76.2025 [2023-07-04 15:28:14,028][17752] Stopping Batcher_0... 
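The "Saving new best policy, reward=12.366!" lines are driven by the save_best_metric, save_best_after and save_best_every_sec settings in the config dump. A minimal sketch of that bookkeeping, assuming the caller supplies the actual checkpoint-writing function (illustrative only, not the LearnerWorker code):

    import time

    class BestPolicySaver:
        """Save a 'best' checkpoint when the tracked metric improves, throttled the way
        the logged config describes (save_best_after=100000, save_best_every_sec=5)."""

        def __init__(self, save_fn, save_best_after=100_000, save_best_every_sec=5.0):
            self.save_fn = save_fn                      # callable(reward) that writes best_*.pth
            self.save_best_after = save_best_after      # min env steps before best-saving starts
            self.save_best_every_sec = save_best_every_sec
            self.best_reward = float("-inf")
            self.last_save = 0.0

        def maybe_save(self, avg_reward, env_steps):
            now = time.time()
            if (env_steps >= self.save_best_after
                    and avg_reward > self.best_reward
                    and now - self.last_save >= self.save_best_every_sec):
                self.best_reward = avg_reward
                self.last_save = now
                self.save_fn(avg_reward)
                print(f"Saving new best policy, reward={avg_reward:.3f}!")
                return True
            return False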
[2023-07-04 15:28:14,030][17752] Loop batcher_evt_loop terminating... [2023-07-04 15:28:14,028][17091] Collected {0: 1101824}, FPS: 2096.3 [2023-07-04 15:28:14,030][17752] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000269_1101824.pth... [2023-07-04 15:28:14,048][17768] EvtLoop [rollout_proc3_evt_loop, process=rollout_proc3] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance3'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 117, in step obs, info["reset_info"] = self.env.reset() File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset return self.env.reset(**kwargs) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 462, in reset obs, info = self.env.reset(seed=seed, options=options) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 82, in reset obs, info = self.env.reset(**kwargs) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 414, in reset return self.env.reset(seed=seed, options=options) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset return self.env.reset(**kwargs) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 346, in reset self.game.new_episode() vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,144][17768] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc3_evt_loop [2023-07-04 15:28:14,156][17765] Weights refcount: 2 0 [2023-07-04 15:28:14,086][17773] EvtLoop [rollout_proc5_evt_loop, process=rollout_proc5] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance5'), args=(0, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 469, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,176][17765] Stopping InferenceWorker_p0-w0... [2023-07-04 15:28:14,178][17765] Loop inference_proc0-0_evt_loop terminating... 
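The final summary of this interrupted run ("Collected {0: 1101824}, FPS: 2096.3" with "main_loop: 76.2025") is consistent with the checkpoint it resumed from at 942,080 env steps; the quick check below reproduces the reported FPS from those three logged numbers.

    start_frames = 942_080        # env_steps restored from checkpoint_000000230_942080.pth
    end_frames = 1_101_824        # "Collected {0: 1101824}"
    main_loop_seconds = 76.2025   # "Runner profile tree view: main_loop: 76.2025"

    fps = (end_frames - start_frames) / main_loop_seconds
    print(f"{fps:.1f}")           # 2096.3, matching the logged "FPS: 2096.3"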
[2023-07-04 15:28:14,179][17767] EvtLoop [rollout_proc2_evt_loop, process=rollout_proc2] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance2'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 469, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,164][17773] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc5_evt_loop [2023-07-04 15:28:14,107][17769] EvtLoop [rollout_proc1_evt_loop, process=rollout_proc1] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance1'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 469, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,214][17769] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc1_evt_loop [2023-07-04 15:28:14,203][17770] EvtLoop [rollout_proc4_evt_loop, process=rollout_proc4] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance4'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 469, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,215][17770] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc4_evt_loop [2023-07-04 15:28:14,146][17772] EvtLoop [rollout_proc7_evt_loop, process=rollout_proc7] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance7'), args=(0, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 469, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,220][17772] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc7_evt_loop [2023-07-04 15:28:14,208][17766] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance0'), args=(0, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 469, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,226][17766] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc0_evt_loop [2023-07-04 15:28:14,181][17767] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc2_evt_loop [2023-07-04 15:28:14,352][17771] EvtLoop [rollout_proc6_evt_loop, process=rollout_proc6] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance6'), args=(1, 0) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal slot_callable(*args) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts new_obs, rewards, terminated, truncated, infos = e.step(actions) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step obs, rew, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 469, in step observation, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 408, in step return self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step obs, reward, terminated, truncated, info = self.env.step(action) File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step reward = self.game.make_action(actions_flattened, self.skip_frames) vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. [2023-07-04 15:28:14,354][17771] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc6_evt_loop [2023-07-04 15:28:14,472][17752] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000178_729088.pth [2023-07-04 15:28:14,514][17752] Stopping LearnerWorker_p0... [2023-07-04 15:28:14,523][17752] Loop learner_proc0_evt_loop terminating... [2023-07-04 15:31:19,830][18333] Saving configuration to /content/train_dir/default_experiment/config.json... 
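At 15:31 a new runner process (pid 18333) starts and resumes the same experiment. One hedged way to reproduce such a restart is to re-issue the command_line recorded in the configuration dump earlier; the sf_examples.vizdoom.train_vizdoom entry point below is an assumption (the standard ViZDoom training script shipped with Sample Factory 2), while the flags mirror keys that appear in the logged config.

    import subprocess
    import sys

    args = [
        sys.executable, "-m", "sf_examples.vizdoom.train_vizdoom",  # assumed entry point
        "--env=doom_health_gathering_supreme",      # from the logged command_line
        "--num_workers=8",
        "--num_envs_per_worker=4",
        "--train_for_env_steps=4000000",
        "--experiment=default_experiment",          # mirrors experiment=... in the config
        "--train_dir=/content/train_dir",
        "--restart_behavior=resume",                # mirrors restart_behavior=resume
    ]
    subprocess.run(args, check=True)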
[2023-07-04 15:31:19,837][18333] Rollout worker 0 uses device cpu [2023-07-04 15:31:19,840][18333] Rollout worker 1 uses device cpu [2023-07-04 15:31:19,841][18333] Rollout worker 2 uses device cpu [2023-07-04 15:31:19,847][18333] Rollout worker 3 uses device cpu [2023-07-04 15:31:19,848][18333] Rollout worker 4 uses device cpu [2023-07-04 15:31:19,851][18333] Rollout worker 5 uses device cpu [2023-07-04 15:31:19,852][18333] Rollout worker 6 uses device cpu [2023-07-04 15:31:19,854][18333] Rollout worker 7 uses device cpu [2023-07-04 15:31:19,960][18333] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:31:19,961][18333] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:31:19,995][18333] Starting all processes... [2023-07-04 15:31:19,996][18333] Starting process learner_proc0 [2023-07-04 15:31:20,045][18333] Starting all processes... [2023-07-04 15:31:20,054][18333] Starting process inference_proc0-0 [2023-07-04 15:31:20,054][18333] Starting process rollout_proc0 [2023-07-04 15:31:20,058][18333] Starting process rollout_proc1 [2023-07-04 15:31:20,058][18333] Starting process rollout_proc2 [2023-07-04 15:31:20,058][18333] Starting process rollout_proc3 [2023-07-04 15:31:20,058][18333] Starting process rollout_proc4 [2023-07-04 15:31:20,058][18333] Starting process rollout_proc5 [2023-07-04 15:31:20,058][18333] Starting process rollout_proc6 [2023-07-04 15:31:20,058][18333] Starting process rollout_proc7 [2023-07-04 15:31:32,774][19223] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:31:32,775][19223] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-07-04 15:31:32,801][19226] Worker 1 uses CPU cores [1] [2023-07-04 15:31:32,812][19223] Num visible devices: 1 [2023-07-04 15:31:32,868][19230] Worker 6 uses CPU cores [0] [2023-07-04 15:31:32,987][19210] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:31:32,987][19210] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-07-04 15:31:32,996][19225] Worker 2 uses CPU cores [0] [2023-07-04 15:31:33,026][19210] Num visible devices: 1 [2023-07-04 15:31:33,069][19210] Starting seed is not provided [2023-07-04 15:31:33,070][19210] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:31:33,070][19210] Initializing actor-critic model on device cuda:0 [2023-07-04 15:31:33,071][19210] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:31:33,073][19210] RunningMeanStd input shape: (1,) [2023-07-04 15:31:33,091][19224] Worker 0 uses CPU cores [0] [2023-07-04 15:31:33,115][19229] Worker 5 uses CPU cores [1] [2023-07-04 15:31:33,120][19210] ConvEncoder: input_channels=3 [2023-07-04 15:31:33,153][19228] Worker 4 uses CPU cores [0] [2023-07-04 15:31:33,159][19227] Worker 3 uses CPU cores [1] [2023-07-04 15:31:33,216][19231] Worker 7 uses CPU cores [1] [2023-07-04 15:31:33,286][19210] Conv encoder output size: 512 [2023-07-04 15:31:33,286][19210] Policy head output size: 512 [2023-07-04 15:31:33,301][19210] Created Actor Critic model with architecture: [2023-07-04 15:31:33,302][19210] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): 
RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) [2023-07-04 15:31:34,700][19210] Using optimizer [2023-07-04 15:31:34,702][19210] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000269_1101824.pth... [2023-07-04 15:31:34,732][19210] Loading model from checkpoint [2023-07-04 15:31:34,737][19210] Loaded experiment state at self.train_step=269, self.env_steps=1101824 [2023-07-04 15:31:34,737][19210] Initialized policy 0 weights for model version 269 [2023-07-04 15:31:34,740][19210] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:31:34,750][19210] LearnerWorker_p0 finished initialization! [2023-07-04 15:31:34,940][19223] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:31:34,941][19223] RunningMeanStd input shape: (1,) [2023-07-04 15:31:34,955][19223] ConvEncoder: input_channels=3 [2023-07-04 15:31:35,054][19223] Conv encoder output size: 512 [2023-07-04 15:31:35,054][19223] Policy head output size: 512 [2023-07-04 15:31:36,242][18333] Inference worker 0-0 is ready! [2023-07-04 15:31:36,245][18333] All inference workers are ready! Signal rollout workers to start! [2023-07-04 15:31:36,341][19228] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:36,344][19230] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:36,345][19225] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:36,347][19224] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:36,342][19226] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:36,347][19229] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:36,352][19227] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:36,349][19231] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:31:37,151][18333] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 1101824. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:31:37,664][19226] Decorrelating experience for 0 frames... [2023-07-04 15:31:37,666][19227] Decorrelating experience for 0 frames... [2023-07-04 15:31:37,671][19231] Decorrelating experience for 0 frames... [2023-07-04 15:31:37,680][19228] Decorrelating experience for 0 frames... [2023-07-04 15:31:37,683][19225] Decorrelating experience for 0 frames... [2023-07-04 15:31:37,689][19230] Decorrelating experience for 0 frames... [2023-07-04 15:31:38,847][19231] Decorrelating experience for 32 frames... [2023-07-04 15:31:38,854][19227] Decorrelating experience for 32 frames... [2023-07-04 15:31:38,865][19224] Decorrelating experience for 0 frames... [2023-07-04 15:31:38,870][19225] Decorrelating experience for 32 frames... 
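Before training resumes, the learner restores checkpoint_000000269_1101824.pth and reports train_step=269, env_steps=1101824. The checkpoint can be inspected directly as below; the path is taken from the log, while the dictionary keys are assumptions consistent with the "Loaded experiment state at self.train_step=..., self.env_steps=..." message.

    import torch

    ckpt_path = ("/content/train_dir/default_experiment/"
                 "checkpoint_p0/checkpoint_000000269_1101824.pth")
    state = torch.load(ckpt_path, map_location="cpu")

    print(sorted(state.keys()))                              # model weights plus training counters
    print(state.get("train_step"), state.get("env_steps"))   # expected: 269 1101824 (assumed key names)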
[2023-07-04 15:31:38,880][19230] Decorrelating experience for 32 frames... [2023-07-04 15:31:38,940][19226] Decorrelating experience for 32 frames... [2023-07-04 15:31:39,953][18333] Heartbeat connected on Batcher_0 [2023-07-04 15:31:39,959][18333] Heartbeat connected on LearnerWorker_p0 [2023-07-04 15:31:39,993][18333] Heartbeat connected on InferenceWorker_p0-w0 [2023-07-04 15:31:40,176][19228] Decorrelating experience for 32 frames... [2023-07-04 15:31:40,218][19229] Decorrelating experience for 0 frames... [2023-07-04 15:31:40,247][19224] Decorrelating experience for 32 frames... [2023-07-04 15:31:40,345][19231] Decorrelating experience for 64 frames... [2023-07-04 15:31:40,356][19227] Decorrelating experience for 64 frames... [2023-07-04 15:31:40,422][19230] Decorrelating experience for 64 frames... [2023-07-04 15:31:41,690][19229] Decorrelating experience for 32 frames... [2023-07-04 15:31:41,772][19225] Decorrelating experience for 64 frames... [2023-07-04 15:31:41,770][19226] Decorrelating experience for 64 frames... [2023-07-04 15:31:41,998][19228] Decorrelating experience for 64 frames... [2023-07-04 15:31:42,029][19231] Decorrelating experience for 96 frames... [2023-07-04 15:31:42,036][19224] Decorrelating experience for 64 frames... [2023-07-04 15:31:42,151][18333] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 1101824. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:31:42,258][18333] Heartbeat connected on RolloutWorker_w7 [2023-07-04 15:31:43,968][19230] Decorrelating experience for 96 frames... [2023-07-04 15:31:44,168][19225] Decorrelating experience for 96 frames... [2023-07-04 15:31:44,307][18333] Heartbeat connected on RolloutWorker_w6 [2023-07-04 15:31:44,411][19228] Decorrelating experience for 96 frames... [2023-07-04 15:31:44,453][18333] Heartbeat connected on RolloutWorker_w2 [2023-07-04 15:31:44,878][18333] Heartbeat connected on RolloutWorker_w4 [2023-07-04 15:31:45,100][19226] Decorrelating experience for 96 frames... [2023-07-04 15:31:45,331][19227] Decorrelating experience for 96 frames... [2023-07-04 15:31:45,724][18333] Heartbeat connected on RolloutWorker_w1 [2023-07-04 15:31:45,940][18333] Heartbeat connected on RolloutWorker_w3 [2023-07-04 15:31:47,151][18333] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 1101824. Throughput: 0: 54.0. Samples: 540. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:31:47,157][18333] Avg episode reward: [(0, '3.234')] [2023-07-04 15:31:49,734][19224] Decorrelating experience for 96 frames... [2023-07-04 15:31:50,260][19210] Signal inference workers to stop experience collection... [2023-07-04 15:31:50,298][19223] InferenceWorker_p0-w0: stopping experience collection [2023-07-04 15:31:50,371][18333] Heartbeat connected on RolloutWorker_w0 [2023-07-04 15:31:50,407][19229] Decorrelating experience for 64 frames... [2023-07-04 15:31:50,442][19210] Signal inference workers to resume experience collection... [2023-07-04 15:31:50,444][19223] InferenceWorker_p0-w0: resuming experience collection [2023-07-04 15:31:52,151][18333] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 1105920. Throughput: 0: 187.9. Samples: 2818. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) [2023-07-04 15:31:52,157][18333] Avg episode reward: [(0, '4.641')] [2023-07-04 15:31:53,352][19229] Decorrelating experience for 96 frames... 
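The recurring "Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)" lines report throughput over three trailing windows, and show nan until enough samples have accumulated. A small illustration of that kind of windowed rate tracking (not the library's reporting code):

    import time
    from collections import deque

    class FpsTracker:
        """Keep recent (timestamp, total_frames) samples and report frames/sec
        over several trailing windows, e.g. 10, 60 and 300 seconds."""

        def __init__(self, windows=(10, 60, 300)):
            self.windows = windows
            self.samples = deque(maxlen=1000)

        def record(self, total_frames):
            self.samples.append((time.time(), total_frames))

        def fps(self):
            now, latest = self.samples[-1]
            rates = {}
            for window in self.windows:
                # oldest sample that still falls inside this window
                t0, f0 = next(((t, f) for t, f in self.samples if now - t <= window),
                              (now, latest))
                elapsed = now - t0
                rates[window] = (latest - f0) / elapsed if elapsed > 0 else float("nan")
            return rates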
[2023-07-04 15:31:53,807][18333] Heartbeat connected on RolloutWorker_w5 [2023-07-04 15:31:57,151][18333] Fps is (10 sec: 2457.6, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 1126400. Throughput: 0: 210.9. Samples: 4218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:31:57,154][18333] Avg episode reward: [(0, '6.976')] [2023-07-04 15:32:00,231][19223] Updated weights for policy 0, policy_version 279 (0.0013) [2023-07-04 15:32:02,151][18333] Fps is (10 sec: 4505.7, 60 sec: 1966.1, 300 sec: 1966.1). Total num frames: 1150976. Throughput: 0: 431.2. Samples: 10780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:32:02,162][18333] Avg episode reward: [(0, '9.265')] [2023-07-04 15:32:07,154][18333] Fps is (10 sec: 4095.1, 60 sec: 2184.4, 300 sec: 2184.4). Total num frames: 1167360. Throughput: 0: 552.8. Samples: 16584. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:32:07,158][18333] Avg episode reward: [(0, '10.576')] [2023-07-04 15:32:12,151][18333] Fps is (10 sec: 2867.2, 60 sec: 2223.5, 300 sec: 2223.5). Total num frames: 1179648. Throughput: 0: 537.8. Samples: 18822. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:32:12,156][18333] Avg episode reward: [(0, '11.995')] [2023-07-04 15:32:12,759][19223] Updated weights for policy 0, policy_version 289 (0.0017) [2023-07-04 15:32:17,151][18333] Fps is (10 sec: 2867.8, 60 sec: 2355.2, 300 sec: 2355.2). Total num frames: 1196032. Throughput: 0: 573.0. Samples: 22922. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:32:17,159][18333] Avg episode reward: [(0, '12.654')] [2023-07-04 15:32:17,169][19210] Saving new best policy, reward=12.654! [2023-07-04 15:32:22,151][18333] Fps is (10 sec: 3686.4, 60 sec: 2548.6, 300 sec: 2548.6). Total num frames: 1216512. Throughput: 0: 643.2. Samples: 28944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:32:22,154][18333] Avg episode reward: [(0, '13.320')] [2023-07-04 15:32:22,162][19210] Saving new best policy, reward=13.320! [2023-07-04 15:32:23,450][19223] Updated weights for policy 0, policy_version 299 (0.0015) [2023-07-04 15:32:27,152][18333] Fps is (10 sec: 4095.6, 60 sec: 2703.3, 300 sec: 2703.3). Total num frames: 1236992. Throughput: 0: 716.5. Samples: 32242. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:32:27,156][18333] Avg episode reward: [(0, '13.831')] [2023-07-04 15:32:27,169][19210] Saving new best policy, reward=13.831! [2023-07-04 15:32:32,151][18333] Fps is (10 sec: 3686.4, 60 sec: 2755.5, 300 sec: 2755.5). Total num frames: 1253376. Throughput: 0: 815.7. Samples: 37246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:32:32,156][18333] Avg episode reward: [(0, '13.254')] [2023-07-04 15:32:36,204][19223] Updated weights for policy 0, policy_version 309 (0.0013) [2023-07-04 15:32:37,151][18333] Fps is (10 sec: 2867.4, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 1265664. Throughput: 0: 859.3. Samples: 41488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:32:37,156][18333] Avg episode reward: [(0, '13.491')] [2023-07-04 15:32:42,151][18333] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 2835.7). Total num frames: 1286144. Throughput: 0: 879.4. Samples: 43792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:32:42,154][18333] Avg episode reward: [(0, '13.411')] [2023-07-04 15:32:46,338][19223] Updated weights for policy 0, policy_version 319 (0.0015) [2023-07-04 15:32:47,151][18333] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 2925.7). 
Total num frames: 1306624. Throughput: 0: 885.5. Samples: 50626. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:32:47,160][18333] Avg episode reward: [(0, '15.000')] [2023-07-04 15:32:47,175][19210] Saving new best policy, reward=15.000! [2023-07-04 15:32:52,151][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 2949.1). Total num frames: 1323008. Throughput: 0: 878.5. Samples: 56116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:32:52,158][18333] Avg episode reward: [(0, '15.838')] [2023-07-04 15:32:52,162][19210] Saving new best policy, reward=15.838! [2023-07-04 15:32:57,151][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 2969.6). Total num frames: 1339392. Throughput: 0: 874.5. Samples: 58176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:32:57,157][18333] Avg episode reward: [(0, '17.057')] [2023-07-04 15:32:57,171][19210] Saving new best policy, reward=17.057! [2023-07-04 15:32:59,684][19223] Updated weights for policy 0, policy_version 329 (0.0018) [2023-07-04 15:33:02,151][18333] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2987.7). Total num frames: 1355776. Throughput: 0: 877.6. Samples: 62414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:33:02,153][18333] Avg episode reward: [(0, '17.002')] [2023-07-04 15:33:07,151][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3049.2). Total num frames: 1376256. Throughput: 0: 886.1. Samples: 68818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:33:07,156][18333] Avg episode reward: [(0, '17.251')] [2023-07-04 15:33:07,164][19210] Saving new best policy, reward=17.251! [2023-07-04 15:33:09,382][19223] Updated weights for policy 0, policy_version 339 (0.0020) [2023-07-04 15:33:12,151][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3104.3). Total num frames: 1396736. Throughput: 0: 886.5. Samples: 72134. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:33:12,154][18333] Avg episode reward: [(0, '16.180')] [2023-07-04 15:33:17,151][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3113.0). Total num frames: 1413120. Throughput: 0: 885.9. Samples: 77112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:33:17,154][18333] Avg episode reward: [(0, '16.009')] [2023-07-04 15:33:17,167][19210] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000345_1413120.pth... [2023-07-04 15:33:17,357][19210] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000230_942080.pth [2023-07-04 15:33:22,151][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3081.8). Total num frames: 1425408. Throughput: 0: 885.7. Samples: 81346. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:33:22,154][18333] Avg episode reward: [(0, '14.982')] [2023-07-04 15:33:22,901][19223] Updated weights for policy 0, policy_version 349 (0.0027) [2023-07-04 15:33:27,151][18333] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3127.9). Total num frames: 1445888. Throughput: 0: 888.8. Samples: 83788. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:33:27,154][18333] Avg episode reward: [(0, '14.305')] [2023-07-04 15:33:32,151][18333] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3169.9). Total num frames: 1466368. Throughput: 0: 885.6. Samples: 90476. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:33:32,154][18333] Avg episode reward: [(0, '14.218')] [2023-07-04 15:33:32,349][19223] Updated weights for policy 0, policy_version 359 (0.0012) [2023-07-04 15:33:37,151][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3174.4). Total num frames: 1482752. Throughput: 0: 885.8. Samples: 95978. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:33:37,159][18333] Avg episode reward: [(0, '15.433')] [2023-07-04 15:33:42,152][18333] Fps is (10 sec: 3276.5, 60 sec: 3549.8, 300 sec: 3178.5). Total num frames: 1499136. Throughput: 0: 887.4. Samples: 98110. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:33:42,159][18333] Avg episode reward: [(0, '16.282')] [2023-07-04 15:33:45,776][19223] Updated weights for policy 0, policy_version 369 (0.0025) [2023-07-04 15:33:47,151][18333] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3182.3). Total num frames: 1515520. Throughput: 0: 890.3. Samples: 102478. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:33:47,158][18333] Avg episode reward: [(0, '17.297')] [2023-07-04 15:33:47,169][19210] Saving new best policy, reward=17.297! [2023-07-04 15:33:52,151][18333] Fps is (10 sec: 3686.7, 60 sec: 3549.9, 300 sec: 3216.1). Total num frames: 1536000. Throughput: 0: 893.0. Samples: 109004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:33:52,157][18333] Avg episode reward: [(0, '17.762')] [2023-07-04 15:33:52,244][19210] Saving new best policy, reward=17.762! [2023-07-04 15:33:55,157][19223] Updated weights for policy 0, policy_version 379 (0.0013) [2023-07-04 15:33:57,152][18333] Fps is (10 sec: 4095.6, 60 sec: 3618.1, 300 sec: 3247.5). Total num frames: 1556480. Throughput: 0: 892.1. Samples: 112280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:33:57,160][18333] Avg episode reward: [(0, '19.453')] [2023-07-04 15:33:57,170][19210] Saving new best policy, reward=19.453! [2023-07-04 15:34:02,161][18333] Fps is (10 sec: 3682.9, 60 sec: 3617.6, 300 sec: 3248.3). Total num frames: 1572864. Throughput: 0: 890.4. Samples: 117188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:34:02,165][18333] Avg episode reward: [(0, '18.770')] [2023-07-04 15:34:07,151][18333] Fps is (10 sec: 2867.5, 60 sec: 3481.6, 300 sec: 3222.2). Total num frames: 1585152. Throughput: 0: 892.2. Samples: 121494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:34:07,154][18333] Avg episode reward: [(0, '18.758')] [2023-07-04 15:34:08,705][19223] Updated weights for policy 0, policy_version 389 (0.0037) [2023-07-04 15:34:12,151][18333] Fps is (10 sec: 3279.9, 60 sec: 3481.6, 300 sec: 3250.4). Total num frames: 1605632. Throughput: 0: 892.1. Samples: 123934. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:34:12,157][18333] Avg episode reward: [(0, '20.315')] [2023-07-04 15:34:12,161][19210] Saving new best policy, reward=20.315! [2023-07-04 15:34:17,151][18333] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3302.4). Total num frames: 1630208. Throughput: 0: 891.1. Samples: 130574. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:34:17,157][18333] Avg episode reward: [(0, '20.786')] [2023-07-04 15:34:17,166][19210] Saving new best policy, reward=20.786! [2023-07-04 15:34:18,081][19223] Updated weights for policy 0, policy_version 399 (0.0016) [2023-07-04 15:34:22,154][18333] Fps is (10 sec: 3685.5, 60 sec: 3618.0, 300 sec: 3276.8). Total num frames: 1642496. Throughput: 0: 888.6. Samples: 135966. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:34:22,158][18333] Avg episode reward: [(0, '19.671')] [2023-07-04 15:34:27,152][18333] Fps is (10 sec: 2867.1, 60 sec: 3549.8, 300 sec: 3276.8). Total num frames: 1658880. Throughput: 0: 887.9. Samples: 138064. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:34:27,156][18333] Avg episode reward: [(0, '19.358')] [2023-07-04 15:34:31,886][19223] Updated weights for policy 0, policy_version 409 (0.0022) [2023-07-04 15:34:32,151][18333] Fps is (10 sec: 3277.6, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 1675264. Throughput: 0: 884.5. Samples: 142280. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:34:32,153][18333] Avg episode reward: [(0, '20.845')] [2023-07-04 15:34:32,160][19210] Saving new best policy, reward=20.845! [2023-07-04 15:34:37,151][18333] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3299.6). Total num frames: 1695744. Throughput: 0: 880.4. Samples: 148622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:34:37,154][18333] Avg episode reward: [(0, '21.184')] [2023-07-04 15:34:37,163][19210] Saving new best policy, reward=21.184! [2023-07-04 15:34:41,344][19223] Updated weights for policy 0, policy_version 419 (0.0015) [2023-07-04 15:34:42,151][18333] Fps is (10 sec: 4095.9, 60 sec: 3618.2, 300 sec: 3321.1). Total num frames: 1716224. Throughput: 0: 880.1. Samples: 151886. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:34:42,156][18333] Avg episode reward: [(0, '19.990')] [2023-07-04 15:34:47,151][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3298.4). Total num frames: 1728512. Throughput: 0: 881.8. Samples: 156860. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:34:47,156][18333] Avg episode reward: [(0, '19.731')] [2023-07-04 15:34:52,151][18333] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3297.8). Total num frames: 1744896. Throughput: 0: 879.2. Samples: 161058. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:34:52,155][18333] Avg episode reward: [(0, '19.473')] [2023-07-04 15:34:55,024][19223] Updated weights for policy 0, policy_version 429 (0.0022) [2023-07-04 15:34:57,151][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3317.8). Total num frames: 1765376. Throughput: 0: 881.4. Samples: 163598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:34:57,154][18333] Avg episode reward: [(0, '18.186')] [2023-07-04 15:35:02,151][18333] Fps is (10 sec: 4096.0, 60 sec: 3550.4, 300 sec: 3336.7). Total num frames: 1785856. Throughput: 0: 886.3. Samples: 170458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:35:02,155][18333] Avg episode reward: [(0, '19.798')] [2023-07-04 15:35:04,376][19223] Updated weights for policy 0, policy_version 439 (0.0011) [2023-07-04 15:35:07,151][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3335.3). Total num frames: 1802240. Throughput: 0: 887.1. Samples: 175882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:35:07,158][18333] Avg episode reward: [(0, '19.982')] [2023-07-04 15:35:12,152][18333] Fps is (10 sec: 3276.6, 60 sec: 3549.8, 300 sec: 3333.9). Total num frames: 1818624. Throughput: 0: 886.2. Samples: 177944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:35:12,157][18333] Avg episode reward: [(0, '20.642')] [2023-07-04 15:35:17,151][18333] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3332.7). Total num frames: 1835008. Throughput: 0: 890.7. Samples: 182360. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:35:17,158][18333] Avg episode reward: [(0, '20.880')] [2023-07-04 15:35:17,168][19210] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000448_1835008.pth... [2023-07-04 15:35:17,293][19210] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000269_1101824.pth [2023-07-04 15:35:17,731][19223] Updated weights for policy 0, policy_version 449 (0.0039) [2023-07-04 15:35:22,151][18333] Fps is (10 sec: 3686.6, 60 sec: 3550.0, 300 sec: 3349.6). Total num frames: 1855488. Throughput: 0: 894.4. Samples: 188868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:35:22,153][18333] Avg episode reward: [(0, '20.876')] [2023-07-04 15:35:27,151][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3365.8). Total num frames: 1875968. Throughput: 0: 894.7. Samples: 192146. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:35:27,154][18333] Avg episode reward: [(0, '21.535')] [2023-07-04 15:35:27,168][19210] Saving new best policy, reward=21.535! [2023-07-04 15:35:27,656][19223] Updated weights for policy 0, policy_version 459 (0.0013) [2023-07-04 15:35:32,153][18333] Fps is (10 sec: 3685.8, 60 sec: 3618.0, 300 sec: 3363.9). Total num frames: 1892352. Throughput: 0: 890.7. Samples: 196944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:35:32,158][18333] Avg episode reward: [(0, '20.905')] [2023-07-04 15:35:37,151][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3345.1). Total num frames: 1904640. Throughput: 0: 891.6. Samples: 201178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:35:37,157][18333] Avg episode reward: [(0, '20.255')] [2023-07-04 15:35:40,552][19223] Updated weights for policy 0, policy_version 469 (0.0012) [2023-07-04 15:35:42,151][18333] Fps is (10 sec: 3277.4, 60 sec: 3481.6, 300 sec: 3360.4). Total num frames: 1925120. Throughput: 0: 894.2. Samples: 203836. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:35:42,154][18333] Avg episode reward: [(0, '19.810')] [2023-07-04 15:35:47,151][18333] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3391.5). Total num frames: 1949696. Throughput: 0: 889.6. Samples: 210490. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:35:47,153][18333] Avg episode reward: [(0, '19.950')] [2023-07-04 15:35:50,738][19223] Updated weights for policy 0, policy_version 479 (0.0012) [2023-07-04 15:35:52,154][18333] Fps is (10 sec: 3685.2, 60 sec: 3617.9, 300 sec: 3373.1). Total num frames: 1961984. Throughput: 0: 889.1. Samples: 215894. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:35:52,158][18333] Avg episode reward: [(0, '19.565')] [2023-07-04 15:35:57,151][18333] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3371.3). Total num frames: 1978368. Throughput: 0: 890.7. Samples: 218024. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:35:57,154][18333] Avg episode reward: [(0, '19.184')] [2023-07-04 15:36:02,151][18333] Fps is (10 sec: 3277.9, 60 sec: 3481.6, 300 sec: 3369.5). Total num frames: 1994752. Throughput: 0: 888.2. Samples: 222330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:36:02,153][18333] Avg episode reward: [(0, '19.339')] [2023-07-04 15:36:03,431][19223] Updated weights for policy 0, policy_version 489 (0.0016) [2023-07-04 15:36:04,365][19210] Stopping Batcher_0... [2023-07-04 15:36:04,365][19210] Loop batcher_evt_loop terminating... [2023-07-04 15:36:04,366][18333] Component Batcher_0 stopped! 
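Editor's note: the periodic status lines above report throughput three ways: frames per second over trailing 10 s, 60 s and 300 s windows, the cumulative frame count, and the sampler throughput. The sketch below shows one way such windowed FPS figures can be derived from periodic (timestamp, total_frames) snapshots. It is an illustrative approximation written for this log, not Sample Factory's actual reporting code, and all class and method names are made up for the example.

```python
import time
from collections import deque


class WindowedFps:
    """Illustrative tracker for trailing-window FPS (e.g. 10/60/300 s)."""

    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.snapshots = deque()  # (wall_clock_time, total_env_frames)

    def record(self, total_frames, now=None):
        now = time.monotonic() if now is None else now
        self.snapshots.append((now, total_frames))
        # drop snapshots older than the largest window
        while now - self.snapshots[0][0] > max(self.windows):
            self.snapshots.popleft()

    def fps(self):
        """Return {window_seconds: frames_per_second} for each configured window."""
        now, newest_frames = self.snapshots[-1]
        out = {}
        for w in self.windows:
            # oldest snapshot that still falls inside this window
            t0, f0 = next((s for s in self.snapshots if now - s[0] <= w), self.snapshots[-1])
            elapsed = now - t0
            out[w] = (newest_frames - f0) / elapsed if elapsed > 0 else 0.0
        return out
```

With a snapshot taken roughly every five seconds, this produces numbers of the same shape as the log's "(10 sec: …, 60 sec: …, 300 sec: …)" entries.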
[2023-07-04 15:36:04,373][19210] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth... [2023-07-04 15:36:04,431][19223] Weights refcount: 2 0 [2023-07-04 15:36:04,438][18333] Component RolloutWorker_w1 stopped! [2023-07-04 15:36:04,438][19226] Stopping RolloutWorker_w1... [2023-07-04 15:36:04,444][18333] Component InferenceWorker_p0-w0 stopped! [2023-07-04 15:36:04,446][19223] Stopping InferenceWorker_p0-w0... [2023-07-04 15:36:04,448][19223] Loop inference_proc0-0_evt_loop terminating... [2023-07-04 15:36:04,455][18333] Component RolloutWorker_w3 stopped! [2023-07-04 15:36:04,449][19226] Loop rollout_proc1_evt_loop terminating... [2023-07-04 15:36:04,459][19227] Stopping RolloutWorker_w3... [2023-07-04 15:36:04,462][18333] Component RolloutWorker_w7 stopped! [2023-07-04 15:36:04,463][19231] Stopping RolloutWorker_w7... [2023-07-04 15:36:04,466][19225] Stopping RolloutWorker_w2... [2023-07-04 15:36:04,466][18333] Component RolloutWorker_w2 stopped! [2023-07-04 15:36:04,460][19227] Loop rollout_proc3_evt_loop terminating... [2023-07-04 15:36:04,473][19228] Stopping RolloutWorker_w4... [2023-07-04 15:36:04,473][18333] Component RolloutWorker_w4 stopped! [2023-07-04 15:36:04,467][19225] Loop rollout_proc2_evt_loop terminating... [2023-07-04 15:36:04,479][18333] Component RolloutWorker_w5 stopped! [2023-07-04 15:36:04,483][19230] Stopping RolloutWorker_w6... [2023-07-04 15:36:04,483][18333] Component RolloutWorker_w0 stopped! [2023-07-04 15:36:04,482][19224] Stopping RolloutWorker_w0... [2023-07-04 15:36:04,489][19224] Loop rollout_proc0_evt_loop terminating... [2023-07-04 15:36:04,486][18333] Component RolloutWorker_w6 stopped! [2023-07-04 15:36:04,482][19229] Stopping RolloutWorker_w5... [2023-07-04 15:36:04,474][19228] Loop rollout_proc4_evt_loop terminating... [2023-07-04 15:36:04,476][19231] Loop rollout_proc7_evt_loop terminating... [2023-07-04 15:36:04,492][19230] Loop rollout_proc6_evt_loop terminating... [2023-07-04 15:36:04,491][19229] Loop rollout_proc5_evt_loop terminating... [2023-07-04 15:36:04,525][19210] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000345_1413120.pth [2023-07-04 15:36:04,534][19210] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth... [2023-07-04 15:36:04,696][18333] Component LearnerWorker_p0 stopped! [2023-07-04 15:36:04,699][18333] Waiting for process learner_proc0 to stop... [2023-07-04 15:36:04,703][19210] Stopping LearnerWorker_p0... [2023-07-04 15:36:04,708][19210] Loop learner_proc0_evt_loop terminating... [2023-07-04 15:36:05,721][18333] Waiting for process inference_proc0-0 to join... [2023-07-04 15:36:05,724][18333] Waiting for process rollout_proc0 to join... [2023-07-04 15:36:07,121][18333] Waiting for process rollout_proc1 to join... [2023-07-04 15:36:07,123][18333] Waiting for process rollout_proc2 to join... [2023-07-04 15:36:07,125][18333] Waiting for process rollout_proc3 to join... [2023-07-04 15:36:07,127][18333] Waiting for process rollout_proc4 to join... [2023-07-04 15:36:07,129][18333] Waiting for process rollout_proc5 to join... [2023-07-04 15:36:07,132][18333] Waiting for process rollout_proc6 to join... [2023-07-04 15:36:07,134][18333] Waiting for process rollout_proc7 to join... 
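Editor's note: checkpoint files such as checkpoint_000000490_2007040.pth encode two numbers: the policy version (training iteration) and the cumulative environment frame count at save time. Throughout this run the two are related by a factor of 4096 frames per iteration (448 × 4096 = 1,835,008; 490 × 4096 = 2,007,040), which is consistent with batch_size=1024 samples per iteration counted with env_frameskip=4 in the config shown later. With keep_checkpoints=2, writing a new checkpoint triggers removal of the oldest one, which is why each "Saving …" line is paired with a "Removing …" line. The helper below is a hypothetical utility written for this log, not part of Sample Factory; it only parses and rotates names of this form.

```python
import re
from pathlib import Path

# Hypothetical helper for inspecting names like
#   checkpoint_000000490_2007040.pth -> (policy_version=490, env_steps=2007040)
CKPT_RE = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")


def parse_checkpoint_name(path):
    m = CKPT_RE.search(Path(path).name)
    if m is None:
        raise ValueError(f"not a checkpoint file: {path}")
    return int(m.group(1)), int(m.group(2))  # (policy_version, env_steps)


def checkpoints_to_remove(ckpt_dir, keep=2):
    """Mimic a keep-last-N rotation: return the paths that would be deleted."""
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"),
                   key=lambda p: parse_checkpoint_name(p)[0])
    return ckpts[:-keep] if len(ckpts) > keep else []


if __name__ == "__main__":
    version, env_steps = parse_checkpoint_name("checkpoint_000000490_2007040.pth")
    assert env_steps == version * 4096  # holds for every checkpoint in this log
```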
[2023-07-04 15:36:07,136][18333] Batcher 0 profile tree view: batching: 5.6426, releasing_batches: 0.0084 [2023-07-04 15:36:07,138][18333] InferenceWorker_p0-w0 profile tree view: wait_policy: 0.0026 wait_policy_total: 129.0814 update_model: 1.7474 weight_update: 0.0016 one_step: 0.0023 handle_policy_step: 127.1982 deserialize: 3.4368, stack: 0.6993, obs_to_device_normalize: 27.2551, forward: 63.9315, send_messages: 6.5124 prepare_outputs: 18.9721 to_cpu: 11.6418 [2023-07-04 15:36:07,139][18333] Learner 0 profile tree view: misc: 0.0011, prepare_batch: 7.6910 train: 18.5692 epoch_init: 0.0012, minibatch_init: 0.0014, losses_postprocess: 0.1452, kl_divergence: 0.1067, after_optimizer: 0.8496 calculate_losses: 5.8743 losses_init: 0.0044, forward_head: 0.6278, bptt_initial: 3.5093, tail: 0.3162, advantages_returns: 0.0701, losses: 0.7452 bptt: 0.5252 bptt_forward_core: 0.4969 update: 11.4368 clip: 0.3208 [2023-07-04 15:36:07,141][18333] RolloutWorker_w0 profile tree view: wait_for_trajectories: 0.0466, enqueue_policy_requests: 32.3554, env_step: 192.8171, overhead: 5.4725, complete_rollouts: 1.2743 save_policy_outputs: 4.6529 split_output_tensors: 2.2669 [2023-07-04 15:36:07,143][18333] RolloutWorker_w7 profile tree view: wait_for_trajectories: 0.1159, enqueue_policy_requests: 37.7936, env_step: 194.1022, overhead: 5.5181, complete_rollouts: 1.8863 save_policy_outputs: 4.2424 split_output_tensors: 2.0435 [2023-07-04 15:36:07,145][18333] Loop Runner_EvtLoop terminating... [2023-07-04 15:36:07,147][18333] Runner profile tree view: main_loop: 287.1521 [2023-07-04 15:36:07,148][18333] Collected {0: 2007040}, FPS: 3152.4 [2023-07-04 15:39:43,841][18333] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-07-04 15:39:43,844][18333] Overriding arg 'num_workers' with value 1 passed from command line [2023-07-04 15:39:43,845][18333] Adding new argument 'no_render'=True that is not in the saved config file! [2023-07-04 15:39:43,848][18333] Adding new argument 'save_video'=True that is not in the saved config file! [2023-07-04 15:39:43,849][18333] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-07-04 15:39:43,851][18333] Adding new argument 'video_name'=None that is not in the saved config file! [2023-07-04 15:39:43,852][18333] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! [2023-07-04 15:39:43,853][18333] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-07-04 15:39:43,855][18333] Adding new argument 'push_to_hub'=False that is not in the saved config file! [2023-07-04 15:39:43,856][18333] Adding new argument 'hf_repository'=None that is not in the saved config file! [2023-07-04 15:39:43,857][18333] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-07-04 15:39:43,859][18333] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-07-04 15:39:43,860][18333] Adding new argument 'train_script'=None that is not in the saved config file! [2023-07-04 15:39:43,861][18333] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
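Editor's note: once training stops, the same process reloads the saved experiment configuration and layers evaluation-only arguments on top of it (num_workers=1, no_render, save_video, max_num_episodes=10, and so on), which is exactly what the "Overriding arg …" and "Adding new argument …" lines record. The snippet below is a simplified illustration of that layering using only the key/value pairs visible in the log; it is not the library's actual argument-parsing code, and the path is the one shown above.

```python
# Illustrative only: mimic how an evaluation run layers command-line overrides
# on top of the saved training config, as described by the log lines above.
import json
from pathlib import Path

train_dir = Path("/content/train_dir/default_experiment")
cfg = json.loads((train_dir / "config.json").read_text())  # saved training config

eval_overrides = {            # the overridden / newly added arguments from the log
    "num_workers": 1,         # overridden from the command line
    "no_render": True,
    "save_video": True,
    "max_num_frames": 1_000_000_000.0,
    "max_num_episodes": 10,
    "push_to_hub": False,
    "hf_repository": None,
    "policy_index": 0,
    "eval_deterministic": False,
}
cfg.update(eval_overrides)    # arguments not in the saved file are simply added
```

The evaluation that follows then rebuilds the encoder, loads the latest checkpoint, and rolls out episodes, logging "Num frames …" and average-reward summaries per episode.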
[2023-07-04 15:39:43,862][18333] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-07-04 15:39:43,885][18333] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:39:43,887][18333] RunningMeanStd input shape: (1,) [2023-07-04 15:39:43,900][18333] ConvEncoder: input_channels=3 [2023-07-04 15:39:43,937][18333] Conv encoder output size: 512 [2023-07-04 15:39:43,938][18333] Policy head output size: 512 [2023-07-04 15:39:43,958][18333] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth... [2023-07-04 15:39:44,444][18333] Num frames 100... [2023-07-04 15:39:44,565][18333] Num frames 200... [2023-07-04 15:39:44,688][18333] Num frames 300... [2023-07-04 15:39:44,823][18333] Num frames 400... [2023-07-04 15:39:44,944][18333] Num frames 500... [2023-07-04 15:39:45,110][18333] Avg episode rewards: #0: 13.900, true rewards: #0: 5.900 [2023-07-04 15:39:45,113][18333] Avg episode reward: 13.900, avg true_objective: 5.900 [2023-07-04 15:39:45,127][18333] Num frames 600... [2023-07-04 15:39:45,249][18333] Num frames 700... [2023-07-04 15:39:45,378][18333] Num frames 800... [2023-07-04 15:39:45,500][18333] Num frames 900... [2023-07-04 15:39:45,640][18333] Num frames 1000... [2023-07-04 15:39:45,773][18333] Num frames 1100... [2023-07-04 15:39:45,903][18333] Num frames 1200... [2023-07-04 15:39:46,090][18333] Avg episode rewards: #0: 14.970, true rewards: #0: 6.470 [2023-07-04 15:39:46,092][18333] Avg episode reward: 14.970, avg true_objective: 6.470 [2023-07-04 15:39:46,102][18333] Num frames 1300... [2023-07-04 15:39:46,233][18333] Num frames 1400... [2023-07-04 15:39:46,352][18333] Num frames 1500... [2023-07-04 15:39:46,473][18333] Num frames 1600... [2023-07-04 15:39:46,600][18333] Num frames 1700... [2023-07-04 15:39:46,718][18333] Num frames 1800... [2023-07-04 15:39:46,845][18333] Num frames 1900... [2023-07-04 15:39:46,993][18333] Num frames 2000... [2023-07-04 15:39:47,160][18333] Avg episode rewards: #0: 15.207, true rewards: #0: 6.873 [2023-07-04 15:39:47,163][18333] Avg episode reward: 15.207, avg true_objective: 6.873 [2023-07-04 15:39:47,229][18333] Num frames 2100... [2023-07-04 15:39:47,406][18333] Num frames 2200... [2023-07-04 15:39:47,583][18333] Num frames 2300... [2023-07-04 15:39:47,768][18333] Num frames 2400... [2023-07-04 15:39:47,947][18333] Num frames 2500... [2023-07-04 15:39:48,122][18333] Num frames 2600... [2023-07-04 15:39:48,305][18333] Num frames 2700... [2023-07-04 15:39:48,395][18333] Avg episode rewards: #0: 14.295, true rewards: #0: 6.795 [2023-07-04 15:39:48,400][18333] Avg episode reward: 14.295, avg true_objective: 6.795 [2023-07-04 15:39:48,550][18333] Num frames 2800... [2023-07-04 15:39:48,732][18333] Num frames 2900... [2023-07-04 15:39:48,919][18333] Num frames 3000... [2023-07-04 15:39:49,095][18333] Num frames 3100... [2023-07-04 15:39:49,282][18333] Num frames 3200... [2023-07-04 15:39:49,466][18333] Num frames 3300... [2023-07-04 15:39:49,632][18333] Avg episode rewards: #0: 13.316, true rewards: #0: 6.716 [2023-07-04 15:39:49,635][18333] Avg episode reward: 13.316, avg true_objective: 6.716 [2023-07-04 15:39:54,827][18333] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-07-04 15:39:54,829][18333] Overriding arg 'num_workers' with value 1 passed from command line [2023-07-04 15:39:54,831][18333] Adding new argument 'no_render'=True that is not in the saved config file! 
[2023-07-04 15:39:54,833][18333] Adding new argument 'save_video'=True that is not in the saved config file! [2023-07-04 15:39:54,836][18333] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-07-04 15:39:54,837][18333] Adding new argument 'video_name'=None that is not in the saved config file! [2023-07-04 15:39:54,838][18333] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-07-04 15:39:54,839][18333] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-07-04 15:39:54,841][18333] Adding new argument 'push_to_hub'=True that is not in the saved config file! [2023-07-04 15:39:54,842][18333] Adding new argument 'hf_repository'='HilbertS/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-07-04 15:39:54,844][18333] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-07-04 15:39:54,845][18333] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-07-04 15:39:54,846][18333] Adding new argument 'train_script'=None that is not in the saved config file! [2023-07-04 15:39:54,848][18333] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-07-04 15:39:54,849][18333] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-07-04 15:39:54,874][18333] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:39:54,876][18333] RunningMeanStd input shape: (1,) [2023-07-04 15:39:54,889][18333] ConvEncoder: input_channels=3 [2023-07-04 15:39:54,925][18333] Conv encoder output size: 512 [2023-07-04 15:39:54,926][18333] Policy head output size: 512 [2023-07-04 15:39:54,946][18333] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth... [2023-07-04 15:39:55,426][18333] Num frames 100... [2023-07-04 15:39:55,560][18333] Num frames 200... [2023-07-04 15:39:55,682][18333] Num frames 300... [2023-07-04 15:39:55,807][18333] Num frames 400... [2023-07-04 15:39:55,933][18333] Num frames 500... [2023-07-04 15:39:56,059][18333] Num frames 600... [2023-07-04 15:39:56,187][18333] Num frames 700... [2023-07-04 15:39:56,310][18333] Num frames 800... [2023-07-04 15:39:56,433][18333] Num frames 900... [2023-07-04 15:39:56,560][18333] Num frames 1000... [2023-07-04 15:39:56,699][18333] Num frames 1100... [2023-07-04 15:39:56,825][18333] Num frames 1200... [2023-07-04 15:39:56,944][18333] Num frames 1300... [2023-07-04 15:39:57,086][18333] Num frames 1400... [2023-07-04 15:39:57,206][18333] Num frames 1500... [2023-07-04 15:39:57,331][18333] Num frames 1600... [2023-07-04 15:39:57,453][18333] Num frames 1700... [2023-07-04 15:39:57,580][18333] Num frames 1800... [2023-07-04 15:39:57,710][18333] Num frames 1900... [2023-07-04 15:39:57,832][18333] Num frames 2000... [2023-07-04 15:39:57,968][18333] Num frames 2100... [2023-07-04 15:39:58,021][18333] Avg episode rewards: #0: 54.999, true rewards: #0: 21.000 [2023-07-04 15:39:58,022][18333] Avg episode reward: 54.999, avg true_objective: 21.000 [2023-07-04 15:39:58,151][18333] Num frames 2200... [2023-07-04 15:39:58,278][18333] Num frames 2300... [2023-07-04 15:39:58,398][18333] Num frames 2400... [2023-07-04 15:39:58,521][18333] Num frames 2500... [2023-07-04 15:39:58,644][18333] Num frames 2600... [2023-07-04 15:39:58,774][18333] Num frames 2700... [2023-07-04 15:39:58,895][18333] Num frames 2800... [2023-07-04 15:39:59,019][18333] Num frames 2900... 
[2023-07-04 15:39:59,115][18333] Avg episode rewards: #0: 36.160, true rewards: #0: 14.660 [2023-07-04 15:39:59,118][18333] Avg episode reward: 36.160, avg true_objective: 14.660 [2023-07-04 15:39:59,212][18333] Num frames 3000... [2023-07-04 15:39:59,335][18333] Num frames 3100... [2023-07-04 15:39:59,462][18333] Num frames 3200... [2023-07-04 15:39:59,584][18333] Num frames 3300... [2023-07-04 15:39:59,729][18333] Num frames 3400... [2023-07-04 15:39:59,851][18333] Num frames 3500... [2023-07-04 15:39:59,977][18333] Num frames 3600... [2023-07-04 15:40:00,067][18333] Avg episode rewards: #0: 29.423, true rewards: #0: 12.090 [2023-07-04 15:40:00,070][18333] Avg episode reward: 29.423, avg true_objective: 12.090 [2023-07-04 15:40:00,166][18333] Num frames 3700... [2023-07-04 15:40:00,287][18333] Num frames 3800... [2023-07-04 15:40:00,415][18333] Num frames 3900... [2023-07-04 15:40:00,536][18333] Num frames 4000... [2023-07-04 15:40:00,663][18333] Num frames 4100... [2023-07-04 15:40:00,792][18333] Num frames 4200... [2023-07-04 15:40:00,911][18333] Num frames 4300... [2023-07-04 15:40:01,031][18333] Num frames 4400... [2023-07-04 15:40:01,167][18333] Num frames 4500... [2023-07-04 15:40:01,330][18333] Avg episode rewards: #0: 26.717, true rewards: #0: 11.468 [2023-07-04 15:40:01,332][18333] Avg episode reward: 26.717, avg true_objective: 11.468 [2023-07-04 15:40:01,351][18333] Num frames 4600... [2023-07-04 15:40:01,473][18333] Num frames 4700... [2023-07-04 15:40:01,603][18333] Num frames 4800... [2023-07-04 15:40:01,776][18333] Num frames 4900... [2023-07-04 15:40:01,948][18333] Num frames 5000... [2023-07-04 15:40:02,131][18333] Num frames 5100... [2023-07-04 15:40:02,313][18333] Num frames 5200... [2023-07-04 15:40:02,496][18333] Num frames 5300... [2023-07-04 15:40:02,677][18333] Num frames 5400... [2023-07-04 15:40:02,862][18333] Avg episode rewards: #0: 24.938, true rewards: #0: 10.938 [2023-07-04 15:40:02,864][18333] Avg episode reward: 24.938, avg true_objective: 10.938 [2023-07-04 15:40:02,924][18333] Num frames 5500... [2023-07-04 15:40:03,105][18333] Num frames 5600... [2023-07-04 15:40:03,301][18333] Num frames 5700... [2023-07-04 15:40:03,482][18333] Num frames 5800... [2023-07-04 15:40:03,663][18333] Num frames 5900... [2023-07-04 15:40:03,869][18333] Avg episode rewards: #0: 22.468, true rewards: #0: 9.968 [2023-07-04 15:40:03,872][18333] Avg episode reward: 22.468, avg true_objective: 9.968 [2023-07-04 15:40:03,911][18333] Num frames 6000... [2023-07-04 15:40:04,092][18333] Num frames 6100... [2023-07-04 15:40:04,278][18333] Num frames 6200... [2023-07-04 15:40:04,458][18333] Num frames 6300... [2023-07-04 15:40:04,646][18333] Num frames 6400... [2023-07-04 15:40:04,831][18333] Num frames 6500... [2023-07-04 15:40:05,009][18333] Num frames 6600... [2023-07-04 15:40:05,199][18333] Num frames 6700... [2023-07-04 15:40:05,394][18333] Num frames 6800... [2023-07-04 15:40:05,581][18333] Num frames 6900... [2023-07-04 15:40:05,765][18333] Num frames 7000... [2023-07-04 15:40:05,949][18333] Num frames 7100... [2023-07-04 15:40:06,132][18333] Num frames 7200... [2023-07-04 15:40:06,318][18333] Num frames 7300... [2023-07-04 15:40:06,506][18333] Num frames 7400... [2023-07-04 15:40:06,657][18333] Avg episode rewards: #0: 24.643, true rewards: #0: 10.643 [2023-07-04 15:40:06,660][18333] Avg episode reward: 24.643, avg true_objective: 10.643 [2023-07-04 15:40:06,753][18333] Num frames 7500... [2023-07-04 15:40:06,928][18333] Num frames 7600... 
[2023-07-04 15:40:07,049][18333] Num frames 7700... [2023-07-04 15:40:07,188][18333] Avg episode rewards: #0: 22.206, true rewards: #0: 9.706 [2023-07-04 15:40:07,189][18333] Avg episode reward: 22.206, avg true_objective: 9.706 [2023-07-04 15:40:07,237][18333] Num frames 7800... [2023-07-04 15:40:07,364][18333] Num frames 7900... [2023-07-04 15:40:07,497][18333] Num frames 8000... [2023-07-04 15:40:07,622][18333] Num frames 8100... [2023-07-04 15:40:07,757][18333] Num frames 8200... [2023-07-04 15:40:07,880][18333] Num frames 8300... [2023-07-04 15:40:07,949][18333] Avg episode rewards: #0: 20.677, true rewards: #0: 9.232 [2023-07-04 15:40:07,951][18333] Avg episode reward: 20.677, avg true_objective: 9.232 [2023-07-04 15:40:08,063][18333] Num frames 8400... [2023-07-04 15:40:08,189][18333] Num frames 8500... [2023-07-04 15:40:08,316][18333] Num frames 8600... [2023-07-04 15:40:08,453][18333] Num frames 8700... [2023-07-04 15:40:08,578][18333] Num frames 8800... [2023-07-04 15:40:08,709][18333] Avg episode rewards: #0: 19.761, true rewards: #0: 8.861 [2023-07-04 15:40:08,711][18333] Avg episode reward: 19.761, avg true_objective: 8.861 [2023-07-04 15:41:03,224][18333] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-07-04 15:41:06,055][18333] The model has been pushed to https://huggingface.co/HilbertS/rl_course_vizdoom_health_gathering_supreme [2023-07-04 15:41:58,356][18333] Environment doom_basic already registered, overwriting... [2023-07-04 15:41:58,358][18333] Environment doom_two_colors_easy already registered, overwriting... [2023-07-04 15:41:58,360][18333] Environment doom_two_colors_hard already registered, overwriting... [2023-07-04 15:41:58,364][18333] Environment doom_dm already registered, overwriting... [2023-07-04 15:41:58,365][18333] Environment doom_dwango5 already registered, overwriting... [2023-07-04 15:41:58,367][18333] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-07-04 15:41:58,369][18333] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-07-04 15:41:58,370][18333] Environment doom_my_way_home already registered, overwriting... [2023-07-04 15:41:58,371][18333] Environment doom_deadly_corridor already registered, overwriting... [2023-07-04 15:41:58,373][18333] Environment doom_defend_the_center already registered, overwriting... [2023-07-04 15:41:58,375][18333] Environment doom_defend_the_line already registered, overwriting... [2023-07-04 15:41:58,376][18333] Environment doom_health_gathering already registered, overwriting... [2023-07-04 15:41:58,377][18333] Environment doom_health_gathering_supreme already registered, overwriting... [2023-07-04 15:41:58,379][18333] Environment doom_battle already registered, overwriting... [2023-07-04 15:41:58,380][18333] Environment doom_battle2 already registered, overwriting... [2023-07-04 15:41:58,381][18333] Environment doom_duel_bots already registered, overwriting... [2023-07-04 15:41:58,382][18333] Environment doom_deathmatch_bots already registered, overwriting... [2023-07-04 15:41:58,384][18333] Environment doom_duel already registered, overwriting... [2023-07-04 15:41:58,385][18333] Environment doom_deathmatch_full already registered, overwriting... [2023-07-04 15:41:58,386][18333] Environment doom_benchmark already registered, overwriting... 
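Editor's note: the second evaluation pass above sets push_to_hub=True with hf_repository='HilbertS/rl_course_vizdoom_health_gathering_supreme'; after ten episodes it writes replay.mp4 into the experiment directory and uploads the model to the Hub URL shown in the log. Sample Factory performs the upload itself when those flags are set. As a rough, hypothetical manual equivalent (assuming the huggingface_hub client, which is not part of this log), pushing the experiment folder by hand could look like this:

```python
# Hypothetical manual equivalent of the push performed above, using huggingface_hub
# directly; Sample Factory does this automatically when push_to_hub is enabled.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "HilbertS/rl_course_vizdoom_health_gathering_supreme"
api.create_repo(repo_id=repo_id, exist_ok=True)             # no-op if the repo exists
api.upload_folder(
    repo_id=repo_id,
    folder_path="/content/train_dir/default_experiment",    # checkpoints, config.json, replay.mp4
)
```

The lines that follow re-register the Doom environments and resume training from the same experiment directory, this time with train_for_env_steps raised to 4,000,000.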
[2023-07-04 15:41:58,387][18333] register_encoder_factory: [2023-07-04 15:41:58,425][18333] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-07-04 15:41:58,426][18333] Overriding arg 'train_for_env_steps' with value 4000000 passed from command line [2023-07-04 15:41:58,433][18333] Experiment dir /content/train_dir/default_experiment already exists! [2023-07-04 15:41:58,435][18333] Resuming existing experiment from /content/train_dir/default_experiment... [2023-07-04 15:41:58,437][18333] Weights and Biases integration disabled [2023-07-04 15:41:58,443][18333] Environment var CUDA_VISIBLE_DEVICES is 0 [2023-07-04 15:41:59,883][18333] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=8 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 lr_adaptive_min=1e-06 lr_adaptive_max=0.01 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=4000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 
--train_for_env_steps=4000000 cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository [2023-07-04 15:41:59,887][18333] Saving configuration to /content/train_dir/default_experiment/config.json... [2023-07-04 15:41:59,893][18333] Rollout worker 0 uses device cpu [2023-07-04 15:41:59,896][18333] Rollout worker 1 uses device cpu [2023-07-04 15:41:59,897][18333] Rollout worker 2 uses device cpu [2023-07-04 15:41:59,898][18333] Rollout worker 3 uses device cpu [2023-07-04 15:41:59,899][18333] Rollout worker 4 uses device cpu [2023-07-04 15:41:59,900][18333] Rollout worker 5 uses device cpu [2023-07-04 15:41:59,901][18333] Rollout worker 6 uses device cpu [2023-07-04 15:41:59,903][18333] Rollout worker 7 uses device cpu [2023-07-04 15:42:00,023][18333] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:42:00,029][18333] InferenceWorker_p0-w0: min num requests: 2 [2023-07-04 15:42:00,068][18333] Starting all processes... [2023-07-04 15:42:00,073][18333] Starting process learner_proc0 [2023-07-04 15:42:00,141][18333] Starting all processes... [2023-07-04 15:42:00,150][18333] Starting process inference_proc0-0 [2023-07-04 15:42:00,150][18333] Starting process rollout_proc0 [2023-07-04 15:42:00,151][18333] Starting process rollout_proc1 [2023-07-04 15:42:00,151][18333] Starting process rollout_proc2 [2023-07-04 15:42:00,151][18333] Starting process rollout_proc3 [2023-07-04 15:42:00,151][18333] Starting process rollout_proc4 [2023-07-04 15:42:00,151][18333] Starting process rollout_proc5 [2023-07-04 15:42:00,151][18333] Starting process rollout_proc6 [2023-07-04 15:42:00,151][18333] Starting process rollout_proc7 [2023-07-04 15:42:12,217][22133] Worker 1 uses CPU cores [1] [2023-07-04 15:42:12,582][22113] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:42:12,585][22113] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-07-04 15:42:12,593][22135] Worker 4 uses CPU cores [0] [2023-07-04 15:42:12,595][22126] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:42:12,597][22126] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-07-04 15:42:12,630][22113] Num visible devices: 1 [2023-07-04 15:42:12,657][22126] Num visible devices: 1 [2023-07-04 15:42:12,668][22113] Starting seed is not provided [2023-07-04 15:42:12,668][22113] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:42:12,669][22113] Initializing actor-critic model on device cuda:0 [2023-07-04 15:42:12,670][22113] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:42:12,671][22113] RunningMeanStd input shape: (1,) [2023-07-04 15:42:12,685][22138] Worker 7 uses CPU cores [1] [2023-07-04 15:42:12,686][22132] Worker 3 uses CPU cores [1] [2023-07-04 15:42:12,691][22137] Worker 6 uses CPU cores [0] [2023-07-04 15:42:12,703][22113] ConvEncoder: input_channels=3 [2023-07-04 15:42:12,717][22130] Worker 0 uses CPU cores [0] [2023-07-04 15:42:12,739][22136] Worker 5 uses CPU cores [1] [2023-07-04 15:42:12,742][22134] Worker 2 uses CPU cores [0] [2023-07-04 15:42:12,832][22113] Conv encoder output size: 512 [2023-07-04 15:42:12,833][22113] Policy head output size: 512 [2023-07-04 15:42:12,848][22113] Created Actor Critic model with architecture: [2023-07-04 15:42:12,848][22113] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( 
(running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) [2023-07-04 15:42:14,186][22113] Using optimizer [2023-07-04 15:42:14,187][22113] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth... [2023-07-04 15:42:14,218][22113] Loading model from checkpoint [2023-07-04 15:42:14,222][22113] Loaded experiment state at self.train_step=490, self.env_steps=2007040 [2023-07-04 15:42:14,222][22113] Initialized policy 0 weights for model version 490 [2023-07-04 15:42:14,225][22113] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-07-04 15:42:14,238][22113] LearnerWorker_p0 finished initialization! [2023-07-04 15:42:14,414][22126] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:42:14,415][22126] RunningMeanStd input shape: (1,) [2023-07-04 15:42:14,427][22126] ConvEncoder: input_channels=3 [2023-07-04 15:42:14,530][22126] Conv encoder output size: 512 [2023-07-04 15:42:14,530][22126] Policy head output size: 512 [2023-07-04 15:42:15,731][18333] Inference worker 0-0 is ready! [2023-07-04 15:42:15,733][18333] All inference workers are ready! Signal rollout workers to start! [2023-07-04 15:42:15,833][22136] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:15,834][22138] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:15,836][22133] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:15,837][22132] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:15,829][22134] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:15,840][22135] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:15,837][22137] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:15,838][22130] Doom resolution: 160x120, resize resolution: (128, 72) [2023-07-04 15:42:16,364][22134] Decorrelating experience for 0 frames... [2023-07-04 15:42:16,789][22135] Decorrelating experience for 0 frames... [2023-07-04 15:42:17,197][22132] Decorrelating experience for 0 frames... [2023-07-04 15:42:17,199][22133] Decorrelating experience for 0 frames... [2023-07-04 15:42:17,201][22136] Decorrelating experience for 0 frames... [2023-07-04 15:42:17,205][22138] Decorrelating experience for 0 frames... [2023-07-04 15:42:18,447][18333] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 2007040. Throughput: 0: nan. Samples: 0. 
Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:42:18,787][22130] Decorrelating experience for 0 frames... [2023-07-04 15:42:18,793][22136] Decorrelating experience for 32 frames... [2023-07-04 15:42:18,799][22133] Decorrelating experience for 32 frames... [2023-07-04 15:42:18,804][22138] Decorrelating experience for 32 frames... [2023-07-04 15:42:18,819][22134] Decorrelating experience for 32 frames... [2023-07-04 15:42:18,872][22135] Decorrelating experience for 32 frames... [2023-07-04 15:42:20,014][18333] Heartbeat connected on Batcher_0 [2023-07-04 15:42:20,019][18333] Heartbeat connected on LearnerWorker_p0 [2023-07-04 15:42:20,073][18333] Heartbeat connected on InferenceWorker_p0-w0 [2023-07-04 15:42:20,499][22137] Decorrelating experience for 0 frames... [2023-07-04 15:42:20,531][22130] Decorrelating experience for 32 frames... [2023-07-04 15:42:20,759][22135] Decorrelating experience for 64 frames... [2023-07-04 15:42:21,358][22132] Decorrelating experience for 32 frames... [2023-07-04 15:42:21,673][22136] Decorrelating experience for 64 frames... [2023-07-04 15:42:21,685][22133] Decorrelating experience for 64 frames... [2023-07-04 15:42:22,606][22138] Decorrelating experience for 64 frames... [2023-07-04 15:42:22,860][22130] Decorrelating experience for 64 frames... [2023-07-04 15:42:22,874][22137] Decorrelating experience for 32 frames... [2023-07-04 15:42:23,443][18333] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 2007040. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:42:23,493][22136] Decorrelating experience for 96 frames... [2023-07-04 15:42:23,661][22134] Decorrelating experience for 64 frames... [2023-07-04 15:42:23,829][18333] Heartbeat connected on RolloutWorker_w5 [2023-07-04 15:42:24,876][22132] Decorrelating experience for 64 frames... [2023-07-04 15:42:25,138][22138] Decorrelating experience for 96 frames... [2023-07-04 15:42:25,516][22137] Decorrelating experience for 64 frames... [2023-07-04 15:42:25,520][18333] Heartbeat connected on RolloutWorker_w7 [2023-07-04 15:42:26,625][22130] Decorrelating experience for 96 frames... [2023-07-04 15:42:27,023][22135] Decorrelating experience for 96 frames... [2023-07-04 15:42:27,323][18333] Heartbeat connected on RolloutWorker_w0 [2023-07-04 15:42:27,403][22134] Decorrelating experience for 96 frames... [2023-07-04 15:42:27,426][22133] Decorrelating experience for 96 frames... [2023-07-04 15:42:27,727][18333] Heartbeat connected on RolloutWorker_w4 [2023-07-04 15:42:27,884][18333] Heartbeat connected on RolloutWorker_w2 [2023-07-04 15:42:28,014][18333] Heartbeat connected on RolloutWorker_w1 [2023-07-04 15:42:28,444][18333] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 2007040. Throughput: 0: 104.4. Samples: 1044. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-07-04 15:42:28,450][18333] Avg episode reward: [(0, '5.920')] [2023-07-04 15:42:30,105][22137] Decorrelating experience for 96 frames... [2023-07-04 15:42:30,522][22113] Signal inference workers to stop experience collection... [2023-07-04 15:42:30,544][22126] InferenceWorker_p0-w0: stopping experience collection [2023-07-04 15:42:30,576][18333] Heartbeat connected on RolloutWorker_w6 [2023-07-04 15:42:30,706][22132] Decorrelating experience for 96 frames... [2023-07-04 15:42:30,763][18333] Heartbeat connected on RolloutWorker_w3 [2023-07-04 15:42:31,213][22113] Signal inference workers to resume experience collection... 
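Editor's note: the actor-critic printed when the learner re-initialized above (three Conv2d+ELU layers feeding a 512-unit linear layer, a GRU core of size 512, an identity decoder, a scalar critic head, and a 5-way action head) can be approximated in plain PyTorch as below. The convolution filter sizes are not stated in the log and are assumed from the default convnet_simple settings, so treat them as illustrative; the observation and return normalizers are omitted.

```python
# Rough PyTorch re-creation of the ActorCriticSharedWeights module printed above.
# Assumption: conv filters (32,8,4), (64,4,2), (128,3,2); the log only shows
# three Conv2d+ELU pairs, not their sizes.
import torch
import torch.nn as nn


class DoomActorCritic(nn.Module):
    def __init__(self, obs_shape=(3, 72, 128), num_actions=5, hidden=512):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(obs_shape[0], 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer the flattened size from a dummy observation
            n_flat = self.conv_head(torch.zeros(1, *obs_shape)).flatten(1).shape[1]
        self.mlp = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)                   # ModelCoreRNN in the log
        self.critic_linear = nn.Linear(hidden, 1)            # value head
        self.action_logits = nn.Linear(hidden, num_actions)  # distribution_linear

    def forward(self, obs, rnn_state=None):
        x = self.mlp(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # sequence length of 1
        x = x.squeeze(0)
        return self.action_logits(x), self.critic_linear(x), rnn_state
```

With the assumed filters this keeps the 512-dimensional encoder output and policy head reported in the "Conv encoder output size: 512" / "Policy head output size: 512" lines; only the intermediate conv shapes are guesses.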
[2023-07-04 15:42:31,213][22126] InferenceWorker_p0-w0: resuming experience collection [2023-07-04 15:42:33,444][18333] Fps is (10 sec: 1228.8, 60 sec: 819.4, 300 sec: 819.4). Total num frames: 2019328. Throughput: 0: 161.6. Samples: 2424. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) [2023-07-04 15:42:33,448][18333] Avg episode reward: [(0, '6.131')] [2023-07-04 15:42:38,444][18333] Fps is (10 sec: 3277.0, 60 sec: 1638.7, 300 sec: 1638.7). Total num frames: 2039808. Throughput: 0: 379.1. Samples: 7580. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:42:38,446][18333] Avg episode reward: [(0, '10.472')] [2023-07-04 15:42:39,885][22126] Updated weights for policy 0, policy_version 500 (0.0364) [2023-07-04 15:42:43,444][18333] Fps is (10 sec: 3686.4, 60 sec: 1966.4, 300 sec: 1966.4). Total num frames: 2056192. Throughput: 0: 513.3. Samples: 12830. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:42:43,448][18333] Avg episode reward: [(0, '12.881')] [2023-07-04 15:42:48,444][18333] Fps is (10 sec: 2867.2, 60 sec: 2048.3, 300 sec: 2048.3). Total num frames: 2068480. Throughput: 0: 491.9. Samples: 14756. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:42:48,454][18333] Avg episode reward: [(0, '15.054')] [2023-07-04 15:42:53,443][18333] Fps is (10 sec: 2867.2, 60 sec: 2223.8, 300 sec: 2223.8). Total num frames: 2084864. Throughput: 0: 537.4. Samples: 18808. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:42:53,446][18333] Avg episode reward: [(0, '15.502')] [2023-07-04 15:42:54,028][22126] Updated weights for policy 0, policy_version 510 (0.0012) [2023-07-04 15:42:58,444][18333] Fps is (10 sec: 3686.4, 60 sec: 2457.8, 300 sec: 2457.8). Total num frames: 2105344. Throughput: 0: 631.0. Samples: 25236. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:42:58,450][18333] Avg episode reward: [(0, '16.713')] [2023-07-04 15:43:03,444][18333] Fps is (10 sec: 4095.9, 60 sec: 2639.9, 300 sec: 2639.9). Total num frames: 2125824. Throughput: 0: 636.5. Samples: 28642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:43:03,447][18333] Avg episode reward: [(0, '20.677')] [2023-07-04 15:43:03,667][22126] Updated weights for policy 0, policy_version 520 (0.0018) [2023-07-04 15:43:08,444][18333] Fps is (10 sec: 3686.4, 60 sec: 2703.6, 300 sec: 2703.6). Total num frames: 2142208. Throughput: 0: 738.4. Samples: 33228. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:43:08,455][18333] Avg episode reward: [(0, '20.643')] [2023-07-04 15:43:13,444][18333] Fps is (10 sec: 2867.2, 60 sec: 2681.2, 300 sec: 2681.2). Total num frames: 2154496. Throughput: 0: 811.4. Samples: 37556. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:43:13,449][18333] Avg episode reward: [(0, '20.266')] [2023-07-04 15:43:16,601][22126] Updated weights for policy 0, policy_version 530 (0.0027) [2023-07-04 15:43:18,444][18333] Fps is (10 sec: 3276.8, 60 sec: 2799.1, 300 sec: 2799.1). Total num frames: 2174976. Throughput: 0: 844.5. Samples: 40426. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:43:18,446][18333] Avg episode reward: [(0, '19.739')] [2023-07-04 15:43:23,449][18333] Fps is (10 sec: 4503.0, 60 sec: 3208.2, 300 sec: 2961.6). Total num frames: 2199552. Throughput: 0: 876.5. Samples: 47026. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:43:23,456][18333] Avg episode reward: [(0, '19.858')] [2023-07-04 15:43:26,881][22126] Updated weights for policy 0, policy_version 540 (0.0017) [2023-07-04 15:43:28,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 2984.4). Total num frames: 2215936. Throughput: 0: 874.7. Samples: 52192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:43:28,447][18333] Avg episode reward: [(0, '18.952')] [2023-07-04 15:43:33,444][18333] Fps is (10 sec: 2868.7, 60 sec: 3481.6, 300 sec: 2949.2). Total num frames: 2228224. Throughput: 0: 878.7. Samples: 54300. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:43:33,449][18333] Avg episode reward: [(0, '18.617')] [2023-07-04 15:43:38,444][18333] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 2969.7). Total num frames: 2244608. Throughput: 0: 883.5. Samples: 58564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:43:38,446][18333] Avg episode reward: [(0, '19.696')] [2023-07-04 15:43:39,910][22126] Updated weights for policy 0, policy_version 550 (0.0015) [2023-07-04 15:43:43,445][18333] Fps is (10 sec: 3686.3, 60 sec: 3481.5, 300 sec: 3036.0). Total num frames: 2265088. Throughput: 0: 888.3. Samples: 65212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:43:43,450][18333] Avg episode reward: [(0, '20.749')] [2023-07-04 15:43:48,448][18333] Fps is (10 sec: 4094.1, 60 sec: 3617.9, 300 sec: 3094.7). Total num frames: 2285568. Throughput: 0: 886.9. Samples: 68558. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:43:48,451][18333] Avg episode reward: [(0, '20.008')] [2023-07-04 15:43:50,566][22126] Updated weights for policy 0, policy_version 560 (0.0015) [2023-07-04 15:43:53,444][18333] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 3061.3). Total num frames: 2297856. Throughput: 0: 883.6. Samples: 72990. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:43:53,450][18333] Avg episode reward: [(0, '20.912')] [2023-07-04 15:43:58,444][18333] Fps is (10 sec: 2868.5, 60 sec: 3481.6, 300 sec: 3072.1). Total num frames: 2314240. Throughput: 0: 878.1. Samples: 77072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:43:58,453][18333] Avg episode reward: [(0, '21.782')] [2023-07-04 15:43:58,465][22113] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000565_2314240.pth... [2023-07-04 15:43:58,661][22113] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000448_1835008.pth [2023-07-04 15:43:58,681][22113] Saving new best policy, reward=21.782! [2023-07-04 15:44:03,276][22126] Updated weights for policy 0, policy_version 570 (0.0027) [2023-07-04 15:44:03,443][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3120.9). Total num frames: 2334720. Throughput: 0: 873.0. Samples: 79712. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:44:03,450][18333] Avg episode reward: [(0, '21.747')] [2023-07-04 15:44:08,444][18333] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3165.2). Total num frames: 2355200. Throughput: 0: 876.9. Samples: 86482. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:44:08,446][18333] Avg episode reward: [(0, '22.334')] [2023-07-04 15:44:08,461][22113] Saving new best policy, reward=22.334! [2023-07-04 15:44:13,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3170.1). Total num frames: 2371584. Throughput: 0: 877.7. Samples: 91688. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:44:13,447][18333] Avg episode reward: [(0, '22.763')] [2023-07-04 15:44:13,454][22113] Saving new best policy, reward=22.763! [2023-07-04 15:44:14,147][22126] Updated weights for policy 0, policy_version 580 (0.0021) [2023-07-04 15:44:18,444][18333] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3140.3). Total num frames: 2383872. Throughput: 0: 873.9. Samples: 93626. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:44:18,448][18333] Avg episode reward: [(0, '23.183')] [2023-07-04 15:44:18,460][22113] Saving new best policy, reward=23.183! [2023-07-04 15:44:23,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3345.4, 300 sec: 3145.8). Total num frames: 2400256. Throughput: 0: 870.7. Samples: 97744. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:44:23,446][18333] Avg episode reward: [(0, '21.347')] [2023-07-04 15:44:26,586][22126] Updated weights for policy 0, policy_version 590 (0.0015) [2023-07-04 15:44:28,444][18333] Fps is (10 sec: 3686.6, 60 sec: 3413.3, 300 sec: 3182.4). Total num frames: 2420736. Throughput: 0: 869.7. Samples: 104346. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:44:28,445][18333] Avg episode reward: [(0, '21.713')] [2023-07-04 15:44:33,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3216.2). Total num frames: 2441216. Throughput: 0: 868.1. Samples: 107620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:44:33,446][18333] Avg episode reward: [(0, '20.100')] [2023-07-04 15:44:38,163][22126] Updated weights for policy 0, policy_version 600 (0.0012) [2023-07-04 15:44:38,445][18333] Fps is (10 sec: 3686.0, 60 sec: 3549.8, 300 sec: 3218.3). Total num frames: 2457600. Throughput: 0: 866.4. Samples: 111978. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:44:38,454][18333] Avg episode reward: [(0, '20.039')] [2023-07-04 15:44:43,444][18333] Fps is (10 sec: 2867.0, 60 sec: 3413.4, 300 sec: 3192.1). Total num frames: 2469888. Throughput: 0: 870.0. Samples: 116222. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:44:43,449][18333] Avg episode reward: [(0, '19.196')] [2023-07-04 15:44:48,443][18333] Fps is (10 sec: 3686.8, 60 sec: 3481.9, 300 sec: 3249.6). Total num frames: 2494464. Throughput: 0: 875.0. Samples: 119086. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) [2023-07-04 15:44:48,446][18333] Avg episode reward: [(0, '19.491')] [2023-07-04 15:44:49,316][22126] Updated weights for policy 0, policy_version 610 (0.0022) [2023-07-04 15:44:53,444][18333] Fps is (10 sec: 4505.8, 60 sec: 3618.1, 300 sec: 3276.9). Total num frames: 2514944. Throughput: 0: 873.4. Samples: 125786. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:44:53,446][18333] Avg episode reward: [(0, '19.441')] [2023-07-04 15:44:58,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3251.3). Total num frames: 2527232. Throughput: 0: 868.0. Samples: 130746. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) [2023-07-04 15:44:58,446][18333] Avg episode reward: [(0, '20.312')] [2023-07-04 15:45:01,652][22126] Updated weights for policy 0, policy_version 620 (0.0012) [2023-07-04 15:45:03,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3252.1). Total num frames: 2543616. Throughput: 0: 870.8. Samples: 132812. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:45:03,446][18333] Avg episode reward: [(0, '20.249')] [2023-07-04 15:45:08,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3252.8). Total num frames: 2560000. 
Throughput: 0: 875.5. Samples: 137142. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:45:08,446][18333] Avg episode reward: [(0, '19.822')] [2023-07-04 15:45:12,866][22126] Updated weights for policy 0, policy_version 630 (0.0017) [2023-07-04 15:45:13,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3276.9). Total num frames: 2580480. Throughput: 0: 875.6. Samples: 143748. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:45:13,446][18333] Avg episode reward: [(0, '20.569')] [2023-07-04 15:45:18,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3299.6). Total num frames: 2600960. Throughput: 0: 875.8. Samples: 147032. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:45:18,446][18333] Avg episode reward: [(0, '21.334')] [2023-07-04 15:45:23,445][18333] Fps is (10 sec: 3276.2, 60 sec: 3549.8, 300 sec: 3276.8). Total num frames: 2613248. Throughput: 0: 881.4. Samples: 151640. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:45:23,447][18333] Avg episode reward: [(0, '21.972')] [2023-07-04 15:45:25,316][22126] Updated weights for policy 0, policy_version 640 (0.0026) [2023-07-04 15:45:28,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3276.9). Total num frames: 2629632. Throughput: 0: 877.9. Samples: 155728. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:45:28,447][18333] Avg episode reward: [(0, '21.312')] [2023-07-04 15:45:33,444][18333] Fps is (10 sec: 3687.0, 60 sec: 3481.6, 300 sec: 3297.9). Total num frames: 2650112. Throughput: 0: 875.6. Samples: 158490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:45:33,446][18333] Avg episode reward: [(0, '21.985')] [2023-07-04 15:45:36,100][22126] Updated weights for policy 0, policy_version 650 (0.0019) [2023-07-04 15:45:38,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3317.8). Total num frames: 2670592. Throughput: 0: 874.2. Samples: 165124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:45:38,446][18333] Avg episode reward: [(0, '21.232')] [2023-07-04 15:45:43,446][18333] Fps is (10 sec: 3685.4, 60 sec: 3618.0, 300 sec: 3316.8). Total num frames: 2686976. Throughput: 0: 876.4. Samples: 170188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:45:43,449][18333] Avg episode reward: [(0, '20.670')] [2023-07-04 15:45:48,444][18333] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 3296.4). Total num frames: 2699264. Throughput: 0: 877.1. Samples: 172282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:45:48,449][18333] Avg episode reward: [(0, '19.693')] [2023-07-04 15:45:49,212][22126] Updated weights for policy 0, policy_version 660 (0.0027) [2023-07-04 15:45:53,444][18333] Fps is (10 sec: 2867.9, 60 sec: 3345.1, 300 sec: 3295.9). Total num frames: 2715648. Throughput: 0: 876.9. Samples: 176604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:45:53,446][18333] Avg episode reward: [(0, '18.681')] [2023-07-04 15:45:58,444][18333] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3332.7). Total num frames: 2740224. Throughput: 0: 877.8. Samples: 183250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:45:58,453][18333] Avg episode reward: [(0, '17.263')] [2023-07-04 15:45:58,465][22113] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000669_2740224.pth... 
[2023-07-04 15:45:58,593][22113] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth [2023-07-04 15:45:59,224][22126] Updated weights for policy 0, policy_version 670 (0.0013) [2023-07-04 15:46:03,446][18333] Fps is (10 sec: 4094.9, 60 sec: 3549.7, 300 sec: 3331.4). Total num frames: 2756608. Throughput: 0: 878.6. Samples: 186570. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:46:03,451][18333] Avg episode reward: [(0, '17.009')] [2023-07-04 15:46:08,443][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3330.3). Total num frames: 2772992. Throughput: 0: 872.7. Samples: 190908. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:46:08,445][18333] Avg episode reward: [(0, '17.890')] [2023-07-04 15:46:12,802][22126] Updated weights for policy 0, policy_version 680 (0.0017) [2023-07-04 15:46:13,444][18333] Fps is (10 sec: 2868.0, 60 sec: 3413.3, 300 sec: 3311.7). Total num frames: 2785280. Throughput: 0: 874.4. Samples: 195076. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:46:13,447][18333] Avg episode reward: [(0, '18.755')] [2023-07-04 15:46:18,444][18333] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3328.0). Total num frames: 2805760. Throughput: 0: 880.7. Samples: 198124. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:46:18,451][18333] Avg episode reward: [(0, '19.946')] [2023-07-04 15:46:22,319][22126] Updated weights for policy 0, policy_version 690 (0.0015) [2023-07-04 15:46:23,444][18333] Fps is (10 sec: 4505.7, 60 sec: 3618.2, 300 sec: 3360.4). Total num frames: 2830336. Throughput: 0: 881.1. Samples: 204772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:46:23,446][18333] Avg episode reward: [(0, '21.407')] [2023-07-04 15:46:28,444][18333] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3342.4). Total num frames: 2842624. Throughput: 0: 875.4. Samples: 209578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:46:28,446][18333] Avg episode reward: [(0, '21.690')] [2023-07-04 15:46:33,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3341.1). Total num frames: 2859008. Throughput: 0: 875.0. Samples: 211656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:46:33,446][18333] Avg episode reward: [(0, '23.014')] [2023-07-04 15:46:36,290][22126] Updated weights for policy 0, policy_version 700 (0.0014) [2023-07-04 15:46:38,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3339.9). Total num frames: 2875392. Throughput: 0: 881.8. Samples: 216284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:46:38,449][18333] Avg episode reward: [(0, '22.682')] [2023-07-04 15:46:43,443][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.8, 300 sec: 3354.1). Total num frames: 2895872. Throughput: 0: 880.4. Samples: 222870. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:46:43,447][18333] Avg episode reward: [(0, '23.860')] [2023-07-04 15:46:43,454][22113] Saving new best policy, reward=23.860! [2023-07-04 15:46:45,418][22126] Updated weights for policy 0, policy_version 710 (0.0012) [2023-07-04 15:46:48,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3367.9). Total num frames: 2916352. Throughput: 0: 876.8. Samples: 226024. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:46:48,446][18333] Avg episode reward: [(0, '24.045')] [2023-07-04 15:46:48,452][22113] Saving new best policy, reward=24.045! [2023-07-04 15:46:53,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3351.3). 
Total num frames: 2928640. Throughput: 0: 873.4. Samples: 230210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:46:53,446][18333] Avg episode reward: [(0, '23.673')] [2023-07-04 15:46:58,444][18333] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 3350.0). Total num frames: 2945024. Throughput: 0: 874.7. Samples: 234436. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:46:58,449][18333] Avg episode reward: [(0, '23.703')] [2023-07-04 15:46:59,263][22126] Updated weights for policy 0, policy_version 720 (0.0019) [2023-07-04 15:47:03,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.8, 300 sec: 3363.1). Total num frames: 2965504. Throughput: 0: 879.2. Samples: 237686. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:47:03,449][18333] Avg episode reward: [(0, '24.067')] [2023-07-04 15:47:03,454][22113] Saving new best policy, reward=24.067! [2023-07-04 15:47:08,444][18333] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3375.7). Total num frames: 2985984. Throughput: 0: 878.6. Samples: 244308. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:47:08,453][18333] Avg episode reward: [(0, '23.020')] [2023-07-04 15:47:08,772][22126] Updated weights for policy 0, policy_version 730 (0.0014) [2023-07-04 15:47:13,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3374.0). Total num frames: 3002368. Throughput: 0: 878.3. Samples: 249102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:47:13,451][18333] Avg episode reward: [(0, '23.164')] [2023-07-04 15:47:18,444][18333] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 3014656. Throughput: 0: 878.1. Samples: 251172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:47:18,447][18333] Avg episode reward: [(0, '23.238')] [2023-07-04 15:47:22,534][22126] Updated weights for policy 0, policy_version 740 (0.0022) [2023-07-04 15:47:23,443][18333] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 3035136. Throughput: 0: 878.0. Samples: 255792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:47:23,449][18333] Avg episode reward: [(0, '22.522')] [2023-07-04 15:47:28,444][18333] Fps is (10 sec: 4096.2, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3055616. Throughput: 0: 880.2. Samples: 262480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:47:28,446][18333] Avg episode reward: [(0, '23.105')] [2023-07-04 15:47:32,139][22126] Updated weights for policy 0, policy_version 750 (0.0012) [2023-07-04 15:47:33,446][18333] Fps is (10 sec: 3685.6, 60 sec: 3549.7, 300 sec: 3498.9). Total num frames: 3072000. Throughput: 0: 881.4. Samples: 265690. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:47:33,454][18333] Avg episode reward: [(0, '22.716')] [2023-07-04 15:47:38,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3088384. Throughput: 0: 881.5. Samples: 269878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:47:38,450][18333] Avg episode reward: [(0, '23.030')] [2023-07-04 15:47:43,443][18333] Fps is (10 sec: 3277.5, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 3104768. Throughput: 0: 884.0. Samples: 274216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:47:43,451][18333] Avg episode reward: [(0, '22.901')] [2023-07-04 15:47:45,377][22126] Updated weights for policy 0, policy_version 760 (0.0024) [2023-07-04 15:47:48,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3125248. 
Throughput: 0: 885.2. Samples: 277518. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:47:48,451][18333] Avg episode reward: [(0, '22.048')] [2023-07-04 15:47:53,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3145728. Throughput: 0: 885.5. Samples: 284154. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:47:53,449][18333] Avg episode reward: [(0, '22.074')] [2023-07-04 15:47:55,522][22126] Updated weights for policy 0, policy_version 770 (0.0012) [2023-07-04 15:47:58,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3158016. Throughput: 0: 882.9. Samples: 288832. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-07-04 15:47:58,448][18333] Avg episode reward: [(0, '22.513')] [2023-07-04 15:47:58,459][22113] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000771_3158016.pth... [2023-07-04 15:47:58,641][22113] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000565_2314240.pth [2023-07-04 15:48:03,444][18333] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3498.9). Total num frames: 3174400. Throughput: 0: 880.9. Samples: 290812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:48:03,454][18333] Avg episode reward: [(0, '22.144')] [2023-07-04 15:48:08,381][22126] Updated weights for policy 0, policy_version 780 (0.0017) [2023-07-04 15:48:08,443][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3194880. Throughput: 0: 881.0. Samples: 295436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:48:08,448][18333] Avg episode reward: [(0, '21.483')] [2023-07-04 15:48:13,444][18333] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3215360. Throughput: 0: 884.0. Samples: 302262. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:48:13,445][18333] Avg episode reward: [(0, '22.172')] [2023-07-04 15:48:18,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3499.0). Total num frames: 3231744. Throughput: 0: 885.5. Samples: 305536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:48:18,446][18333] Avg episode reward: [(0, '23.374')] [2023-07-04 15:48:18,935][22126] Updated weights for policy 0, policy_version 790 (0.0021) [2023-07-04 15:48:23,443][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3248128. Throughput: 0: 885.7. Samples: 309734. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:48:23,446][18333] Avg episode reward: [(0, '23.333')] [2023-07-04 15:48:28,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 3260416. Throughput: 0: 884.2. Samples: 314006. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:48:28,447][18333] Avg episode reward: [(0, '23.753')] [2023-07-04 15:48:31,688][22126] Updated weights for policy 0, policy_version 800 (0.0014) [2023-07-04 15:48:33,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3512.8). Total num frames: 3280896. Throughput: 0: 876.0. Samples: 316940. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:48:33,446][18333] Avg episode reward: [(0, '23.964')] [2023-07-04 15:48:38,444][18333] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3305472. Throughput: 0: 873.5. Samples: 323462. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:48:38,446][18333] Avg episode reward: [(0, '24.770')] [2023-07-04 15:48:38,458][22113] Saving new best policy, reward=24.770! [2023-07-04 15:48:42,139][22126] Updated weights for policy 0, policy_version 810 (0.0017) [2023-07-04 15:48:43,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3317760. Throughput: 0: 879.9. Samples: 328428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:48:43,446][18333] Avg episode reward: [(0, '24.421')] [2023-07-04 15:48:48,444][18333] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 3334144. Throughput: 0: 882.4. Samples: 330520. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) [2023-07-04 15:48:48,449][18333] Avg episode reward: [(0, '23.703')] [2023-07-04 15:48:53,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3350528. Throughput: 0: 882.3. Samples: 335138. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:48:53,448][18333] Avg episode reward: [(0, '23.282')] [2023-07-04 15:48:54,630][22126] Updated weights for policy 0, policy_version 820 (0.0031) [2023-07-04 15:48:58,444][18333] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3371008. Throughput: 0: 876.4. Samples: 341698. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) [2023-07-04 15:48:58,447][18333] Avg episode reward: [(0, '23.090')] [2023-07-04 15:49:03,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 3391488. Throughput: 0: 879.3. Samples: 345104. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:49:03,448][18333] Avg episode reward: [(0, '22.572')] [2023-07-04 15:49:05,498][22126] Updated weights for policy 0, policy_version 830 (0.0012) [2023-07-04 15:49:08,448][18333] Fps is (10 sec: 3275.2, 60 sec: 3481.3, 300 sec: 3498.9). Total num frames: 3403776. Throughput: 0: 883.2. Samples: 349484. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:49:08,451][18333] Avg episode reward: [(0, '23.881')] [2023-07-04 15:49:13,444][18333] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3420160. Throughput: 0: 883.4. Samples: 353758. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:49:13,447][18333] Avg episode reward: [(0, '23.167')] [2023-07-04 15:49:17,533][22126] Updated weights for policy 0, policy_version 840 (0.0022) [2023-07-04 15:49:18,444][18333] Fps is (10 sec: 3688.1, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3440640. Throughput: 0: 883.9. Samples: 356714. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:49:18,450][18333] Avg episode reward: [(0, '23.383')] [2023-07-04 15:49:23,444][18333] Fps is (10 sec: 4505.9, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 3465216. Throughput: 0: 889.1. Samples: 363470. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:49:23,450][18333] Avg episode reward: [(0, '22.627')] [2023-07-04 15:49:28,447][18333] Fps is (10 sec: 3685.1, 60 sec: 3617.9, 300 sec: 3512.8). Total num frames: 3477504. Throughput: 0: 890.7. Samples: 368512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:49:28,450][18333] Avg episode reward: [(0, '22.567')] [2023-07-04 15:49:28,852][22126] Updated weights for policy 0, policy_version 850 (0.0016) [2023-07-04 15:49:33,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3512.9). Total num frames: 3493888. Throughput: 0: 889.2. Samples: 370536. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:49:33,449][18333] Avg episode reward: [(0, '21.095')] [2023-07-04 15:49:38,444][18333] Fps is (10 sec: 3278.0, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 3510272. Throughput: 0: 886.3. Samples: 375020. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:49:38,446][18333] Avg episode reward: [(0, '19.977')] [2023-07-04 15:49:40,792][22126] Updated weights for policy 0, policy_version 860 (0.0012) [2023-07-04 15:49:43,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3530752. Throughput: 0: 888.2. Samples: 381666. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:49:43,448][18333] Avg episode reward: [(0, '21.107')] [2023-07-04 15:49:48,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 3551232. Throughput: 0: 888.3. Samples: 385078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:49:48,450][18333] Avg episode reward: [(0, '21.707')] [2023-07-04 15:49:52,023][22126] Updated weights for policy 0, policy_version 870 (0.0027) [2023-07-04 15:49:53,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3563520. Throughput: 0: 889.1. Samples: 389490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:49:53,449][18333] Avg episode reward: [(0, '22.408')] [2023-07-04 15:49:58,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 3579904. Throughput: 0: 890.4. Samples: 393826. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:49:58,450][18333] Avg episode reward: [(0, '22.977')] [2023-07-04 15:49:58,463][22113] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000874_3579904.pth... [2023-07-04 15:49:58,653][22113] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000669_2740224.pth [2023-07-04 15:50:03,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3600384. Throughput: 0: 886.4. Samples: 396600. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:50:03,450][18333] Avg episode reward: [(0, '23.005')] [2023-07-04 15:50:03,827][22126] Updated weights for policy 0, policy_version 880 (0.0014) [2023-07-04 15:50:08,445][18333] Fps is (10 sec: 4095.3, 60 sec: 3618.3, 300 sec: 3526.7). Total num frames: 3620864. Throughput: 0: 881.2. Samples: 403124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:50:08,448][18333] Avg episode reward: [(0, '24.137')] [2023-07-04 15:50:13,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 3637248. Throughput: 0: 884.1. Samples: 408292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:50:13,449][18333] Avg episode reward: [(0, '24.242')] [2023-07-04 15:50:15,313][22126] Updated weights for policy 0, policy_version 890 (0.0015) [2023-07-04 15:50:18,444][18333] Fps is (10 sec: 3277.4, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3653632. Throughput: 0: 886.8. Samples: 410444. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:50:18,450][18333] Avg episode reward: [(0, '23.530')] [2023-07-04 15:50:23,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 3670016. Throughput: 0: 883.8. Samples: 414790. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:50:23,448][18333] Avg episode reward: [(0, '22.577')] [2023-07-04 15:50:26,894][22126] Updated weights for policy 0, policy_version 900 (0.0023) [2023-07-04 15:50:28,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3550.1, 300 sec: 3526.7). Total num frames: 3690496. Throughput: 0: 880.9. Samples: 421306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:50:28,446][18333] Avg episode reward: [(0, '23.107')] [2023-07-04 15:50:33,444][18333] Fps is (10 sec: 4096.1, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3710976. Throughput: 0: 878.8. Samples: 424622. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:50:33,446][18333] Avg episode reward: [(0, '24.071')] [2023-07-04 15:50:38,425][22126] Updated weights for policy 0, policy_version 910 (0.0012) [2023-07-04 15:50:38,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3526.8). Total num frames: 3727360. Throughput: 0: 882.0. Samples: 429182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-07-04 15:50:38,445][18333] Avg episode reward: [(0, '23.988')] [2023-07-04 15:50:43,445][18333] Fps is (10 sec: 2866.7, 60 sec: 3481.5, 300 sec: 3526.7). Total num frames: 3739648. Throughput: 0: 882.1. Samples: 433522. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:50:43,448][18333] Avg episode reward: [(0, '23.037')] [2023-07-04 15:50:48,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 3760128. Throughput: 0: 879.2. Samples: 436166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:50:48,451][18333] Avg episode reward: [(0, '24.210')] [2023-07-04 15:50:49,985][22126] Updated weights for policy 0, policy_version 920 (0.0040) [2023-07-04 15:50:53,444][18333] Fps is (10 sec: 4096.7, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3780608. Throughput: 0: 883.6. Samples: 442886. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:50:53,452][18333] Avg episode reward: [(0, '25.008')] [2023-07-04 15:50:53,455][22113] Saving new best policy, reward=25.008! [2023-07-04 15:50:58,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3526.8). Total num frames: 3796992. Throughput: 0: 882.3. Samples: 447996. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:50:58,450][18333] Avg episode reward: [(0, '24.569')] [2023-07-04 15:51:02,192][22126] Updated weights for policy 0, policy_version 930 (0.0018) [2023-07-04 15:51:03,444][18333] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 3809280. Throughput: 0: 880.2. Samples: 450052. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-07-04 15:51:03,449][18333] Avg episode reward: [(0, '24.889')] [2023-07-04 15:51:08,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3540.6). Total num frames: 3829760. Throughput: 0: 878.9. Samples: 454340. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:51:08,446][18333] Avg episode reward: [(0, '23.066')] [2023-07-04 15:51:13,187][22126] Updated weights for policy 0, policy_version 940 (0.0021) [2023-07-04 15:51:13,444][18333] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 3850240. Throughput: 0: 883.0. Samples: 461040. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:51:13,446][18333] Avg episode reward: [(0, '21.575')] [2023-07-04 15:51:18,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3870720. Throughput: 0: 884.0. Samples: 464404. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:51:18,450][18333] Avg episode reward: [(0, '21.145')] [2023-07-04 15:51:23,444][18333] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3883008. Throughput: 0: 887.0. Samples: 469096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-07-04 15:51:23,450][18333] Avg episode reward: [(0, '21.746')] [2023-07-04 15:51:25,562][22126] Updated weights for policy 0, policy_version 950 (0.0019) [2023-07-04 15:51:28,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3899392. Throughput: 0: 883.7. Samples: 473286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:51:28,445][18333] Avg episode reward: [(0, '20.778')] [2023-07-04 15:51:33,443][18333] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 3919872. Throughput: 0: 882.5. Samples: 475878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-07-04 15:51:33,447][18333] Avg episode reward: [(0, '22.279')] [2023-07-04 15:51:36,134][22126] Updated weights for policy 0, policy_version 960 (0.0030) [2023-07-04 15:51:38,444][18333] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 3940352. Throughput: 0: 883.6. Samples: 482650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:51:38,446][18333] Avg episode reward: [(0, '22.723')] [2023-07-04 15:51:43,444][18333] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3526.7). Total num frames: 3956736. Throughput: 0: 889.6. Samples: 488030. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-07-04 15:51:43,449][18333] Avg episode reward: [(0, '23.297')] [2023-07-04 15:51:48,444][18333] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3969024. Throughput: 0: 890.3. Samples: 490116. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-07-04 15:51:48,450][18333] Avg episode reward: [(0, '24.658')] [2023-07-04 15:51:48,604][22126] Updated weights for policy 0, policy_version 970 (0.0014) [2023-07-04 15:51:53,444][18333] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 3989504. Throughput: 0: 890.2. Samples: 494398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-07-04 15:51:53,446][18333] Avg episode reward: [(0, '23.262')] [2023-07-04 15:51:57,392][22113] Stopping Batcher_0... [2023-07-04 15:51:57,394][22113] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-07-04 15:51:57,395][18333] Component Batcher_0 stopped! [2023-07-04 15:51:57,395][22113] Loop batcher_evt_loop terminating... [2023-07-04 15:51:57,459][18333] Component RolloutWorker_w0 stopped! [2023-07-04 15:51:57,460][22138] Stopping RolloutWorker_w7... [2023-07-04 15:51:57,465][18333] Component RolloutWorker_w7 stopped! [2023-07-04 15:51:57,473][22138] Loop rollout_proc7_evt_loop terminating... [2023-07-04 15:51:57,470][22130] Stopping RolloutWorker_w0... [2023-07-04 15:51:57,479][18333] Component RolloutWorker_w2 stopped! [2023-07-04 15:51:57,484][22134] Stopping RolloutWorker_w2... [2023-07-04 15:51:57,478][22126] Weights refcount: 2 0 [2023-07-04 15:51:57,474][22130] Loop rollout_proc0_evt_loop terminating... [2023-07-04 15:51:57,492][22136] Stopping RolloutWorker_w5... [2023-07-04 15:51:57,491][18333] Component InferenceWorker_p0-w0 stopped! [2023-07-04 15:51:57,496][22132] Stopping RolloutWorker_w3... [2023-07-04 15:51:57,496][18333] Component RolloutWorker_w5 stopped! [2023-07-04 15:51:57,500][18333] Component RolloutWorker_w3 stopped! 
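The periodic "Fps is (10 sec / 60 sec / 300 sec)" records above report training throughput over trailing windows, alongside the cumulative frame counter and the current average episode reward. A minimal parsing sketch for exactly this record format (assuming the raw log is available as the string `log_text`; the regexes match only the lines shown here):

```python
import re

# Matches records such as:
# "Fps is (10 sec: 4094.9, 60 sec: 3549.7, 300 sec: 3331.4). Total num frames: 2756608."
FPS_RE = re.compile(
    r"Fps is \(10 sec: ([\d.]+), 60 sec: ([\d.]+), 300 sec: ([\d.]+)\)\. "
    r"Total num frames: (\d+)"
)
# Matches records such as: "Avg episode reward: [(0, '17.009')]"
REWARD_RE = re.compile(r"Avg episode reward: \[\(0, '(-?[\d.]+)'\)\]")

def parse_progress(log_text: str):
    """Extract (total_frames, fps_10s, fps_60s, fps_300s) tuples and the reward series."""
    progress = [
        (int(m.group(4)), float(m.group(1)), float(m.group(2)), float(m.group(3)))
        for m in FPS_RE.finditer(log_text)
    ]
    rewards = [float(m.group(1)) for m in REWARD_RE.finditer(log_text)]
    return progress, rewards

# Sanity check: the records arrive roughly every 5 s, so the "10 sec" figure should match
# the frame delta across two consecutive records, e.g. (2785280 - 2756608) / 10 = 2867.2
# frames/s, agreeing with the reported "10 sec: 2868.0" up to rounding.
```

The checkpoint files saved and rotated in the same span encode the policy version and the cumulative environment frame count in their names; in this run the two appear to be related by a factor of 4096 (for example, checkpoint_000000978_4005888.pth: 978 × 4096 = 4,005,888), i.e. roughly 4096 environment frames per policy version.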
[2023-07-04 15:51:57,493][22136] Loop rollout_proc5_evt_loop terminating... [2023-07-04 15:51:57,497][22132] Loop rollout_proc3_evt_loop terminating... [2023-07-04 15:51:57,505][22126] Stopping InferenceWorker_p0-w0... [2023-07-04 15:51:57,506][18333] Component RolloutWorker_w1 stopped! [2023-07-04 15:51:57,506][22133] Stopping RolloutWorker_w1... [2023-07-04 15:51:57,505][22126] Loop inference_proc0-0_evt_loop terminating... [2023-07-04 15:51:57,485][22134] Loop rollout_proc2_evt_loop terminating... [2023-07-04 15:51:57,510][22133] Loop rollout_proc1_evt_loop terminating... [2023-07-04 15:51:57,520][18333] Component RolloutWorker_w4 stopped! [2023-07-04 15:51:57,524][22135] Stopping RolloutWorker_w4... [2023-07-04 15:51:57,525][18333] Component RolloutWorker_w6 stopped! [2023-07-04 15:51:57,531][22137] Stopping RolloutWorker_w6... [2023-07-04 15:51:57,538][22135] Loop rollout_proc4_evt_loop terminating... [2023-07-04 15:51:57,532][22137] Loop rollout_proc6_evt_loop terminating... [2023-07-04 15:51:57,552][22113] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000771_3158016.pth [2023-07-04 15:51:57,564][22113] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-07-04 15:51:57,729][18333] Component LearnerWorker_p0 stopped! [2023-07-04 15:51:57,736][18333] Waiting for process learner_proc0 to stop... [2023-07-04 15:51:57,741][22113] Stopping LearnerWorker_p0... [2023-07-04 15:51:57,741][22113] Loop learner_proc0_evt_loop terminating... [2023-07-04 15:51:58,816][18333] Waiting for process inference_proc0-0 to join... [2023-07-04 15:51:58,821][18333] Waiting for process rollout_proc0 to join... [2023-07-04 15:52:00,036][18333] Waiting for process rollout_proc1 to join... [2023-07-04 15:52:00,199][18333] Waiting for process rollout_proc2 to join... [2023-07-04 15:52:00,201][18333] Waiting for process rollout_proc3 to join... [2023-07-04 15:52:00,205][18333] Waiting for process rollout_proc4 to join... [2023-07-04 15:52:00,208][18333] Waiting for process rollout_proc5 to join... [2023-07-04 15:52:00,211][18333] Waiting for process rollout_proc6 to join... [2023-07-04 15:52:00,215][18333] Waiting for process rollout_proc7 to join... 
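The shutdown sequence above follows the usual stop-then-join pattern: each component is asked to stop, its event loop terminates, and the runner then waits for every worker process to join. A minimal illustrative sketch of that pattern with plain `multiprocessing` (this is not Sample Factory's actual implementation; the worker function below is hypothetical):

```python
import multiprocessing as mp
import time

def rollout_worker(worker_id, stop_event):
    # Hypothetical stand-in for a rollout worker's event loop.
    while not stop_event.is_set():
        time.sleep(0.1)  # collect experience here
    print(f"Component RolloutWorker_w{worker_id} stopped!")

if __name__ == "__main__":
    stop_event = mp.Event()
    workers = [mp.Process(target=rollout_worker, args=(i, stop_event)) for i in range(8)]
    for w in workers:
        w.start()
    time.sleep(1.0)      # ... training would happen here ...
    stop_event.set()     # signal every worker to leave its loop
    for w in workers:
        w.join()         # "Waiting for process rollout_procN to join..."
```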
[2023-07-04 15:52:00,217][18333] Batcher 0 profile tree view: batching: 13.2010, releasing_batches: 0.0137 [2023-07-04 15:52:00,228][18333] InferenceWorker_p0-w0 profile tree view: wait_policy: 0.0055 wait_policy_total: 279.2133 update_model: 4.1925 weight_update: 0.0025 one_step: 0.0025 handle_policy_step: 276.3185 deserialize: 7.4829, stack: 1.5020, obs_to_device_normalize: 59.2136, forward: 138.0503, send_messages: 14.2878 prepare_outputs: 42.0752 to_cpu: 25.6465 [2023-07-04 15:52:00,230][18333] Learner 0 profile tree view: misc: 0.0025, prepare_batch: 9.8375 train: 39.7930 epoch_init: 0.0146, minibatch_init: 0.0032, losses_postprocess: 0.2770, kl_divergence: 0.3191, after_optimizer: 1.9968 calculate_losses: 12.3821 losses_init: 0.0018, forward_head: 1.0349, bptt_initial: 7.7057, tail: 0.5452, advantages_returns: 0.1694, losses: 1.6271 bptt: 1.1473 bptt_forward_core: 1.0992 update: 24.4481 clip: 0.7666 [2023-07-04 15:52:00,231][18333] RolloutWorker_w0 profile tree view: wait_for_trajectories: 0.2117, enqueue_policy_requests: 75.9103, env_step: 429.9611, overhead: 11.8503, complete_rollouts: 3.8320 save_policy_outputs: 10.6986 split_output_tensors: 5.2931 [2023-07-04 15:52:00,234][18333] RolloutWorker_w7 profile tree view: wait_for_trajectories: 0.1973, enqueue_policy_requests: 80.5694, env_step: 427.3777, overhead: 11.8470, complete_rollouts: 3.6744 save_policy_outputs: 10.4136 split_output_tensors: 5.1270 [2023-07-04 15:52:00,236][18333] Loop Runner_EvtLoop terminating... [2023-07-04 15:52:00,239][18333] Runner profile tree view: main_loop: 600.1708 [2023-07-04 15:52:00,244][18333] Collected {0: 4005888}, FPS: 3330.5 [2023-07-04 15:52:13,757][18333] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-07-04 15:52:13,759][18333] Overriding arg 'num_workers' with value 1 passed from command line [2023-07-04 15:52:13,761][18333] Adding new argument 'no_render'=True that is not in the saved config file! [2023-07-04 15:52:13,763][18333] Adding new argument 'save_video'=True that is not in the saved config file! [2023-07-04 15:52:13,764][18333] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-07-04 15:52:13,766][18333] Adding new argument 'video_name'=None that is not in the saved config file! [2023-07-04 15:52:13,768][18333] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-07-04 15:52:13,770][18333] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-07-04 15:52:13,771][18333] Adding new argument 'push_to_hub'=True that is not in the saved config file! [2023-07-04 15:52:13,772][18333] Adding new argument 'hf_repository'='HilbertS/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-07-04 15:52:13,773][18333] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-07-04 15:52:13,775][18333] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-07-04 15:52:13,776][18333] Adding new argument 'train_script'=None that is not in the saved config file! [2023-07-04 15:52:13,777][18333] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
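For the evaluation run, the saved experiment configuration is reloaded and the command-line values are applied on top of it, with a message when an argument either overrides a saved value or is absent from the saved file. A small sketch of that merge behaviour (illustrative only, not Sample Factory's own code; the path and example values are taken from the records above):

```python
import json
from pathlib import Path

def load_cfg_with_overrides(cfg_path: str, overrides: dict) -> dict:
    """Load a saved config and apply overrides, reporting what changed."""
    cfg = json.loads(Path(cfg_path).read_text())
    for key, value in overrides.items():
        if key in cfg:
            print(f"Overriding arg '{key}' with value {value!r} passed from command line")
        else:
            print(f"Adding new argument '{key}'={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg

# Example mirroring the log:
# cfg = load_cfg_with_overrides(
#     "/content/train_dir/default_experiment/config.json",
#     {"num_workers": 1, "no_render": True, "save_video": True, "max_num_episodes": 10},
# )
```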
[2023-07-04 15:52:13,778][18333] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-07-04 15:52:13,801][18333] RunningMeanStd input shape: (3, 72, 128) [2023-07-04 15:52:13,803][18333] RunningMeanStd input shape: (1,) [2023-07-04 15:52:13,817][18333] ConvEncoder: input_channels=3 [2023-07-04 15:52:13,852][18333] Conv encoder output size: 512 [2023-07-04 15:52:13,853][18333] Policy head output size: 512 [2023-07-04 15:52:13,874][18333] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-07-04 15:52:14,537][18333] Num frames 100... [2023-07-04 15:52:14,657][18333] Num frames 200... [2023-07-04 15:52:14,787][18333] Num frames 300... [2023-07-04 15:52:14,937][18333] Num frames 400... [2023-07-04 15:52:15,068][18333] Num frames 500... [2023-07-04 15:52:15,218][18333] Num frames 600... [2023-07-04 15:52:15,353][18333] Num frames 700... [2023-07-04 15:52:15,493][18333] Num frames 800... [2023-07-04 15:52:15,632][18333] Num frames 900... [2023-07-04 15:52:15,754][18333] Num frames 1000... [2023-07-04 15:52:15,938][18333] Num frames 1100... [2023-07-04 15:52:16,144][18333] Num frames 1200... [2023-07-04 15:52:16,329][18333] Num frames 1300... [2023-07-04 15:52:16,515][18333] Num frames 1400... [2023-07-04 15:52:16,694][18333] Num frames 1500... [2023-07-04 15:52:16,885][18333] Num frames 1600... [2023-07-04 15:52:17,064][18333] Num frames 1700... [2023-07-04 15:52:17,250][18333] Num frames 1800... [2023-07-04 15:52:17,430][18333] Num frames 1900... [2023-07-04 15:52:17,609][18333] Num frames 2000... [2023-07-04 15:52:17,785][18333] Num frames 2100... [2023-07-04 15:52:17,840][18333] Avg episode rewards: #0: 54.999, true rewards: #0: 21.000 [2023-07-04 15:52:17,843][18333] Avg episode reward: 54.999, avg true_objective: 21.000 [2023-07-04 15:52:18,019][18333] Num frames 2200... [2023-07-04 15:52:18,202][18333] Num frames 2300... [2023-07-04 15:52:18,390][18333] Num frames 2400... [2023-07-04 15:52:18,579][18333] Num frames 2500... [2023-07-04 15:52:18,764][18333] Num frames 2600... [2023-07-04 15:52:18,945][18333] Num frames 2700... [2023-07-04 15:52:19,127][18333] Num frames 2800... [2023-07-04 15:52:19,311][18333] Num frames 2900... [2023-07-04 15:52:19,510][18333] Num frames 3000... [2023-07-04 15:52:19,694][18333] Num frames 3100... [2023-07-04 15:52:19,882][18333] Num frames 3200... [2023-07-04 15:52:20,067][18333] Num frames 3300... [2023-07-04 15:52:20,251][18333] Num frames 3400... [2023-07-04 15:52:20,436][18333] Num frames 3500... [2023-07-04 15:52:20,617][18333] Num frames 3600... [2023-07-04 15:52:20,802][18333] Num frames 3700... [2023-07-04 15:52:21,035][18333] Avg episode rewards: #0: 48.979, true rewards: #0: 18.980 [2023-07-04 15:52:21,037][18333] Avg episode reward: 48.979, avg true_objective: 18.980 [2023-07-04 15:52:21,047][18333] Num frames 3800... [2023-07-04 15:52:21,222][18333] Num frames 3900... [2023-07-04 15:52:21,353][18333] Num frames 4000... [2023-07-04 15:52:21,484][18333] Avg episode rewards: #0: 33.506, true rewards: #0: 13.507 [2023-07-04 15:52:21,486][18333] Avg episode reward: 33.506, avg true_objective: 13.507 [2023-07-04 15:52:21,552][18333] Num frames 4100... [2023-07-04 15:52:21,678][18333] Num frames 4200... [2023-07-04 15:52:21,808][18333] Num frames 4300... [2023-07-04 15:52:21,935][18333] Num frames 4400... [2023-07-04 15:52:22,072][18333] Num frames 4500... [2023-07-04 15:52:22,198][18333] Num frames 4600... [2023-07-04 15:52:22,332][18333] Num frames 4700... 
[2023-07-04 15:52:22,462][18333] Num frames 4800... [2023-07-04 15:52:22,603][18333] Num frames 4900... [2023-07-04 15:52:22,735][18333] Num frames 5000... [2023-07-04 15:52:22,869][18333] Num frames 5100... [2023-07-04 15:52:23,009][18333] Num frames 5200... [2023-07-04 15:52:23,142][18333] Num frames 5300... [2023-07-04 15:52:23,285][18333] Num frames 5400... [2023-07-04 15:52:23,422][18333] Num frames 5500... [2023-07-04 15:52:23,551][18333] Num frames 5600... [2023-07-04 15:52:23,632][18333] Avg episode rewards: #0: 35.050, true rewards: #0: 14.050 [2023-07-04 15:52:23,634][18333] Avg episode reward: 35.050, avg true_objective: 14.050 [2023-07-04 15:52:23,740][18333] Num frames 5700... [2023-07-04 15:52:23,866][18333] Num frames 5800... [2023-07-04 15:52:23,992][18333] Num frames 5900... [2023-07-04 15:52:24,140][18333] Avg episode rewards: #0: 28.944, true rewards: #0: 11.944 [2023-07-04 15:52:24,145][18333] Avg episode reward: 28.944, avg true_objective: 11.944 [2023-07-04 15:52:24,180][18333] Num frames 6000... [2023-07-04 15:52:24,305][18333] Num frames 6100... [2023-07-04 15:52:24,485][18333] Avg episode rewards: #0: 24.493, true rewards: #0: 10.327 [2023-07-04 15:52:24,487][18333] Avg episode reward: 24.493, avg true_objective: 10.327 [2023-07-04 15:52:24,497][18333] Num frames 6200... [2023-07-04 15:52:24,632][18333] Num frames 6300... [2023-07-04 15:52:24,754][18333] Num frames 6400... [2023-07-04 15:52:24,880][18333] Num frames 6500... [2023-07-04 15:52:24,999][18333] Num frames 6600... [2023-07-04 15:52:25,128][18333] Num frames 6700... [2023-07-04 15:52:25,257][18333] Num frames 6800... [2023-07-04 15:52:25,402][18333] Num frames 6900... [2023-07-04 15:52:25,545][18333] Num frames 7000... [2023-07-04 15:52:25,669][18333] Num frames 7100... [2023-07-04 15:52:25,796][18333] Num frames 7200... [2023-07-04 15:52:25,922][18333] Num frames 7300... [2023-07-04 15:52:26,053][18333] Num frames 7400... [2023-07-04 15:52:26,199][18333] Num frames 7500... [2023-07-04 15:52:26,327][18333] Num frames 7600... [2023-07-04 15:52:26,475][18333] Avg episode rewards: #0: 26.954, true rewards: #0: 10.954 [2023-07-04 15:52:26,478][18333] Avg episode reward: 26.954, avg true_objective: 10.954 [2023-07-04 15:52:26,517][18333] Num frames 7700... [2023-07-04 15:52:26,643][18333] Num frames 7800... [2023-07-04 15:52:26,774][18333] Num frames 7900... [2023-07-04 15:52:26,895][18333] Num frames 8000... [2023-07-04 15:52:27,014][18333] Num frames 8100... [2023-07-04 15:52:27,138][18333] Num frames 8200... [2023-07-04 15:52:27,258][18333] Num frames 8300... [2023-07-04 15:52:27,380][18333] Num frames 8400... [2023-07-04 15:52:27,517][18333] Num frames 8500... [2023-07-04 15:52:27,656][18333] Avg episode rewards: #0: 25.955, true rewards: #0: 10.705 [2023-07-04 15:52:27,658][18333] Avg episode reward: 25.955, avg true_objective: 10.705 [2023-07-04 15:52:27,703][18333] Num frames 8600... [2023-07-04 15:52:27,870][18333] Avg episode rewards: #0: 23.213, true rewards: #0: 9.658 [2023-07-04 15:52:27,872][18333] Avg episode reward: 23.213, avg true_objective: 9.658 [2023-07-04 15:52:27,885][18333] Num frames 8700... [2023-07-04 15:52:28,010][18333] Num frames 8800... [2023-07-04 15:52:28,130][18333] Num frames 8900... [2023-07-04 15:52:28,258][18333] Num frames 9000... [2023-07-04 15:52:28,381][18333] Num frames 9100... [2023-07-04 15:52:28,518][18333] Num frames 9200... 
[2023-07-04 15:52:28,579][18333] Avg episode rewards: #0: 21.704, true rewards: #0: 9.204 [2023-07-04 15:52:28,581][18333] Avg episode reward: 21.704, avg true_objective: 9.204 [2023-07-04 15:53:26,952][18333] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
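The final record reports where the evaluation replay was written. In a Colab or Jupyter session the resulting file can be previewed inline with IPython's video display helper (a usage sketch; the path is the one from the log, and embed=True inlines the video bytes in the notebook output):

```python
from IPython.display import Video

# Preview the replay produced by the evaluation run above.
Video(
    "/content/train_dir/default_experiment/replay.mp4",
    embed=True,   # embed the video data directly in the notebook output
    width=640,
)
```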