[2024-08-16 15:00:33,731][09795] Saving configuration to /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/config.json...
[2024-08-16 15:00:33,732][09795] Rollout worker 0 uses device cpu
[2024-08-16 15:00:33,732][09795] Rollout worker 1 uses device cpu
[2024-08-16 15:00:33,732][09795] Rollout worker 2 uses device cpu
[2024-08-16 15:00:33,733][09795] Rollout worker 3 uses device cpu
[2024-08-16 15:00:33,733][09795] Rollout worker 4 uses device cpu
[2024-08-16 15:00:33,733][09795] Rollout worker 5 uses device cpu
[2024-08-16 15:00:33,733][09795] Rollout worker 6 uses device cpu
[2024-08-16 15:00:33,733][09795] Rollout worker 7 uses device cpu
[2024-08-16 15:00:33,773][09795] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 15:00:33,774][09795] InferenceWorker_p0-w0: min num requests: 2
[2024-08-16 15:00:33,804][09795] Starting all processes...
[2024-08-16 15:00:33,805][09795] Starting process learner_proc0
[2024-08-16 15:00:34,179][09795] Starting all processes...
[2024-08-16 15:00:34,183][09795] Starting process inference_proc0-0
[2024-08-16 15:00:34,183][09795] Starting process rollout_proc0
[2024-08-16 15:00:34,183][09795] Starting process rollout_proc1
[2024-08-16 15:00:34,184][09795] Starting process rollout_proc2
[2024-08-16 15:00:34,184][09795] Starting process rollout_proc3
[2024-08-16 15:00:34,184][09795] Starting process rollout_proc4
[2024-08-16 15:00:34,184][09795] Starting process rollout_proc5
[2024-08-16 15:00:34,184][09795] Starting process rollout_proc6
[2024-08-16 15:00:34,184][09795] Starting process rollout_proc7
[2024-08-16 15:00:36,347][19834] Worker 4 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,389][19831] Worker 0 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,463][19836] Worker 6 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,485][19830] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 15:00:36,485][19830] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-08-16 15:00:36,500][19830] Num visible devices: 1
[2024-08-16 15:00:36,512][19832] Worker 3 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,512][19835] Worker 2 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,522][19817] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 15:00:36,522][19817] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2024-08-16 15:00:36,535][19817] Num visible devices: 1
[2024-08-16 15:00:36,539][19817] Starting seed is not provided
[2024-08-16 15:00:36,539][19817] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 15:00:36,539][19817] Initializing actor-critic model on device cuda:0
[2024-08-16 15:00:36,539][19817] RunningMeanStd input shape: (3, 72, 128)
[2024-08-16 15:00:36,544][19817] RunningMeanStd input shape: (1,)
[2024-08-16 15:00:36,550][19833] Worker 1 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,553][19817] ConvEncoder: input_channels=3
[2024-08-16 15:00:36,561][19838] Worker 5 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,584][19837] Worker 7 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[2024-08-16 15:00:36,653][19817] Conv encoder output size: 512
[2024-08-16 15:00:36,653][19817] Policy head output size: 512
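The entries above show Sample Factory's asynchronous PPO topology: one learner (pid 19817) and one inference worker (19830) share GPU 0, while eight rollout workers (19831-19838) step ViZDoom on CPU. A run producing a log like this is typically launched through the ViZDoom example's training entry point; a minimal sketch, assuming the standard sf_examples module layout (the environment name and step budget are inferred from this log, not stated in it):

    # Hypothetical launch sketch. --env is assumed from the
    # gathering_reward_shaping wrapper seen later in the log; worker count,
    # train_dir, and experiment name are taken from the log itself.
    import sys
    from sf_examples.vizdoom.train_vizdoom import main

    sys.argv = [
        "train_vizdoom",
        "--env=doom_health_gathering_supreme",  # assumed scenario
        "--num_workers=8",
        "--train_for_env_steps=4000000",        # assumed budget; run stops at 4,005,888
        "--train_dir=/media/nguyen-duc-huy/E/Code/Deep_RL/train_dir",
        "--experiment=default_experiment",
    ]
    sys.exit(main())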
[2024-08-16 15:00:36,671][19817] Created Actor Critic model with architecture:
[2024-08-16 15:00:36,671][19817] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2024-08-16 15:00:36,886][19817] Using optimizer
[2024-08-16 15:00:37,506][19817] No checkpoints found
[2024-08-16 15:00:37,506][19817] Did not load from checkpoint, starting from scratch!
[2024-08-16 15:00:37,506][19817] Initialized policy 0 weights for model version 0
[2024-08-16 15:00:37,509][19817] LearnerWorker_p0 finished initialization!
[2024-08-16 15:00:37,509][19817] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 15:00:37,655][19830] RunningMeanStd input shape: (3, 72, 128)
[2024-08-16 15:00:37,656][19830] RunningMeanStd input shape: (1,)
[2024-08-16 15:00:37,664][19830] ConvEncoder: input_channels=3
[2024-08-16 15:00:37,732][19830] Conv encoder output size: 512
[2024-08-16 15:00:37,732][19830] Policy head output size: 512
[2024-08-16 15:00:37,760][09795] Inference worker 0-0 is ready!
[2024-08-16 15:00:37,760][09795] All inference workers are ready! Signal rollout workers to start!
[2024-08-16 15:00:37,795][19834] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,796][19838] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,796][19831] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,796][19832] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,807][19833] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,807][19836] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,807][19835] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,810][19837] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:00:37,865][19832] VizDoom game.init() threw an exception ViZDoomUnexpectedExitException('Controlled ViZDoom instance exited unexpectedly.'). Terminate process...
[2024-08-16 15:00:37,865][19832] EvtLoop [rollout_proc3_evt_loop, process=rollout_proc3] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=()
Traceback (most recent call last):
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init
    self.game.init()
vizdoom.vizdoom.ViZDoomUnexpectedExitException: Controlled ViZDoom instance exited unexpectedly.
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init
    env_runner.init(self.timing)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init
    self._reset()
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset
    observations, info = e.reset(seed=seed)  # new way of doing seeding since Gym 0.26.0
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sample_factory/algo/utils/make_env.py", line 125, in reset
    obs, info = self.env.reset(**kwargs)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sample_factory/algo/utils/make_env.py", line 110, in reset
    obs, info = self.env.reset(**kwargs)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset
    return self.env.reset(**kwargs)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/gymnasium/core.py", line 515, in reset
    obs, info = self.env.reset(seed=seed, options=options)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sample_factory/envs/env_wrappers.py", line 82, in reset
    obs, info = self.env.reset(**kwargs)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset
    return self.env.reset(**kwargs)
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset
    self._ensure_initialized()
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized
    self.initialize()
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize
    self._game_init()
  File "/media/nguyen-duc-huy/E/anaconda3/envs/rl-project/lib/python3.10/site-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init
    raise EnvCriticalError()
sample_factory.envs.env_utils.EnvCriticalError
[2024-08-16 15:00:37,866][19832] Unhandled exception in evt loop rollout_proc3_evt_loop
[2024-08-16 15:00:38,007][19831] Decorrelating experience for 0 frames...
[2024-08-16 15:00:38,011][19836] Decorrelating experience for 0 frames...
[2024-08-16 15:00:38,013][19837] Decorrelating experience for 0 frames...
[2024-08-16 15:00:38,073][19838] Decorrelating experience for 0 frames...
[2024-08-16 15:00:38,076][19834] Decorrelating experience for 0 frames...
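Rollout worker 3 (pid 19832) failed because its ViZDoom instance exited during game.init(); Sample Factory terminates that worker and continues with the remaining seven, which is why the shutdown phase later logs "Component RolloutWorker_w3 process died already! Don't wait for it." A quick way to reproduce such an init failure outside the framework is a bare headless ViZDoom session; a minimal sketch (the scenario file is an assumption, any bundled .cfg serves the purpose):

    # Standalone ViZDoom init check, independent of Sample Factory.
    import os
    import vizdoom as vzd

    game = vzd.DoomGame()
    game.load_config(os.path.join(vzd.scenarios_path, "health_gathering_supreme.cfg"))
    game.set_window_visible(False)  # headless, like a rollout worker
    game.init()   # raises ViZDoomUnexpectedExitException if the engine dies
    game.close()
    print("ViZDoom initialized and shut down cleanly")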
[2024-08-16 15:00:38,179][19836] Decorrelating experience for 32 frames...
[2024-08-16 15:00:38,180][19837] Decorrelating experience for 32 frames...
[2024-08-16 15:00:38,224][19831] Decorrelating experience for 32 frames...
[2024-08-16 15:00:38,226][19835] Decorrelating experience for 0 frames...
[2024-08-16 15:00:38,270][19833] Decorrelating experience for 0 frames...
[2024-08-16 15:00:38,393][19838] Decorrelating experience for 32 frames...
[2024-08-16 15:00:38,396][19835] Decorrelating experience for 32 frames...
[2024-08-16 15:00:38,423][19836] Decorrelating experience for 64 frames...
[2024-08-16 15:00:38,423][19834] Decorrelating experience for 32 frames...
[2024-08-16 15:00:38,437][19833] Decorrelating experience for 32 frames...
[2024-08-16 15:00:38,634][19836] Decorrelating experience for 96 frames...
[2024-08-16 15:00:38,643][19838] Decorrelating experience for 64 frames...
[2024-08-16 15:00:38,652][19831] Decorrelating experience for 64 frames...
[2024-08-16 15:00:38,682][19833] Decorrelating experience for 64 frames...
[2024-08-16 15:00:38,682][19835] Decorrelating experience for 64 frames...
[2024-08-16 15:00:38,811][19834] Decorrelating experience for 64 frames...
[2024-08-16 15:00:38,870][19831] Decorrelating experience for 96 frames...
[2024-08-16 15:00:38,871][19833] Decorrelating experience for 96 frames...
[2024-08-16 15:00:38,880][19835] Decorrelating experience for 96 frames...
[2024-08-16 15:00:38,898][19837] Decorrelating experience for 64 frames...
[2024-08-16 15:00:38,999][19834] Decorrelating experience for 96 frames...
[2024-08-16 15:00:39,034][19838] Decorrelating experience for 96 frames...
[2024-08-16 15:00:39,229][19837] Decorrelating experience for 96 frames...
[2024-08-16 15:00:39,423][09795] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-08-16 15:00:39,424][09795] Avg episode reward: [(0, '1.092')]
[2024-08-16 15:00:39,814][19817] Signal inference workers to stop experience collection...
[2024-08-16 15:00:39,818][19830] InferenceWorker_p0-w0: stopping experience collection
[2024-08-16 15:00:41,087][19817] Signal inference workers to resume experience collection...
[2024-08-16 15:00:41,088][19830] InferenceWorker_p0-w0: resuming experience collection
[2024-08-16 15:00:43,180][19830] Updated weights for policy 0, policy_version 10 (0.0113)
[2024-08-16 15:00:44,423][09795] Fps is (10 sec: 12287.7, 60 sec: 12287.7, 300 sec: 12287.7). Total num frames: 61440. Throughput: 0: 2707.1. Samples: 13536. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 15:00:44,424][09795] Avg episode reward: [(0, '4.447')]
[2024-08-16 15:00:45,464][19830] Updated weights for policy 0, policy_version 20 (0.0008)
[2024-08-16 15:00:47,818][19830] Updated weights for policy 0, policy_version 30 (0.0009)
[2024-08-16 15:00:49,423][09795] Fps is (10 sec: 15155.2, 60 sec: 15155.2, 300 sec: 15155.2). Total num frames: 151552. Throughput: 0: 2645.4. Samples: 26454. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2024-08-16 15:00:49,424][09795] Avg episode reward: [(0, '4.560')]
[2024-08-16 15:00:49,427][19817] Saving new best policy, reward=4.560!
[2024-08-16 15:00:49,917][19830] Updated weights for policy 0, policy_version 40 (0.0008)
[2024-08-16 15:00:52,105][19830] Updated weights for policy 0, policy_version 50 (0.0008)
[2024-08-16 15:00:53,769][09795] Heartbeat connected on Batcher_0
[2024-08-16 15:00:53,779][09795] Heartbeat connected on LearnerWorker_p0
[2024-08-16 15:00:53,781][09795] Heartbeat connected on RolloutWorker_w0
[2024-08-16 15:00:53,781][09795] Heartbeat connected on RolloutWorker_w1
[2024-08-16 15:00:53,782][09795] Heartbeat connected on InferenceWorker_p0-w0
[2024-08-16 15:00:53,782][09795] Heartbeat connected on RolloutWorker_w2
[2024-08-16 15:00:53,787][09795] Heartbeat connected on RolloutWorker_w4
[2024-08-16 15:00:53,789][09795] Heartbeat connected on RolloutWorker_w5
[2024-08-16 15:00:53,792][09795] Heartbeat connected on RolloutWorker_w6
[2024-08-16 15:00:53,804][09795] Heartbeat connected on RolloutWorker_w7
[2024-08-16 15:00:54,170][19830] Updated weights for policy 0, policy_version 60 (0.0007)
[2024-08-16 15:00:54,423][09795] Fps is (10 sec: 18841.8, 60 sec: 16657.1, 300 sec: 16657.1). Total num frames: 249856. Throughput: 0: 3682.7. Samples: 55240. Policy #0 lag: (min: 0.0, avg: 0.8, max: 1.0)
[2024-08-16 15:00:54,424][09795] Avg episode reward: [(0, '4.325')]
[2024-08-16 15:00:56,365][19830] Updated weights for policy 0, policy_version 70 (0.0007)
[2024-08-16 15:00:58,557][19830] Updated weights for policy 0, policy_version 80 (0.0008)
[2024-08-16 15:00:59,423][09795] Fps is (10 sec: 19251.3, 60 sec: 17203.2, 300 sec: 17203.2). Total num frames: 344064. Throughput: 0: 4178.0. Samples: 83560. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 15:00:59,424][09795] Avg episode reward: [(0, '4.428')]
[2024-08-16 15:01:00,612][19830] Updated weights for policy 0, policy_version 90 (0.0008)
[2024-08-16 15:01:02,841][19830] Updated weights for policy 0, policy_version 100 (0.0008)
[2024-08-16 15:01:04,423][09795] Fps is (10 sec: 18841.4, 60 sec: 17530.8, 300 sec: 17530.8). Total num frames: 438272. Throughput: 0: 3926.3. Samples: 98158. Policy #0 lag: (min: 0.0, avg: 0.8, max: 1.0)
[2024-08-16 15:01:04,424][09795] Avg episode reward: [(0, '4.555')]
[2024-08-16 15:01:05,072][19830] Updated weights for policy 0, policy_version 110 (0.0009)
[2024-08-16 15:01:07,573][19830] Updated weights for policy 0, policy_version 120 (0.0009)
[2024-08-16 15:01:09,423][09795] Fps is (10 sec: 17612.6, 60 sec: 17339.7, 300 sec: 17339.7). Total num frames: 520192. Throughput: 0: 4151.1. Samples: 124534. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-08-16 15:01:09,424][09795] Avg episode reward: [(0, '4.401')]
[2024-08-16 15:01:09,911][19830] Updated weights for policy 0, policy_version 130 (0.0009)
[2024-08-16 15:01:12,264][19830] Updated weights for policy 0, policy_version 140 (0.0009)
[2024-08-16 15:01:14,423][09795] Fps is (10 sec: 16793.6, 60 sec: 17320.2, 300 sec: 17320.2). Total num frames: 606208. Throughput: 0: 4290.8. Samples: 150180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-08-16 15:01:14,424][09795] Avg episode reward: [(0, '4.590')]
[2024-08-16 15:01:14,426][19817] Saving new best policy, reward=4.590!
[2024-08-16 15:01:14,726][19830] Updated weights for policy 0, policy_version 150 (0.0009)
[2024-08-16 15:01:17,050][19830] Updated weights for policy 0, policy_version 160 (0.0009)
[2024-08-16 15:01:19,279][19830] Updated weights for policy 0, policy_version 170 (0.0009)
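The recurring "Fps is (...)" / "Avg episode reward" pairs from the runner (pid 09795) are the run's main progress signal: frame throughput over 10/60/300-second windows, cumulative env frames, learner sample count, and policy-lag statistics. To turn them into curves, a small parser sketch written against the exact line format above (the log file name is a placeholder):

    # Sketch: extract throughput and reward series from this log for plotting.
    import re

    fps_re = re.compile(r"Fps is \(10 sec: ([\d.]+).*?Total num frames: (\d+)")
    rew_re = re.compile(r"Avg episode reward: \[\(0, '([-\d.]+)'\)\]")

    frames, fps, rewards = [], [], []
    with open("sf_log.txt") as f:  # placeholder path for this log
        for line in f:
            if m := fps_re.search(line):
                fps.append(float(m.group(1)))
                frames.append(int(m.group(2)))
            elif m := rew_re.search(line):
                rewards.append(float(m.group(1)))

    print(len(fps), "throughput samples; last avg reward:",
          rewards[-1] if rewards else None)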
[2024-08-16 15:01:19,423][09795] Fps is (10 sec: 17612.8, 60 sec: 17408.0, 300 sec: 17408.0). Total num frames: 696320. Throughput: 0: 4074.9. Samples: 162998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 15:01:19,424][09795] Avg episode reward: [(0, '4.887')]
[2024-08-16 15:01:19,428][19817] Saving new best policy, reward=4.887!
[2024-08-16 15:01:21,441][19830] Updated weights for policy 0, policy_version 180 (0.0008)
[2024-08-16 15:01:23,683][19830] Updated weights for policy 0, policy_version 190 (0.0008)
[2024-08-16 15:01:24,423][09795] Fps is (10 sec: 18432.1, 60 sec: 17567.3, 300 sec: 17567.3). Total num frames: 790528. Throughput: 0: 4244.1. Samples: 190986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:01:24,424][09795] Avg episode reward: [(0, '4.891')]
[2024-08-16 15:01:24,425][19817] Saving new best policy, reward=4.891!
[2024-08-16 15:01:25,941][19830] Updated weights for policy 0, policy_version 200 (0.0009)
[2024-08-16 15:01:28,119][19830] Updated weights for policy 0, policy_version 210 (0.0009)
[2024-08-16 15:01:29,423][09795] Fps is (10 sec: 18432.1, 60 sec: 17612.8, 300 sec: 17612.8). Total num frames: 880640. Throughput: 0: 4556.1. Samples: 218558. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-08-16 15:01:29,424][09795] Avg episode reward: [(0, '5.945')]
[2024-08-16 15:01:29,457][19817] Saving new best policy, reward=5.945!
[2024-08-16 15:01:30,374][19830] Updated weights for policy 0, policy_version 220 (0.0009)
[2024-08-16 15:01:32,570][19830] Updated weights for policy 0, policy_version 230 (0.0009)
[2024-08-16 15:01:34,423][09795] Fps is (10 sec: 18432.0, 60 sec: 17724.5, 300 sec: 17724.5). Total num frames: 974848. Throughput: 0: 4574.5. Samples: 232306. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 15:01:34,425][09795] Avg episode reward: [(0, '7.048')]
[2024-08-16 15:01:34,425][19817] Saving new best policy, reward=7.048!
[2024-08-16 15:01:34,853][19830] Updated weights for policy 0, policy_version 240 (0.0009)
[2024-08-16 15:01:37,021][19830] Updated weights for policy 0, policy_version 250 (0.0008)
[2024-08-16 15:01:39,262][19830] Updated weights for policy 0, policy_version 260 (0.0008)
[2024-08-16 15:01:39,423][09795] Fps is (10 sec: 18431.9, 60 sec: 17749.3, 300 sec: 17749.3). Total num frames: 1064960. Throughput: 0: 4550.3. Samples: 260002. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 15:01:39,424][09795] Avg episode reward: [(0, '6.964')]
[2024-08-16 15:01:41,457][19830] Updated weights for policy 0, policy_version 270 (0.0008)
[2024-08-16 15:01:43,637][19830] Updated weights for policy 0, policy_version 280 (0.0009)
[2024-08-16 15:01:44,423][09795] Fps is (10 sec: 18431.9, 60 sec: 18295.5, 300 sec: 17833.3). Total num frames: 1159168. Throughput: 0: 4537.0. Samples: 287726. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 15:01:44,424][09795] Avg episode reward: [(0, '9.357')]
[2024-08-16 15:01:44,425][19817] Saving new best policy, reward=9.357!
[2024-08-16 15:01:45,922][19830] Updated weights for policy 0, policy_version 290 (0.0009)
[2024-08-16 15:01:48,081][19830] Updated weights for policy 0, policy_version 300 (0.0008)
[2024-08-16 15:01:49,423][09795] Fps is (10 sec: 18432.1, 60 sec: 18295.5, 300 sec: 17846.9). Total num frames: 1249280. Throughput: 0: 4518.2. Samples: 301476. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 15:01:49,424][09795] Avg episode reward: [(0, '7.852')]
[2024-08-16 15:01:50,349][19830] Updated weights for policy 0, policy_version 310 (0.0008)
[2024-08-16 15:01:52,544][19830] Updated weights for policy 0, policy_version 320 (0.0008)
[2024-08-16 15:01:54,423][09795] Fps is (10 sec: 18432.0, 60 sec: 18227.2, 300 sec: 17913.2). Total num frames: 1343488. Throughput: 0: 4549.2. Samples: 329246. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:01:54,424][09795] Avg episode reward: [(0, '9.561')]
[2024-08-16 15:01:54,425][19817] Saving new best policy, reward=9.561!
[2024-08-16 15:01:54,789][19830] Updated weights for policy 0, policy_version 330 (0.0008)
[2024-08-16 15:01:56,970][19830] Updated weights for policy 0, policy_version 340 (0.0009)
[2024-08-16 15:01:59,148][19830] Updated weights for policy 0, policy_version 350 (0.0009)
[2024-08-16 15:01:59,423][09795] Fps is (10 sec: 18432.0, 60 sec: 18158.9, 300 sec: 17920.0). Total num frames: 1433600. Throughput: 0: 4596.9. Samples: 357040. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:01:59,424][09795] Avg episode reward: [(0, '10.780')]
[2024-08-16 15:01:59,428][19817] Saving new best policy, reward=10.780!
[2024-08-16 15:02:01,442][19830] Updated weights for policy 0, policy_version 360 (0.0008)
[2024-08-16 15:02:03,728][19830] Updated weights for policy 0, policy_version 370 (0.0009)
[2024-08-16 15:02:04,423][09795] Fps is (10 sec: 18021.9, 60 sec: 18090.6, 300 sec: 17926.0). Total num frames: 1523712. Throughput: 0: 4618.8. Samples: 370844. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 15:02:04,424][09795] Avg episode reward: [(0, '10.450')]
[2024-08-16 15:02:06,271][19830] Updated weights for policy 0, policy_version 380 (0.0009)
[2024-08-16 15:02:08,626][19830] Updated weights for policy 0, policy_version 390 (0.0009)
[2024-08-16 15:02:09,423][09795] Fps is (10 sec: 17612.7, 60 sec: 18158.9, 300 sec: 17885.9). Total num frames: 1609728. Throughput: 0: 4559.1. Samples: 396144. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-08-16 15:02:09,424][09795] Avg episode reward: [(0, '11.546')]
[2024-08-16 15:02:09,427][19817] Saving new best policy, reward=11.546!
[2024-08-16 15:02:11,006][19830] Updated weights for policy 0, policy_version 400 (0.0009)
[2024-08-16 15:02:13,467][19830] Updated weights for policy 0, policy_version 410 (0.0009)
[2024-08-16 15:02:14,423][09795] Fps is (10 sec: 16794.0, 60 sec: 18090.7, 300 sec: 17806.8). Total num frames: 1691648. Throughput: 0: 4514.2. Samples: 421696. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 15:02:14,424][09795] Avg episode reward: [(0, '13.300')]
[2024-08-16 15:02:14,425][19817] Saving new best policy, reward=13.300!
[2024-08-16 15:02:16,030][19830] Updated weights for policy 0, policy_version 420 (0.0010)
[2024-08-16 15:02:18,673][19830] Updated weights for policy 0, policy_version 430 (0.0009)
[2024-08-16 15:02:19,423][09795] Fps is (10 sec: 16384.2, 60 sec: 17954.2, 300 sec: 17735.7). Total num frames: 1773568. Throughput: 0: 4463.8. Samples: 433178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:02:19,424][09795] Avg episode reward: [(0, '10.660')]
[2024-08-16 15:02:21,021][19830] Updated weights for policy 0, policy_version 440 (0.0009)
[2024-08-16 15:02:23,368][19830] Updated weights for policy 0, policy_version 450 (0.0008)
[2024-08-16 15:02:24,423][09795] Fps is (10 sec: 16793.6, 60 sec: 17817.6, 300 sec: 17710.3). Total num frames: 1859584. Throughput: 0: 4417.4. Samples: 458786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:02:24,424][09795] Avg episode reward: [(0, '15.219')]
[2024-08-16 15:02:24,465][19817] Saving new best policy, reward=15.219!
[2024-08-16 15:02:25,656][19830] Updated weights for policy 0, policy_version 460 (0.0010)
[2024-08-16 15:02:28,006][19830] Updated weights for policy 0, policy_version 470 (0.0009)
[2024-08-16 15:02:29,423][09795] Fps is (10 sec: 17612.6, 60 sec: 17817.6, 300 sec: 17724.5). Total num frames: 1949696. Throughput: 0: 4392.5. Samples: 485388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 15:02:29,424][09795] Avg episode reward: [(0, '15.121')]
[2024-08-16 15:02:29,428][19817] Saving /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/checkpoint_p0/checkpoint_000000476_1949696.pth...
[2024-08-16 15:02:30,305][19830] Updated weights for policy 0, policy_version 480 (0.0009)
[2024-08-16 15:02:32,595][19830] Updated weights for policy 0, policy_version 490 (0.0009)
[2024-08-16 15:02:34,423][09795] Fps is (10 sec: 17613.0, 60 sec: 17681.1, 300 sec: 17701.9). Total num frames: 2035712. Throughput: 0: 4380.4. Samples: 498594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:02:34,424][09795] Avg episode reward: [(0, '17.719')]
[2024-08-16 15:02:34,425][19817] Saving new best policy, reward=17.719!
[2024-08-16 15:02:34,965][19830] Updated weights for policy 0, policy_version 500 (0.0009)
[2024-08-16 15:02:37,288][19830] Updated weights for policy 0, policy_version 510 (0.0009)
[2024-08-16 15:02:39,423][09795] Fps is (10 sec: 17203.4, 60 sec: 17612.8, 300 sec: 17681.1). Total num frames: 2121728. Throughput: 0: 4347.0. Samples: 524862. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 15:02:39,424][09795] Avg episode reward: [(0, '16.486')]
[2024-08-16 15:02:39,678][19830] Updated weights for policy 0, policy_version 520 (0.0009)
[2024-08-16 15:02:41,968][19830] Updated weights for policy 0, policy_version 530 (0.0009)
[2024-08-16 15:02:44,337][19830] Updated weights for policy 0, policy_version 540 (0.0009)
[2024-08-16 15:02:44,423][09795] Fps is (10 sec: 17612.7, 60 sec: 17544.5, 300 sec: 17694.7). Total num frames: 2211840. Throughput: 0: 4313.9. Samples: 551164. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 15:02:44,424][09795] Avg episode reward: [(0, '15.403')]
[2024-08-16 15:02:46,708][19830] Updated weights for policy 0, policy_version 550 (0.0009)
[2024-08-16 15:02:49,383][19830] Updated weights for policy 0, policy_version 560 (0.0010)
[2024-08-16 15:02:49,423][09795] Fps is (10 sec: 17203.1, 60 sec: 17408.0, 300 sec: 17644.3). Total num frames: 2293760. Throughput: 0: 4293.9. Samples: 564066. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 15:02:49,424][09795] Avg episode reward: [(0, '15.907')]
[2024-08-16 15:02:52,067][19830] Updated weights for policy 0, policy_version 570 (0.0010)
[2024-08-16 15:02:54,365][19830] Updated weights for policy 0, policy_version 580 (0.0008)
[2024-08-16 15:02:54,423][09795] Fps is (10 sec: 16384.0, 60 sec: 17203.2, 300 sec: 17597.6). Total num frames: 2375680. Throughput: 0: 4246.0. Samples: 587212. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:02:54,424][09795] Avg episode reward: [(0, '19.519')]
[2024-08-16 15:02:54,425][19817] Saving new best policy, reward=19.519!
[2024-08-16 15:02:56,821][19830] Updated weights for policy 0, policy_version 590 (0.0009)
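The checkpoint name above encodes both training counters: checkpoint_000000476_1949696.pth is policy version 476 at 1,949,696 env frames, and 476 × 4,096 = 1,949,696, so each policy version corresponds to one 4,096-frame training batch (the later checkpoints, 943 × 4,096 = 3,862,528 and 978 × 4,096 = 4,005,888, fit the same pattern). A sketch for peeking inside one (the dict key names are assumptions about the checkpoint format, so print the real keys before relying on them):

    import torch

    ckpt = torch.load(
        "train_dir/default_experiment/checkpoint_p0/checkpoint_000000476_1949696.pth",
        map_location="cpu",  # PyTorch >= 2.6 may additionally need weights_only=False
    )
    print(sorted(ckpt.keys()))                            # inspect the actual layout
    print(ckpt.get("env_steps"), ckpt.get("train_step"))  # assumed counter keys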
[2024-08-16 15:02:59,423][09795] Fps is (10 sec: 15974.3, 60 sec: 16998.4, 300 sec: 17525.0). Total num frames: 2453504. Throughput: 0: 4218.4. Samples: 611524. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-08-16 15:02:59,424][09795] Avg episode reward: [(0, '18.098')]
[2024-08-16 15:02:59,662][19830] Updated weights for policy 0, policy_version 600 (0.0010)
[2024-08-16 15:03:02,145][19830] Updated weights for policy 0, policy_version 610 (0.0009)
[2024-08-16 15:03:04,423][09795] Fps is (10 sec: 15564.5, 60 sec: 16793.6, 300 sec: 17457.4). Total num frames: 2531328. Throughput: 0: 4236.4. Samples: 623818. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 15:03:04,424][09795] Avg episode reward: [(0, '16.211')]
[2024-08-16 15:03:04,946][19830] Updated weights for policy 0, policy_version 620 (0.0010)
[2024-08-16 15:03:07,359][19830] Updated weights for policy 0, policy_version 630 (0.0010)
[2024-08-16 15:03:09,423][09795] Fps is (10 sec: 15564.8, 60 sec: 16657.1, 300 sec: 17394.3). Total num frames: 2609152. Throughput: 0: 4190.5. Samples: 647360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 15:03:09,424][09795] Avg episode reward: [(0, '16.607')]
[2024-08-16 15:03:09,991][19830] Updated weights for policy 0, policy_version 640 (0.0010)
[2024-08-16 15:03:13,072][19830] Updated weights for policy 0, policy_version 650 (0.0011)
[2024-08-16 15:03:14,423][09795] Fps is (10 sec: 15155.6, 60 sec: 16520.6, 300 sec: 17308.9). Total num frames: 2682880. Throughput: 0: 4073.9. Samples: 668714. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 15:03:14,424][09795] Avg episode reward: [(0, '18.533')]
[2024-08-16 15:03:15,651][19830] Updated weights for policy 0, policy_version 660 (0.0010)
[2024-08-16 15:03:18,195][19830] Updated weights for policy 0, policy_version 670 (0.0010)
[2024-08-16 15:03:19,423][09795] Fps is (10 sec: 15155.2, 60 sec: 16452.2, 300 sec: 17254.4). Total num frames: 2760704. Throughput: 0: 4053.6. Samples: 681006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 15:03:19,424][09795] Avg episode reward: [(0, '20.754')]
[2024-08-16 15:03:19,429][19817] Saving new best policy, reward=20.754!
[2024-08-16 15:03:21,009][19830] Updated weights for policy 0, policy_version 680 (0.0010)
[2024-08-16 15:03:23,683][19830] Updated weights for policy 0, policy_version 690 (0.0011)
[2024-08-16 15:03:24,423][09795] Fps is (10 sec: 15154.9, 60 sec: 16247.4, 300 sec: 17178.4). Total num frames: 2834432. Throughput: 0: 3975.9. Samples: 703780. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-08-16 15:03:24,425][09795] Avg episode reward: [(0, '22.080')]
[2024-08-16 15:03:24,426][19817] Saving new best policy, reward=22.080!
[2024-08-16 15:03:26,418][19830] Updated weights for policy 0, policy_version 700 (0.0010)
[2024-08-16 15:03:29,121][19830] Updated weights for policy 0, policy_version 710 (0.0009)
[2024-08-16 15:03:29,423][09795] Fps is (10 sec: 15154.8, 60 sec: 16042.6, 300 sec: 17130.9). Total num frames: 2912256. Throughput: 0: 3893.7. Samples: 726382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 15:03:29,425][09795] Avg episode reward: [(0, '20.974')]
[2024-08-16 15:03:31,752][19830] Updated weights for policy 0, policy_version 720 (0.0009)
[2024-08-16 15:03:34,424][09795] Fps is (10 sec: 15153.4, 60 sec: 15837.5, 300 sec: 17062.6). Total num frames: 2985984. Throughput: 0: 3860.5. Samples: 737792. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 15:03:34,426][09795] Avg episode reward: [(0, '21.284')]
[2024-08-16 15:03:34,553][19830] Updated weights for policy 0, policy_version 730 (0.0010)
[2024-08-16 15:03:37,150][19830] Updated weights for policy 0, policy_version 740 (0.0010)
[2024-08-16 15:03:39,423][09795] Fps is (10 sec: 15154.8, 60 sec: 15701.2, 300 sec: 17021.1). Total num frames: 3063808. Throughput: 0: 3855.2. Samples: 760696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 15:03:39,425][09795] Avg episode reward: [(0, '21.944')]
[2024-08-16 15:03:39,844][19830] Updated weights for policy 0, policy_version 750 (0.0010)
[2024-08-16 15:03:42,687][19830] Updated weights for policy 0, policy_version 760 (0.0011)
[2024-08-16 15:03:44,423][09795] Fps is (10 sec: 15157.1, 60 sec: 15428.2, 300 sec: 16959.6). Total num frames: 3137536. Throughput: 0: 3809.6. Samples: 782956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 15:03:44,424][09795] Avg episode reward: [(0, '25.046')]
[2024-08-16 15:03:44,425][19817] Saving new best policy, reward=25.046!
[2024-08-16 15:03:45,540][19830] Updated weights for policy 0, policy_version 770 (0.0010)
[2024-08-16 15:03:48,303][19830] Updated weights for policy 0, policy_version 780 (0.0010)
[2024-08-16 15:03:49,423][09795] Fps is (10 sec: 14746.4, 60 sec: 15291.7, 300 sec: 16901.4). Total num frames: 3211264. Throughput: 0: 3763.1. Samples: 793156. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 15:03:49,424][09795] Avg episode reward: [(0, '23.080')]
[2024-08-16 15:03:50,601][19830] Updated weights for policy 0, policy_version 790 (0.0009)
[2024-08-16 15:03:52,885][19830] Updated weights for policy 0, policy_version 800 (0.0009)
[2024-08-16 15:03:54,423][09795] Fps is (10 sec: 16384.0, 60 sec: 15428.2, 300 sec: 16930.1). Total num frames: 3301376. Throughput: 0: 3822.4. Samples: 819368. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 15:03:54,424][09795] Avg episode reward: [(0, '24.139')]
[2024-08-16 15:03:55,423][19830] Updated weights for policy 0, policy_version 810 (0.0010)
[2024-08-16 15:03:58,170][19830] Updated weights for policy 0, policy_version 820 (0.0011)
[2024-08-16 15:03:59,423][09795] Fps is (10 sec: 16793.2, 60 sec: 15428.2, 300 sec: 16896.0). Total num frames: 3379200. Throughput: 0: 3869.4. Samples: 842838. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 15:03:59,425][09795] Avg episode reward: [(0, '20.764')]
[2024-08-16 15:04:00,629][19830] Updated weights for policy 0, policy_version 830 (0.0010)
[2024-08-16 15:04:02,983][19830] Updated weights for policy 0, policy_version 840 (0.0009)
[2024-08-16 15:04:04,423][09795] Fps is (10 sec: 15974.5, 60 sec: 15496.6, 300 sec: 16883.5). Total num frames: 3461120. Throughput: 0: 3880.0. Samples: 855606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 15:04:04,424][09795] Avg episode reward: [(0, '19.982')]
[2024-08-16 15:04:05,383][19830] Updated weights for policy 0, policy_version 850 (0.0009)
[2024-08-16 15:04:07,772][19830] Updated weights for policy 0, policy_version 860 (0.0009)
[2024-08-16 15:04:09,423][09795] Fps is (10 sec: 16793.2, 60 sec: 15632.9, 300 sec: 16891.1). Total num frames: 3547136. Throughput: 0: 3942.0. Samples: 881170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 15:04:09,425][09795] Avg episode reward: [(0, '22.320')]
[2024-08-16 15:04:10,261][19830] Updated weights for policy 0, policy_version 870 (0.0010)
[2024-08-16 15:04:12,692][19830] Updated weights for policy 0, policy_version 880 (0.0009)
[2024-08-16 15:04:14,423][09795] Fps is (10 sec: 16793.5, 60 sec: 15769.5, 300 sec: 16879.3). Total num frames: 3629056. Throughput: 0: 3994.8. Samples: 906146. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 15:04:14,425][09795] Avg episode reward: [(0, '21.940')]
[2024-08-16 15:04:15,376][19830] Updated weights for policy 0, policy_version 890 (0.0010)
[2024-08-16 15:04:18,126][19830] Updated weights for policy 0, policy_version 900 (0.0010)
[2024-08-16 15:04:19,423][09795] Fps is (10 sec: 15975.2, 60 sec: 15769.6, 300 sec: 16849.5). Total num frames: 3706880. Throughput: 0: 3986.6. Samples: 917182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 15:04:19,424][09795] Avg episode reward: [(0, '23.776')]
[2024-08-16 15:04:20,599][19830] Updated weights for policy 0, policy_version 910 (0.0010)
[2024-08-16 15:04:23,316][19830] Updated weights for policy 0, policy_version 920 (0.0010)
[2024-08-16 15:04:24,423][09795] Fps is (10 sec: 15565.0, 60 sec: 15837.9, 300 sec: 16820.9). Total num frames: 3784704. Throughput: 0: 3997.4. Samples: 940576. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 15:04:24,424][09795] Avg episode reward: [(0, '23.407')]
[2024-08-16 15:04:25,887][19830] Updated weights for policy 0, policy_version 930 (0.0010)
[2024-08-16 15:04:28,613][19830] Updated weights for policy 0, policy_version 940 (0.0009)
[2024-08-16 15:04:29,423][09795] Fps is (10 sec: 15564.8, 60 sec: 15838.0, 300 sec: 16793.6). Total num frames: 3862528. Throughput: 0: 4019.9. Samples: 963852. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 15:04:29,424][09795] Avg episode reward: [(0, '19.884')]
[2024-08-16 15:04:29,429][19817] Saving /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/checkpoint_p0/checkpoint_000000943_3862528.pth...
[2024-08-16 15:04:31,179][19830] Updated weights for policy 0, policy_version 950 (0.0010)
[2024-08-16 15:04:33,747][19830] Updated weights for policy 0, policy_version 960 (0.0010)
[2024-08-16 15:04:34,423][09795] Fps is (10 sec: 15564.9, 60 sec: 15906.5, 300 sec: 16767.5). Total num frames: 3940352. Throughput: 0: 4055.8. Samples: 975668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 15:04:34,424][09795] Avg episode reward: [(0, '20.892')]
[2024-08-16 15:04:36,281][19830] Updated weights for policy 0, policy_version 970 (0.0009)
[2024-08-16 15:04:38,262][19817] Stopping Batcher_0...
[2024-08-16 15:04:38,263][19817] Loop batcher_evt_loop terminating...
[2024-08-16 15:04:38,263][19817] Saving /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-08-16 15:04:38,267][09795] Component Batcher_0 stopped!
[2024-08-16 15:04:38,270][09795] Component RolloutWorker_w3 process died already! Don't wait for it.
[2024-08-16 15:04:38,276][19835] Stopping RolloutWorker_w2...
[2024-08-16 15:04:38,276][19836] Stopping RolloutWorker_w6...
[2024-08-16 15:04:38,276][19831] Stopping RolloutWorker_w0...
[2024-08-16 15:04:38,276][19836] Loop rollout_proc6_evt_loop terminating...
[2024-08-16 15:04:38,277][19838] Stopping RolloutWorker_w5...
[2024-08-16 15:04:38,277][19831] Loop rollout_proc0_evt_loop terminating...
[2024-08-16 15:04:38,277][19835] Loop rollout_proc2_evt_loop terminating...
[2024-08-16 15:04:38,277][19838] Loop rollout_proc5_evt_loop terminating...
[2024-08-16 15:04:38,277][19837] Stopping RolloutWorker_w7...
[2024-08-16 15:04:38,278][19837] Loop rollout_proc7_evt_loop terminating...
[2024-08-16 15:04:38,276][09795] Component RolloutWorker_w2 stopped!
[2024-08-16 15:04:38,280][19834] Stopping RolloutWorker_w4...
[2024-08-16 15:04:38,282][19833] Stopping RolloutWorker_w1...
[2024-08-16 15:04:38,282][19834] Loop rollout_proc4_evt_loop terminating...
[2024-08-16 15:04:38,282][19833] Loop rollout_proc1_evt_loop terminating...
[2024-08-16 15:04:38,280][09795] Component RolloutWorker_w6 stopped!
[2024-08-16 15:04:38,284][19830] Weights refcount: 2 0
[2024-08-16 15:04:38,284][09795] Component RolloutWorker_w0 stopped!
[2024-08-16 15:04:38,285][19830] Stopping InferenceWorker_p0-w0...
[2024-08-16 15:04:38,286][19830] Loop inference_proc0-0_evt_loop terminating...
[2024-08-16 15:04:38,285][09795] Component RolloutWorker_w5 stopped!
[2024-08-16 15:04:38,288][09795] Component RolloutWorker_w7 stopped!
[2024-08-16 15:04:38,290][09795] Component RolloutWorker_w4 stopped!
[2024-08-16 15:04:38,293][09795] Component RolloutWorker_w1 stopped!
[2024-08-16 15:04:38,295][09795] Component InferenceWorker_p0-w0 stopped!
[2024-08-16 15:04:38,333][19817] Removing /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/checkpoint_p0/checkpoint_000000476_1949696.pth
[2024-08-16 15:04:38,341][19817] Saving /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-08-16 15:04:38,422][19817] Stopping LearnerWorker_p0...
[2024-08-16 15:04:38,423][19817] Loop learner_proc0_evt_loop terminating...
[2024-08-16 15:04:38,423][09795] Component LearnerWorker_p0 stopped!
[2024-08-16 15:04:38,425][09795] Waiting for process learner_proc0 to stop...
[2024-08-16 15:04:39,321][09795] Waiting for process inference_proc0-0 to join...
[2024-08-16 15:04:39,322][09795] Waiting for process rollout_proc0 to join...
[2024-08-16 15:04:39,322][09795] Waiting for process rollout_proc1 to join...
[2024-08-16 15:04:39,323][09795] Waiting for process rollout_proc2 to join...
[2024-08-16 15:04:39,323][09795] Waiting for process rollout_proc3 to join...
[2024-08-16 15:04:39,324][09795] Waiting for process rollout_proc4 to join...
[2024-08-16 15:04:39,324][09795] Waiting for process rollout_proc5 to join...
[2024-08-16 15:04:39,324][09795] Waiting for process rollout_proc6 to join...
[2024-08-16 15:04:39,325][09795] Waiting for process rollout_proc7 to join...
[2024-08-16 15:04:39,325][09795] Batcher 0 profile tree view:
batching: 12.0359, releasing_batches: 0.0294
[2024-08-16 15:04:39,325][09795] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 3.0779
update_model: 3.7110
  weight_update: 0.0010
one_step: 0.0030
  handle_policy_step: 222.3845
    deserialize: 8.3652, stack: 1.3086, obs_to_device_normalize: 54.2507, forward: 114.7561, send_messages: 10.5792
    prepare_outputs: 24.1736
      to_cpu: 14.5914
[2024-08-16 15:04:39,326][09795] Learner 0 profile tree view:
misc: 0.0043, prepare_batch: 12.4152
train: 39.1134
  epoch_init: 0.0042, minibatch_init: 0.0059, losses_postprocess: 0.2538, kl_divergence: 0.2299, after_optimizer: 19.6184
  calculate_losses: 13.0556
    losses_init: 0.0022, forward_head: 0.8085, bptt_initial: 9.7676, tail: 0.5032, advantages_returns: 0.1281, losses: 0.8480
    bptt: 0.8324
      bptt_forward_core: 0.7901
  update: 5.5965
    clip: 0.6016
[2024-08-16 15:04:39,326][09795] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.1335, enqueue_policy_requests: 8.5325, env_step: 100.0188, overhead: 11.2157, complete_rollouts: 0.2483
save_policy_outputs: 8.6007
  split_output_tensors: 4.1162
[2024-08-16 15:04:39,326][09795] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.1288, enqueue_policy_requests: 8.5215, env_step: 100.0412, overhead: 11.4728, complete_rollouts: 0.2583
save_policy_outputs: 8.5721
  split_output_tensors: 4.0914
[2024-08-16 15:04:39,327][09795] Loop Runner_EvtLoop terminating...
[2024-08-16 15:04:39,327][09795] Runner profile tree view:
main_loop: 245.5234
[2024-08-16 15:04:39,327][09795] Collected {0: 4005888}, FPS: 16315.7
[2024-08-16 15:07:42,139][09795] Loading existing experiment configuration from /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/config.json
[2024-08-16 15:07:42,140][09795] Overriding arg 'num_workers' with value 1 passed from command line
[2024-08-16 15:07:42,140][09795] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-08-16 15:07:42,141][09795] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-08-16 15:07:42,141][09795] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-08-16 15:07:42,141][09795] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-08-16 15:07:42,141][09795] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2024-08-16 15:07:42,142][09795] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-08-16 15:07:42,142][09795] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2024-08-16 15:07:42,142][09795] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2024-08-16 15:07:42,143][09795] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-08-16 15:07:42,143][09795] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-08-16 15:07:42,144][09795] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-08-16 15:07:42,144][09795] Adding new argument 'enjoy_script'=None that is not in the saved config file!
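Training ended at policy version 978 with 4,005,888 frames collected, and the summary is self-consistent: 4,005,888 frames / 245.5234 s main loop ≈ 16,315.7 FPS, exactly the reported figure. The entries below come from a separate evaluation ("enjoy") run about three minutes later, with the argument overrides listed above. A sketch of an equivalent invocation, assuming the standard enjoy entry point (the env name is again an assumption):

    import sys
    from sf_examples.vizdoom.enjoy_vizdoom import main

    sys.argv = [
        "enjoy_vizdoom",
        "--env=doom_health_gathering_supreme",  # assumed scenario
        "--train_dir=/media/nguyen-duc-huy/E/Code/Deep_RL/train_dir",
        "--experiment=default_experiment",
        "--num_workers=1",
        "--no_render",
        "--save_video",
        "--max_num_episodes=10",
    ]
    sys.exit(main())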
[2024-08-16 15:07:42,144][09795] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-08-16 15:07:42,162][09795] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 15:07:42,164][09795] RunningMeanStd input shape: (3, 72, 128)
[2024-08-16 15:07:42,165][09795] RunningMeanStd input shape: (1,)
[2024-08-16 15:07:42,175][09795] ConvEncoder: input_channels=3
[2024-08-16 15:07:42,252][09795] Conv encoder output size: 512
[2024-08-16 15:07:42,253][09795] Policy head output size: 512
[2024-08-16 15:07:43,859][09795] Loading state from checkpoint /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-08-16 15:07:44,321][09795] Num frames 100...
[2024-08-16 15:07:44,402][09795] Num frames 200...
[2024-08-16 15:07:44,485][09795] Num frames 300...
[2024-08-16 15:07:44,564][09795] Num frames 400...
[2024-08-16 15:07:44,642][09795] Num frames 500...
[2024-08-16 15:07:44,720][09795] Num frames 600...
[2024-08-16 15:07:44,820][09795] Num frames 700...
[2024-08-16 15:07:44,906][09795] Num frames 800...
[2024-08-16 15:07:44,991][09795] Num frames 900...
[2024-08-16 15:07:45,073][09795] Num frames 1000...
[2024-08-16 15:07:45,157][09795] Num frames 1100...
[2024-08-16 15:07:45,242][09795] Num frames 1200...
[2024-08-16 15:07:45,343][09795] Avg episode rewards: #0: 26.480, true rewards: #0: 12.480
[2024-08-16 15:07:45,344][09795] Avg episode reward: 26.480, avg true_objective: 12.480
[2024-08-16 15:07:45,388][09795] Num frames 1300...
[2024-08-16 15:07:45,468][09795] Num frames 1400...
[2024-08-16 15:07:45,549][09795] Num frames 1500...
[2024-08-16 15:07:45,630][09795] Num frames 1600...
[2024-08-16 15:07:45,707][09795] Num frames 1700...
[2024-08-16 15:07:45,786][09795] Num frames 1800...
[2024-08-16 15:07:45,866][09795] Num frames 1900...
[2024-08-16 15:07:45,948][09795] Num frames 2000...
[2024-08-16 15:07:46,029][09795] Num frames 2100...
[2024-08-16 15:07:46,109][09795] Num frames 2200...
[2024-08-16 15:07:46,185][09795] Num frames 2300...
[2024-08-16 15:07:46,264][09795] Num frames 2400...
[2024-08-16 15:07:46,345][09795] Num frames 2500...
[2024-08-16 15:07:46,426][09795] Num frames 2600...
[2024-08-16 15:07:46,507][09795] Num frames 2700...
[2024-08-16 15:07:46,586][09795] Num frames 2800...
[2024-08-16 15:07:46,665][09795] Num frames 2900...
[2024-08-16 15:07:46,756][09795] Avg episode rewards: #0: 30.720, true rewards: #0: 14.720
[2024-08-16 15:07:46,757][09795] Avg episode reward: 30.720, avg true_objective: 14.720
[2024-08-16 15:07:46,807][09795] Num frames 3000...
[2024-08-16 15:07:46,885][09795] Num frames 3100...
[2024-08-16 15:07:46,963][09795] Num frames 3200...
[2024-08-16 15:07:47,037][09795] Num frames 3300...
[2024-08-16 15:07:47,115][09795] Num frames 3400...
[2024-08-16 15:07:47,193][09795] Num frames 3500...
[2024-08-16 15:07:47,271][09795] Num frames 3600...
[2024-08-16 15:07:47,336][09795] Avg episode rewards: #0: 24.053, true rewards: #0: 12.053
[2024-08-16 15:07:47,337][09795] Avg episode reward: 24.053, avg true_objective: 12.053
[2024-08-16 15:07:47,400][09795] Num frames 3700...
[2024-08-16 15:07:47,476][09795] Num frames 3800...
[2024-08-16 15:07:47,551][09795] Num frames 3900...
[2024-08-16 15:07:47,632][09795] Num frames 4000...
[2024-08-16 15:07:47,708][09795] Num frames 4100...
[2024-08-16 15:07:47,783][09795] Num frames 4200...
[2024-08-16 15:07:47,859][09795] Num frames 4300...
[2024-08-16 15:07:47,938][09795] Num frames 4400...
[2024-08-16 15:07:48,017][09795] Num frames 4500...
[2024-08-16 15:07:48,094][09795] Num frames 4600...
[2024-08-16 15:07:48,168][09795] Num frames 4700...
[2024-08-16 15:07:48,244][09795] Num frames 4800...
[2024-08-16 15:07:48,319][09795] Num frames 4900...
[2024-08-16 15:07:48,395][09795] Num frames 5000...
[2024-08-16 15:07:48,515][09795] Avg episode rewards: #0: 27.220, true rewards: #0: 12.720
[2024-08-16 15:07:48,516][09795] Avg episode reward: 27.220, avg true_objective: 12.720
[2024-08-16 15:07:48,526][09795] Num frames 5100...
[2024-08-16 15:07:48,600][09795] Num frames 5200...
[2024-08-16 15:07:48,675][09795] Num frames 5300...
[2024-08-16 15:07:48,751][09795] Num frames 5400...
[2024-08-16 15:07:48,830][09795] Num frames 5500...
[2024-08-16 15:07:48,911][09795] Num frames 5600...
[2024-08-16 15:07:48,988][09795] Num frames 5700...
[2024-08-16 15:07:49,067][09795] Num frames 5800...
[2024-08-16 15:07:49,142][09795] Num frames 5900...
[2024-08-16 15:07:49,221][09795] Num frames 6000...
[2024-08-16 15:07:49,300][09795] Num frames 6100...
[2024-08-16 15:07:49,380][09795] Num frames 6200...
[2024-08-16 15:07:49,459][09795] Num frames 6300...
[2024-08-16 15:07:49,541][09795] Num frames 6400...
[2024-08-16 15:07:49,621][09795] Num frames 6500...
[2024-08-16 15:07:49,700][09795] Num frames 6600...
[2024-08-16 15:07:49,779][09795] Num frames 6700...
[2024-08-16 15:07:49,860][09795] Num frames 6800...
[2024-08-16 15:07:49,939][09795] Num frames 6900...
[2024-08-16 15:07:50,018][09795] Num frames 7000...
[2024-08-16 15:07:50,095][09795] Num frames 7100...
[2024-08-16 15:07:50,233][09795] Avg episode rewards: #0: 33.176, true rewards: #0: 14.376
[2024-08-16 15:07:50,234][09795] Avg episode reward: 33.176, avg true_objective: 14.376
[2024-08-16 15:07:50,245][09795] Num frames 7200...
[2024-08-16 15:07:50,332][09795] Num frames 7300...
[2024-08-16 15:07:50,415][09795] Num frames 7400...
[2024-08-16 15:07:50,501][09795] Num frames 7500...
[2024-08-16 15:07:50,594][09795] Num frames 7600...
[2024-08-16 15:07:50,651][09795] Avg episode rewards: #0: 28.673, true rewards: #0: 12.673
[2024-08-16 15:07:50,651][09795] Avg episode reward: 28.673, avg true_objective: 12.673
[2024-08-16 15:07:50,736][09795] Num frames 7700...
[2024-08-16 15:07:50,826][09795] Num frames 7800...
[2024-08-16 15:07:50,904][09795] Num frames 7900...
[2024-08-16 15:07:50,997][09795] Num frames 8000...
[2024-08-16 15:07:51,078][09795] Num frames 8100...
[2024-08-16 15:07:51,162][09795] Num frames 8200...
[2024-08-16 15:07:51,248][09795] Num frames 8300...
[2024-08-16 15:07:51,328][09795] Num frames 8400...
[2024-08-16 15:07:51,413][09795] Num frames 8500...
[2024-08-16 15:07:51,464][09795] Avg episode rewards: #0: 27.000, true rewards: #0: 12.143
[2024-08-16 15:07:51,465][09795] Avg episode reward: 27.000, avg true_objective: 12.143
[2024-08-16 15:07:51,555][09795] Num frames 8600...
[2024-08-16 15:07:51,644][09795] Num frames 8700...
[2024-08-16 15:07:51,723][09795] Num frames 8800...
[2024-08-16 15:07:51,802][09795] Num frames 8900...
[2024-08-16 15:07:51,885][09795] Num frames 9000...
[2024-08-16 15:07:51,965][09795] Num frames 9100...
[2024-08-16 15:07:52,080][09795] Avg episode rewards: #0: 25.466, true rewards: #0: 11.466
[2024-08-16 15:07:52,080][09795] Avg episode reward: 25.466, avg true_objective: 11.466
[2024-08-16 15:07:52,103][09795] Num frames 9200...
[2024-08-16 15:07:52,183][09795] Num frames 9300...
[2024-08-16 15:07:52,263][09795] Num frames 9400...
[2024-08-16 15:07:52,388][09795] Avg episode rewards: #0: 23.210, true rewards: #0: 10.543
[2024-08-16 15:07:52,389][09795] Avg episode reward: 23.210, avg true_objective: 10.543
[2024-08-16 15:07:52,398][09795] Num frames 9500...
[2024-08-16 15:07:52,472][09795] Num frames 9600...
[2024-08-16 15:07:52,549][09795] Num frames 9700...
[2024-08-16 15:07:52,626][09795] Num frames 9800...
[2024-08-16 15:07:52,708][09795] Num frames 9900...
[2024-08-16 15:07:52,794][09795] Num frames 10000...
[2024-08-16 15:07:52,883][09795] Num frames 10100...
[2024-08-16 15:07:52,966][09795] Num frames 10200...
[2024-08-16 15:07:53,040][09795] Num frames 10300...
[2024-08-16 15:07:53,118][09795] Num frames 10400...
[2024-08-16 15:07:53,200][09795] Num frames 10500...
[2024-08-16 15:07:53,275][09795] Num frames 10600...
[2024-08-16 15:07:53,349][09795] Num frames 10700...
[2024-08-16 15:07:53,426][09795] Num frames 10800...
[2024-08-16 15:07:53,502][09795] Num frames 10900...
[2024-08-16 15:07:53,580][09795] Num frames 11000...
[2024-08-16 15:07:53,659][09795] Num frames 11100...
[2024-08-16 15:07:53,735][09795] Num frames 11200...
[2024-08-16 15:07:53,813][09795] Num frames 11300...
[2024-08-16 15:07:53,890][09795] Num frames 11400...
[2024-08-16 15:07:53,964][09795] Num frames 11500...
[2024-08-16 15:07:54,024][09795] Avg episode rewards: #0: 25.908, true rewards: #0: 11.508
[2024-08-16 15:07:54,024][09795] Avg episode reward: 25.908, avg true_objective: 11.508
[2024-08-16 15:08:10,415][09795] Replay video saved to /media/nguyen-duc-huy/E/Code/Deep_RL/train_dir/default_experiment/replay.mp4!
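Over the 10 evaluation episodes the policy averaged 25.908 shaped reward against 11.508 true reward; the gap comes from the gathering_reward_shaping wrapper seen in the traceback earlier, which adds shaping terms on top of the scenario's raw objective. The log only prints running means, but per-episode true rewards can be recovered from consecutive averages, r_n = n * avg_n - (n - 1) * avg_{n-1}; a small sketch using the ten values logged above:

    # Recover per-episode true rewards from the running means in this log.
    avgs = [12.480, 14.720, 12.053, 12.720, 14.376,
            12.673, 12.143, 11.466, 10.543, 11.508]  # true-reward means, episodes 1..10
    per_episode = [
        (n + 1) * a - n * prev
        for n, (prev, a) in enumerate(zip([0.0] + avgs, avgs))
    ]
    print([round(r, 3) for r in per_episode])
    # episode 9 comes out to about 3.16, matching its short three-segment run above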