diff --git "a/sf_log.txt" "b/sf_log.txt" new file mode 100644--- /dev/null +++ "b/sf_log.txt" @@ -0,0 +1,1135 @@ +[2024-08-16 11:17:37,617][00349] Saving configuration to /content/train_dir/default_experiment/config.json... +[2024-08-16 11:17:37,622][00349] Rollout worker 0 uses device cpu +[2024-08-16 11:17:37,623][00349] Rollout worker 1 uses device cpu +[2024-08-16 11:17:37,624][00349] Rollout worker 2 uses device cpu +[2024-08-16 11:17:37,625][00349] Rollout worker 3 uses device cpu +[2024-08-16 11:17:37,627][00349] Rollout worker 4 uses device cpu +[2024-08-16 11:17:37,628][00349] Rollout worker 5 uses device cpu +[2024-08-16 11:17:37,630][00349] Rollout worker 6 uses device cpu +[2024-08-16 11:17:37,631][00349] Rollout worker 7 uses device cpu +[2024-08-16 11:17:37,786][00349] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-08-16 11:17:37,787][00349] InferenceWorker_p0-w0: min num requests: 2 +[2024-08-16 11:17:37,820][00349] Starting all processes... +[2024-08-16 11:17:37,822][00349] Starting process learner_proc0 +[2024-08-16 11:17:39,379][00349] Starting all processes... +[2024-08-16 11:17:39,389][00349] Starting process inference_proc0-0 +[2024-08-16 11:17:39,389][00349] Starting process rollout_proc0 +[2024-08-16 11:17:39,391][00349] Starting process rollout_proc1 +[2024-08-16 11:17:39,391][00349] Starting process rollout_proc2 +[2024-08-16 11:17:39,391][00349] Starting process rollout_proc3 +[2024-08-16 11:17:39,391][00349] Starting process rollout_proc4 +[2024-08-16 11:17:39,391][00349] Starting process rollout_proc5 +[2024-08-16 11:17:39,391][00349] Starting process rollout_proc6 +[2024-08-16 11:17:39,391][00349] Starting process rollout_proc7 +[2024-08-16 11:17:54,086][02922] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-08-16 11:17:54,096][02922] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2024-08-16 11:17:54,238][02922] Num visible devices: 1 +[2024-08-16 11:17:54,301][02922] Starting seed is not provided +[2024-08-16 11:17:54,302][02922] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-08-16 11:17:54,303][02922] Initializing actor-critic model on device cuda:0 +[2024-08-16 11:17:54,304][02922] RunningMeanStd input shape: (3, 72, 128) +[2024-08-16 11:17:54,307][02922] RunningMeanStd input shape: (1,) +[2024-08-16 11:17:54,498][02922] ConvEncoder: input_channels=3 +[2024-08-16 11:17:55,152][02938] Worker 1 uses CPU cores [1] +[2024-08-16 11:17:55,258][02943] Worker 7 uses CPU cores [1] +[2024-08-16 11:17:55,358][02941] Worker 6 uses CPU cores [0] +[2024-08-16 11:17:55,408][02939] Worker 3 uses CPU cores [1] +[2024-08-16 11:17:55,420][02936] Worker 0 uses CPU cores [0] +[2024-08-16 11:17:55,459][02937] Worker 2 uses CPU cores [0] +[2024-08-16 11:17:55,464][02935] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-08-16 11:17:55,464][02935] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2024-08-16 11:17:55,491][02935] Num visible devices: 1 +[2024-08-16 11:17:55,512][02940] Worker 5 uses CPU cores [1] +[2024-08-16 11:17:55,552][02942] Worker 4 uses CPU cores [0] +[2024-08-16 11:17:55,561][02922] Conv encoder output size: 512 +[2024-08-16 11:17:55,562][02922] Policy head output size: 512 +[2024-08-16 11:17:55,615][02922] Created Actor Critic model with architecture: +[2024-08-16 11:17:55,615][02922] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + 
(running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): VizdoomEncoder( + (basic_encoder): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ELU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ELU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ELU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ELU) + ) + ) + ) + ) + (core): ModelCoreRNN( + (core): GRU(512, 512) + ) + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=5, bias=True) + ) +) +[2024-08-16 11:17:55,899][02922] Using optimizer +[2024-08-16 11:17:56,619][02922] No checkpoints found +[2024-08-16 11:17:56,619][02922] Did not load from checkpoint, starting from scratch! +[2024-08-16 11:17:56,619][02922] Initialized policy 0 weights for model version 0 +[2024-08-16 11:17:56,622][02922] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-08-16 11:17:56,630][02922] LearnerWorker_p0 finished initialization! +[2024-08-16 11:17:56,751][02935] RunningMeanStd input shape: (3, 72, 128) +[2024-08-16 11:17:56,753][02935] RunningMeanStd input shape: (1,) +[2024-08-16 11:17:56,765][02935] ConvEncoder: input_channels=3 +[2024-08-16 11:17:56,878][02935] Conv encoder output size: 512 +[2024-08-16 11:17:56,878][02935] Policy head output size: 512 +[2024-08-16 11:17:56,932][00349] Inference worker 0-0 is ready! +[2024-08-16 11:17:56,934][00349] All inference workers are ready! Signal rollout workers to start! +[2024-08-16 11:17:57,214][02939] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,218][02942] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,235][02943] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,246][02936] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,248][02937] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,252][02938] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,257][02941] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,248][02940] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-08-16 11:17:57,778][00349] Heartbeat connected on Batcher_0 +[2024-08-16 11:17:57,782][00349] Heartbeat connected on LearnerWorker_p0 +[2024-08-16 11:17:57,815][00349] Heartbeat connected on InferenceWorker_p0-w0 +[2024-08-16 11:17:57,846][00349] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2024-08-16 11:17:58,606][02937] Decorrelating experience for 0 frames... +[2024-08-16 11:17:58,606][02942] Decorrelating experience for 0 frames... +[2024-08-16 11:17:58,606][02939] Decorrelating experience for 0 frames... +[2024-08-16 11:17:58,606][02938] Decorrelating experience for 0 frames... 
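[Editor's sketch, not log output: the model printed above, rebuilt as plain PyTorch so the tensor shapes are concrete. The printout does not record the conv kernel sizes or strides, so the 32/64/128-filter stack below is an assumption chosen so that a (3, 72, 128) observation flattens to 2304 features and maps to the reported 512-dim encoder output.]

    import torch
    import torch.nn as nn

    # Assumed conv stack (kernel/stride values are NOT in the printout above).
    conv_head = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
        nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
    )
    obs = torch.zeros(1, 3, 72, 128)            # "RunningMeanStd input shape: (3, 72, 128)"
    flat = conv_head(obs).flatten(1)            # -> (1, 2304) with the assumed stack
    mlp_layers = nn.Sequential(                 # "Conv encoder output size: 512"
        nn.Linear(flat.shape[1], 512), nn.ELU(),
    )
    core = nn.GRU(512, 512)                     # ModelCoreRNN((core): GRU(512, 512))
    critic_linear = nn.Linear(512, 1)           # value head from the printout
    distribution_linear = nn.Linear(512, 5)     # logits for 5 discrete actions
    print(flat.shape, mlp_layers(flat).shape)   # torch.Size([1, 2304]) torch.Size([1, 512])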
+[2024-08-16 11:17:58,609][02943] Decorrelating experience for 0 frames...
+[2024-08-16 11:17:59,000][02938] Decorrelating experience for 32 frames...
+[2024-08-16 11:17:59,480][02938] Decorrelating experience for 64 frames...
+[2024-08-16 11:17:59,949][02938] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:00,036][00349] Heartbeat connected on RolloutWorker_w1
+[2024-08-16 11:18:00,110][02937] Decorrelating experience for 32 frames...
+[2024-08-16 11:18:00,116][02942] Decorrelating experience for 32 frames...
+[2024-08-16 11:18:00,121][02941] Decorrelating experience for 0 frames...
+[2024-08-16 11:18:00,129][02936] Decorrelating experience for 0 frames...
+[2024-08-16 11:18:01,266][02939] Decorrelating experience for 32 frames...
+[2024-08-16 11:18:01,269][02940] Decorrelating experience for 0 frames...
+[2024-08-16 11:18:01,695][02936] Decorrelating experience for 32 frames...
+[2024-08-16 11:18:01,745][02941] Decorrelating experience for 32 frames...
+[2024-08-16 11:18:02,606][02942] Decorrelating experience for 64 frames...
+[2024-08-16 11:18:02,843][00349] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2024-08-16 11:18:02,846][00349] Avg episode reward: [(0, '1.920')]
+[2024-08-16 11:18:03,080][02941] Decorrelating experience for 64 frames...
+[2024-08-16 11:18:03,112][02940] Decorrelating experience for 32 frames...
+[2024-08-16 11:18:03,133][02943] Decorrelating experience for 32 frames...
+[2024-08-16 11:18:03,718][02939] Decorrelating experience for 64 frames...
+[2024-08-16 11:18:04,348][02936] Decorrelating experience for 64 frames...
+[2024-08-16 11:18:04,423][02941] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:04,488][02937] Decorrelating experience for 64 frames...
+[2024-08-16 11:18:04,675][00349] Heartbeat connected on RolloutWorker_w6
+[2024-08-16 11:18:05,352][02942] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:06,512][00349] Heartbeat connected on RolloutWorker_w4
+[2024-08-16 11:18:06,640][02940] Decorrelating experience for 64 frames...
+[2024-08-16 11:18:06,683][02943] Decorrelating experience for 64 frames...
+[2024-08-16 11:18:06,728][02939] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:07,493][00349] Heartbeat connected on RolloutWorker_w3
+[2024-08-16 11:18:07,843][00349] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 116.4. Samples: 1164. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2024-08-16 11:18:07,848][00349] Avg episode reward: [(0, '2.989')]
+[2024-08-16 11:18:07,921][02936] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:08,819][00349] Heartbeat connected on RolloutWorker_w0
+[2024-08-16 11:18:10,776][02922] Signal inference workers to stop experience collection...
+[2024-08-16 11:18:10,817][02935] InferenceWorker_p0-w0: stopping experience collection
+[2024-08-16 11:18:11,747][02943] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:11,757][02940] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:11,866][02937] Decorrelating experience for 96 frames...
+[2024-08-16 11:18:12,177][00349] Heartbeat connected on RolloutWorker_w2
+[2024-08-16 11:18:12,310][00349] Heartbeat connected on RolloutWorker_w7
+[2024-08-16 11:18:12,315][00349] Heartbeat connected on RolloutWorker_w5
+[2024-08-16 11:18:12,437][02922] Signal inference workers to resume experience collection...
+[2024-08-16 11:18:12,438][02935] InferenceWorker_p0-w0: resuming experience collection
+[2024-08-16 11:18:12,843][00349] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 159.0. Samples: 2384. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2024-08-16 11:18:12,848][00349] Avg episode reward: [(0, '3.075')]
+[2024-08-16 11:18:17,843][00349] Fps is (10 sec: 2867.3, 60 sec: 1433.8, 300 sec: 1433.8). Total num frames: 28672. Throughput: 0: 227.6. Samples: 4552. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:18:17,849][00349] Avg episode reward: [(0, '3.799')]
+[2024-08-16 11:18:21,010][02935] Updated weights for policy 0, policy_version 10 (0.0028)
+[2024-08-16 11:18:22,843][00349] Fps is (10 sec: 4096.0, 60 sec: 1802.4, 300 sec: 1802.4). Total num frames: 45056. Throughput: 0: 426.4. Samples: 10658. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:18:22,851][00349] Avg episode reward: [(0, '4.220')]
+[2024-08-16 11:18:27,843][00349] Fps is (10 sec: 3276.8, 60 sec: 2048.2, 300 sec: 2048.2). Total num frames: 61440. Throughput: 0: 518.2. Samples: 15544. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:18:27,849][00349] Avg episode reward: [(0, '4.380')]
+[2024-08-16 11:18:32,843][00349] Fps is (10 sec: 3276.8, 60 sec: 2223.7, 300 sec: 2223.7). Total num frames: 77824. Throughput: 0: 490.9. Samples: 17180. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:18:32,845][00349] Avg episode reward: [(0, '4.450')]
+[2024-08-16 11:18:33,510][02935] Updated weights for policy 0, policy_version 20 (0.0037)
+[2024-08-16 11:18:37,844][00349] Fps is (10 sec: 3685.9, 60 sec: 2457.7, 300 sec: 2457.7). Total num frames: 98304. Throughput: 0: 596.6. Samples: 23862. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:18:37,847][00349] Avg episode reward: [(0, '4.550')]
+[2024-08-16 11:18:42,843][00349] Fps is (10 sec: 4096.0, 60 sec: 2639.8, 300 sec: 2639.8). Total num frames: 118784. Throughput: 0: 663.8. Samples: 29870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:18:42,845][00349] Avg episode reward: [(0, '4.545')]
+[2024-08-16 11:18:42,857][02922] Saving new best policy, reward=4.545!
+[2024-08-16 11:18:44,422][02935] Updated weights for policy 0, policy_version 30 (0.0031)
+[2024-08-16 11:18:47,843][00349] Fps is (10 sec: 3277.3, 60 sec: 2621.6, 300 sec: 2621.6). Total num frames: 131072. Throughput: 0: 704.3. Samples: 31694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:18:47,846][00349] Avg episode reward: [(0, '4.500')]
+[2024-08-16 11:18:52,845][00349] Fps is (10 sec: 3276.1, 60 sec: 2755.5, 300 sec: 2755.5). Total num frames: 151552. Throughput: 0: 796.9. Samples: 37026. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:18:52,856][00349] Avg episode reward: [(0, '4.303')]
+[2024-08-16 11:18:55,424][02935] Updated weights for policy 0, policy_version 40 (0.0039)
+[2024-08-16 11:18:57,843][00349] Fps is (10 sec: 4096.0, 60 sec: 2867.3, 300 sec: 2867.3). Total num frames: 172032. Throughput: 0: 912.2. Samples: 43432. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:18:57,845][00349] Avg episode reward: [(0, '4.342')]
+[2024-08-16 11:19:02,843][00349] Fps is (10 sec: 3277.5, 60 sec: 3072.0, 300 sec: 2835.8). Total num frames: 184320. Throughput: 0: 910.8. Samples: 45536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:19:02,849][00349] Avg episode reward: [(0, '4.424')]
+[2024-08-16 11:19:07,613][02935] Updated weights for policy 0, policy_version 50 (0.0033)
+[2024-08-16 11:19:07,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2925.8). Total num frames: 204800. Throughput: 0: 876.3. Samples: 50090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:19:07,850][00349] Avg episode reward: [(0, '4.540')]
+[2024-08-16 11:19:12,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3003.9). Total num frames: 225280. Throughput: 0: 916.9. Samples: 56806. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:19:12,846][00349] Avg episode reward: [(0, '4.370')]
+[2024-08-16 11:19:17,846][00349] Fps is (10 sec: 3685.3, 60 sec: 3549.7, 300 sec: 3020.8). Total num frames: 241664. Throughput: 0: 951.9. Samples: 60018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:19:17,851][00349] Avg episode reward: [(0, '4.222')]
+[2024-08-16 11:19:18,162][02935] Updated weights for policy 0, policy_version 60 (0.0018)
+[2024-08-16 11:19:22,845][00349] Fps is (10 sec: 3276.3, 60 sec: 3549.8, 300 sec: 3035.9). Total num frames: 258048. Throughput: 0: 893.6. Samples: 64076. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:19:22,852][00349] Avg episode reward: [(0, '4.403')]
+[2024-08-16 11:19:27,843][00349] Fps is (10 sec: 4097.2, 60 sec: 3686.4, 300 sec: 3140.4). Total num frames: 282624. Throughput: 0: 898.6. Samples: 70308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:19:27,845][00349] Avg episode reward: [(0, '4.640')]
+[2024-08-16 11:19:27,852][02922] Saving new best policy, reward=4.640!
+[2024-08-16 11:19:28,544][02935] Updated weights for policy 0, policy_version 70 (0.0046)
+[2024-08-16 11:19:32,843][00349] Fps is (10 sec: 4506.3, 60 sec: 3754.7, 300 sec: 3190.7). Total num frames: 303104. Throughput: 0: 932.3. Samples: 73646. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
+[2024-08-16 11:19:32,854][00349] Avg episode reward: [(0, '4.479')]
+[2024-08-16 11:19:32,861][02922] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000074_303104.pth...
+[2024-08-16 11:19:37,845][00349] Fps is (10 sec: 3276.3, 60 sec: 3618.1, 300 sec: 3154.0). Total num frames: 315392. Throughput: 0: 925.9. Samples: 78690. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:19:37,848][00349] Avg episode reward: [(0, '4.420')]
+[2024-08-16 11:19:40,484][02935] Updated weights for policy 0, policy_version 80 (0.0055)
+[2024-08-16 11:19:42,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3198.9). Total num frames: 335872. Throughput: 0: 901.0. Samples: 83978. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:19:42,846][00349] Avg episode reward: [(0, '4.433')]
+[2024-08-16 11:19:47,843][00349] Fps is (10 sec: 4096.7, 60 sec: 3754.7, 300 sec: 3239.7). Total num frames: 356352. Throughput: 0: 929.8. Samples: 87376. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:19:47,848][00349] Avg episode reward: [(0, '4.502')]
+[2024-08-16 11:19:49,731][02935] Updated weights for policy 0, policy_version 90 (0.0021)
+[2024-08-16 11:19:52,845][00349] Fps is (10 sec: 4095.2, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 376832. Throughput: 0: 968.4. Samples: 93670. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:19:52,848][00349] Avg episode reward: [(0, '4.459')]
+[2024-08-16 11:19:57,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3242.7). Total num frames: 389120. Throughput: 0: 910.2. Samples: 97766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:19:57,847][00349] Avg episode reward: [(0, '4.744')]
+[2024-08-16 11:19:57,849][02922] Saving new best policy, reward=4.744!
+[2024-08-16 11:20:01,740][02935] Updated weights for policy 0, policy_version 100 (0.0027)
+[2024-08-16 11:20:02,843][00349] Fps is (10 sec: 3687.1, 60 sec: 3822.9, 300 sec: 3309.6). Total num frames: 413696. Throughput: 0: 907.8. Samples: 100866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:20:02,847][00349] Avg episode reward: [(0, '4.693')]
+[2024-08-16 11:20:07,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3339.9). Total num frames: 434176. Throughput: 0: 969.3. Samples: 107694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:20:07,845][00349] Avg episode reward: [(0, '4.760')]
+[2024-08-16 11:20:07,850][02922] Saving new best policy, reward=4.760!
+[2024-08-16 11:20:12,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3307.2). Total num frames: 446464. Throughput: 0: 933.5. Samples: 112314. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:20:12,846][00349] Avg episode reward: [(0, '4.743')]
+[2024-08-16 11:20:12,987][02935] Updated weights for policy 0, policy_version 110 (0.0047)
+[2024-08-16 11:20:17,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3754.9, 300 sec: 3335.4). Total num frames: 466944. Throughput: 0: 908.1. Samples: 114510. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:20:17,852][00349] Avg episode reward: [(0, '4.722')]
+[2024-08-16 11:20:22,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3361.6). Total num frames: 487424. Throughput: 0: 949.6. Samples: 121420. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:20:22,845][00349] Avg episode reward: [(0, '4.814')]
+[2024-08-16 11:20:22,871][02922] Saving new best policy, reward=4.814!
+[2024-08-16 11:20:22,880][02935] Updated weights for policy 0, policy_version 120 (0.0044)
+[2024-08-16 11:20:27,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3386.1). Total num frames: 507904. Throughput: 0: 960.7. Samples: 127208. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2024-08-16 11:20:27,846][00349] Avg episode reward: [(0, '4.984')]
+[2024-08-16 11:20:27,848][02922] Saving new best policy, reward=4.984!
+[2024-08-16 11:20:32,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3356.1). Total num frames: 520192. Throughput: 0: 928.6. Samples: 129164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:20:32,846][00349] Avg episode reward: [(0, '5.004')]
+[2024-08-16 11:20:32,857][02922] Saving new best policy, reward=5.004!
+[2024-08-16 11:20:35,838][02935] Updated weights for policy 0, policy_version 130 (0.0042)
+[2024-08-16 11:20:37,845][00349] Fps is (10 sec: 2866.6, 60 sec: 3686.4, 300 sec: 3353.6). Total num frames: 536576. Throughput: 0: 891.2. Samples: 133772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:20:37,850][00349] Avg episode reward: [(0, '5.292')]
+[2024-08-16 11:20:37,854][02922] Saving new best policy, reward=5.292!
+[2024-08-16 11:20:42,843][00349] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3351.3). Total num frames: 552960. Throughput: 0: 904.2. Samples: 138454. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:20:42,857][00349] Avg episode reward: [(0, '5.336')]
+[2024-08-16 11:20:42,865][02922] Saving new best policy, reward=5.336!
+[2024-08-16 11:20:47,843][00349] Fps is (10 sec: 2867.7, 60 sec: 3481.6, 300 sec: 3325.0). Total num frames: 565248. Throughput: 0: 882.4. Samples: 140572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:20:47,849][00349] Avg episode reward: [(0, '5.348')]
+[2024-08-16 11:20:47,851][02922] Saving new best policy, reward=5.348!
+[2024-08-16 11:20:50,496][02935] Updated weights for policy 0, policy_version 140 (0.0032)
+[2024-08-16 11:20:52,843][00349] Fps is (10 sec: 2867.3, 60 sec: 3413.4, 300 sec: 3323.7). Total num frames: 581632. Throughput: 0: 814.4. Samples: 144342. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:20:52,849][00349] Avg episode reward: [(0, '5.162')]
+[2024-08-16 11:20:57,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3345.1). Total num frames: 602112. Throughput: 0: 858.2. Samples: 150934. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:20:57,846][00349] Avg episode reward: [(0, '5.212')]
+[2024-08-16 11:20:59,925][02935] Updated weights for policy 0, policy_version 150 (0.0021)
+[2024-08-16 11:21:02,843][00349] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3343.3). Total num frames: 618496. Throughput: 0: 882.0. Samples: 154202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:21:02,845][00349] Avg episode reward: [(0, '5.200')]
+[2024-08-16 11:21:07,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3341.5). Total num frames: 634880. Throughput: 0: 816.4. Samples: 158156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:21:07,850][00349] Avg episode reward: [(0, '5.315')]
+[2024-08-16 11:21:12,145][02935] Updated weights for policy 0, policy_version 160 (0.0020)
+[2024-08-16 11:21:12,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3360.9). Total num frames: 655360. Throughput: 0: 824.6. Samples: 164316. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:21:12,849][00349] Avg episode reward: [(0, '5.680')]
+[2024-08-16 11:21:12,858][02922] Saving new best policy, reward=5.680!
+[2024-08-16 11:21:17,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3379.3). Total num frames: 675840. Throughput: 0: 852.7. Samples: 167534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:21:17,847][00349] Avg episode reward: [(0, '5.564')]
+[2024-08-16 11:21:22,849][00349] Fps is (10 sec: 3684.3, 60 sec: 3413.0, 300 sec: 3376.7). Total num frames: 692224. Throughput: 0: 864.5. Samples: 172676. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:21:22,852][00349] Avg episode reward: [(0, '5.541')]
+[2024-08-16 11:21:23,730][02935] Updated weights for policy 0, policy_version 170 (0.0028)
+[2024-08-16 11:21:27,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3374.4). Total num frames: 708608. Throughput: 0: 871.2. Samples: 177658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:21:27,849][00349] Avg episode reward: [(0, '5.776')]
+[2024-08-16 11:21:27,870][02922] Saving new best policy, reward=5.776!
+[2024-08-16 11:21:32,843][00349] Fps is (10 sec: 4098.4, 60 sec: 3549.9, 300 sec: 3410.2). Total num frames: 733184. Throughput: 0: 899.5. Samples: 181050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:21:32,850][00349] Avg episode reward: [(0, '6.231')]
+[2024-08-16 11:21:32,859][02922] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth...
+[2024-08-16 11:21:32,991][02922] Saving new best policy, reward=6.231!
+[2024-08-16 11:21:33,469][02935] Updated weights for policy 0, policy_version 180 (0.0046)
+[2024-08-16 11:21:37,846][00349] Fps is (10 sec: 4094.7, 60 sec: 3549.8, 300 sec: 3407.1). Total num frames: 749568. Throughput: 0: 952.6. Samples: 187212. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:21:37,849][00349] Avg episode reward: [(0, '6.358')]
+[2024-08-16 11:21:37,859][02922] Saving new best policy, reward=6.358!
+[2024-08-16 11:21:42,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3404.3). Total num frames: 765952. Throughput: 0: 896.5. Samples: 191278. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:21:42,845][00349] Avg episode reward: [(0, '6.255')]
+[2024-08-16 11:21:45,715][02935] Updated weights for policy 0, policy_version 190 (0.0038)
+[2024-08-16 11:21:47,843][00349] Fps is (10 sec: 3687.5, 60 sec: 3686.4, 300 sec: 3419.3). Total num frames: 786432. Throughput: 0: 893.0. Samples: 194386. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:21:47,845][00349] Avg episode reward: [(0, '6.044')]
+[2024-08-16 11:21:52,849][00349] Fps is (10 sec: 4502.9, 60 sec: 3822.6, 300 sec: 3451.1). Total num frames: 811008. Throughput: 0: 955.7. Samples: 201166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:21:52,854][00349] Avg episode reward: [(0, '6.738')]
+[2024-08-16 11:21:52,864][02922] Saving new best policy, reward=6.738!
+[2024-08-16 11:21:55,877][02935] Updated weights for policy 0, policy_version 200 (0.0031)
+[2024-08-16 11:21:57,848][00349] Fps is (10 sec: 3684.7, 60 sec: 3686.1, 300 sec: 3430.4). Total num frames: 823296. Throughput: 0: 926.7. Samples: 206022. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:21:57,858][00349] Avg episode reward: [(0, '6.856')]
+[2024-08-16 11:21:57,859][02922] Saving new best policy, reward=6.856!
+[2024-08-16 11:22:02,843][00349] Fps is (10 sec: 2868.9, 60 sec: 3686.4, 300 sec: 3427.3). Total num frames: 839680. Throughput: 0: 900.0. Samples: 208032. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:22:02,854][00349] Avg episode reward: [(0, '7.400')]
+[2024-08-16 11:22:02,867][02922] Saving new best policy, reward=7.400!
+[2024-08-16 11:22:07,188][02935] Updated weights for policy 0, policy_version 210 (0.0024)
+[2024-08-16 11:22:07,843][00349] Fps is (10 sec: 3688.2, 60 sec: 3754.7, 300 sec: 3440.7). Total num frames: 860160. Throughput: 0: 927.2. Samples: 214394. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
+[2024-08-16 11:22:07,849][00349] Avg episode reward: [(0, '7.019')]
+[2024-08-16 11:22:12,846][00349] Fps is (10 sec: 3685.3, 60 sec: 3686.2, 300 sec: 3437.4). Total num frames: 876544. Throughput: 0: 935.2. Samples: 219744. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:22:12,850][00349] Avg episode reward: [(0, '7.411')]
+[2024-08-16 11:22:12,860][02922] Saving new best policy, reward=7.411!
+[2024-08-16 11:22:17,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3418.6). Total num frames: 888832. Throughput: 0: 903.6. Samples: 221712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:22:17,845][00349] Avg episode reward: [(0, '7.340')]
+[2024-08-16 11:22:19,594][02935] Updated weights for policy 0, policy_version 220 (0.0019)
+[2024-08-16 11:22:22,843][00349] Fps is (10 sec: 3687.5, 60 sec: 3686.8, 300 sec: 3446.9). Total num frames: 913408. Throughput: 0: 895.1. Samples: 227490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:22:22,846][00349] Avg episode reward: [(0, '7.486')]
+[2024-08-16 11:22:22,856][02922] Saving new best policy, reward=7.486!
+[2024-08-16 11:22:27,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3458.9). Total num frames: 933888. Throughput: 0: 951.9. Samples: 234114. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:22:27,846][00349] Avg episode reward: [(0, '7.490')]
+[2024-08-16 11:22:27,851][02922] Saving new best policy, reward=7.490!
+[2024-08-16 11:22:29,639][02935] Updated weights for policy 0, policy_version 230 (0.0025)
+[2024-08-16 11:22:32,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3455.6). Total num frames: 950272. Throughput: 0: 934.0. Samples: 236414. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:22:32,850][00349] Avg episode reward: [(0, '7.930')]
+[2024-08-16 11:22:32,863][02922] Saving new best policy, reward=7.930!
+[2024-08-16 11:22:37,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.3, 300 sec: 3452.4). Total num frames: 966656. Throughput: 0: 876.5. Samples: 240604. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:22:37,850][00349] Avg episode reward: [(0, '7.550')]
+[2024-08-16 11:22:40,876][02935] Updated weights for policy 0, policy_version 240 (0.0033)
+[2024-08-16 11:22:42,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3478.0). Total num frames: 991232. Throughput: 0: 920.9. Samples: 247460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:22:42,849][00349] Avg episode reward: [(0, '7.682')]
+[2024-08-16 11:22:47,845][00349] Fps is (10 sec: 4094.9, 60 sec: 3686.3, 300 sec: 3474.5). Total num frames: 1007616. Throughput: 0: 951.9. Samples: 250868. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:22:47,848][00349] Avg episode reward: [(0, '7.771')]
+[2024-08-16 11:22:52,765][02935] Updated weights for policy 0, policy_version 250 (0.0031)
+[2024-08-16 11:22:52,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3550.2, 300 sec: 3471.2). Total num frames: 1024000. Throughput: 0: 901.8. Samples: 254976. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:22:52,850][00349] Avg episode reward: [(0, '7.744')]
+[2024-08-16 11:22:57,843][00349] Fps is (10 sec: 3687.3, 60 sec: 3686.7, 300 sec: 3540.6). Total num frames: 1044480. Throughput: 0: 924.3. Samples: 261334. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:22:57,848][00349] Avg episode reward: [(0, '7.476')]
+[2024-08-16 11:23:01,766][02935] Updated weights for policy 0, policy_version 260 (0.0052)
+[2024-08-16 11:23:02,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3610.0). Total num frames: 1064960. Throughput: 0: 956.0. Samples: 264730. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:23:02,852][00349] Avg episode reward: [(0, '7.954')]
+[2024-08-16 11:23:02,925][02922] Saving new best policy, reward=7.954!
+[2024-08-16 11:23:07,847][00349] Fps is (10 sec: 3684.9, 60 sec: 3686.2, 300 sec: 3651.6). Total num frames: 1081344. Throughput: 0: 941.1. Samples: 269842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:23:07,849][00349] Avg episode reward: [(0, '8.489')]
+[2024-08-16 11:23:07,851][02922] Saving new best policy, reward=8.489!
+[2024-08-16 11:23:12,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3754.9, 300 sec: 3637.8). Total num frames: 1101824. Throughput: 0: 910.2. Samples: 275072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:23:12,849][00349] Avg episode reward: [(0, '8.522')]
+[2024-08-16 11:23:12,858][02922] Saving new best policy, reward=8.522!
+[2024-08-16 11:23:13,828][02935] Updated weights for policy 0, policy_version 270 (0.0030)
+[2024-08-16 11:23:17,843][00349] Fps is (10 sec: 4097.7, 60 sec: 3891.2, 300 sec: 3651.7). Total num frames: 1122304. Throughput: 0: 933.1. Samples: 278402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:23:17,850][00349] Avg episode reward: [(0, '8.591')]
+[2024-08-16 11:23:17,852][02922] Saving new best policy, reward=8.591!
+[2024-08-16 11:23:22,846][00349] Fps is (10 sec: 3685.2, 60 sec: 3754.5, 300 sec: 3651.6). Total num frames: 1138688. Throughput: 0: 981.5. Samples: 284774. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:23:22,848][00349] Avg episode reward: [(0, '8.487')]
+[2024-08-16 11:23:24,554][02935] Updated weights for policy 0, policy_version 280 (0.0023)
+[2024-08-16 11:23:27,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 1155072. Throughput: 0: 921.3. Samples: 288918. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:23:27,848][00349] Avg episode reward: [(0, '8.768')]
+[2024-08-16 11:23:27,850][02922] Saving new best policy, reward=8.768!
+[2024-08-16 11:23:32,843][00349] Fps is (10 sec: 4097.3, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 1179648. Throughput: 0: 917.8. Samples: 292168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:23:32,846][00349] Avg episode reward: [(0, '8.377')]
+[2024-08-16 11:23:32,854][02922] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000288_1179648.pth...
+[2024-08-16 11:23:32,975][02922] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000074_303104.pth
+[2024-08-16 11:23:35,546][02935] Updated weights for policy 0, policy_version 290 (0.0042)
+[2024-08-16 11:23:37,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 1191936. Throughput: 0: 949.2. Samples: 297692. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:23:37,852][00349] Avg episode reward: [(0, '8.688')]
+[2024-08-16 11:23:42,843][00349] Fps is (10 sec: 2457.6, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 1204224. Throughput: 0: 886.7. Samples: 301234. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:23:42,850][00349] Avg episode reward: [(0, '8.700')]
+[2024-08-16 11:23:47,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3550.0, 300 sec: 3623.9). Total num frames: 1220608. Throughput: 0: 856.3. Samples: 303262. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:23:47,848][00349] Avg episode reward: [(0, '8.340')]
+[2024-08-16 11:23:49,524][02935] Updated weights for policy 0, policy_version 300 (0.0027)
+[2024-08-16 11:23:52,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 1241088. Throughput: 0: 880.5. Samples: 309462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:23:52,847][00349] Avg episode reward: [(0, '8.522')]
+[2024-08-16 11:23:57,845][00349] Fps is (10 sec: 4095.2, 60 sec: 3618.0, 300 sec: 3651.7). Total num frames: 1261568. Throughput: 0: 907.7. Samples: 315918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:23:57,852][00349] Avg episode reward: [(0, '8.269')]
+[2024-08-16 11:24:00,153][02935] Updated weights for policy 0, policy_version 310 (0.0047)
+[2024-08-16 11:24:02,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 1273856. Throughput: 0: 879.1. Samples: 317960. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:24:02,848][00349] Avg episode reward: [(0, '9.094')]
+[2024-08-16 11:24:02,858][02922] Saving new best policy, reward=9.094!
+[2024-08-16 11:24:07,843][00349] Fps is (10 sec: 3277.4, 60 sec: 3550.1, 300 sec: 3623.9). Total num frames: 1294336. Throughput: 0: 849.2. Samples: 322984. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:24:07,845][00349] Avg episode reward: [(0, '9.376')]
+[2024-08-16 11:24:07,849][02922] Saving new best policy, reward=9.376!
+[2024-08-16 11:24:10,683][02935] Updated weights for policy 0, policy_version 320 (0.0038)
+[2024-08-16 11:24:12,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 1318912. Throughput: 0: 907.2. Samples: 329740. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:24:12,845][00349] Avg episode reward: [(0, '10.035')]
+[2024-08-16 11:24:12,856][02922] Saving new best policy, reward=10.035!
+[2024-08-16 11:24:17,843][00349] Fps is (10 sec: 4095.8, 60 sec: 3549.8, 300 sec: 3651.7). Total num frames: 1335296. Throughput: 0: 899.1. Samples: 332628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:24:17,847][00349] Avg episode reward: [(0, '9.992')]
+[2024-08-16 11:24:22,444][02935] Updated weights for policy 0, policy_version 330 (0.0044)
+[2024-08-16 11:24:22,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3623.9). Total num frames: 1351680. Throughput: 0: 869.7. Samples: 336828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:24:22,849][00349] Avg episode reward: [(0, '10.416')]
+[2024-08-16 11:24:22,857][02922] Saving new best policy, reward=10.416!
+[2024-08-16 11:24:27,843][00349] Fps is (10 sec: 3686.6, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 1372160. Throughput: 0: 935.4. Samples: 343326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:24:27,847][00349] Avg episode reward: [(0, '11.079')]
+[2024-08-16 11:24:27,849][02922] Saving new best policy, reward=11.079!
+[2024-08-16 11:24:32,082][02935] Updated weights for policy 0, policy_version 340 (0.0039)
+[2024-08-16 11:24:32,845][00349] Fps is (10 sec: 4095.2, 60 sec: 3549.7, 300 sec: 3651.7). Total num frames: 1392640. Throughput: 0: 964.9. Samples: 346686. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:24:32,847][00349] Avg episode reward: [(0, '11.266')]
+[2024-08-16 11:24:32,856][02922] Saving new best policy, reward=11.266!
+[2024-08-16 11:24:37,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3623.9). Total num frames: 1404928. Throughput: 0: 929.9. Samples: 351306. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2024-08-16 11:24:37,849][00349] Avg episode reward: [(0, '11.897')]
+[2024-08-16 11:24:37,852][02922] Saving new best policy, reward=11.897!
+[2024-08-16 11:24:42,843][00349] Fps is (10 sec: 3277.4, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 1425408. Throughput: 0: 909.4. Samples: 356840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:24:42,845][00349] Avg episode reward: [(0, '12.068')]
+[2024-08-16 11:24:42,904][02922] Saving new best policy, reward=12.068!
+[2024-08-16 11:24:43,871][02935] Updated weights for policy 0, policy_version 350 (0.0030)
+[2024-08-16 11:24:47,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 1449984. Throughput: 0: 938.1. Samples: 360176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:24:47,849][00349] Avg episode reward: [(0, '12.868')]
+[2024-08-16 11:24:47,852][02922] Saving new best policy, reward=12.868!
+[2024-08-16 11:24:52,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 1462272. Throughput: 0: 953.5. Samples: 365890. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:24:52,848][00349] Avg episode reward: [(0, '13.478')]
+[2024-08-16 11:24:52,861][02922] Saving new best policy, reward=13.478!
+[2024-08-16 11:24:56,185][02935] Updated weights for policy 0, policy_version 360 (0.0042)
+[2024-08-16 11:24:57,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3618.2, 300 sec: 3610.0). Total num frames: 1478656. Throughput: 0: 895.6. Samples: 370042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:24:57,847][00349] Avg episode reward: [(0, '14.069')]
+[2024-08-16 11:24:57,850][02922] Saving new best policy, reward=14.069!
+[2024-08-16 11:25:02,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3623.9). Total num frames: 1503232. Throughput: 0: 903.0. Samples: 373264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:25:02,846][00349] Avg episode reward: [(0, '15.336')]
+[2024-08-16 11:25:02,854][02922] Saving new best policy, reward=15.336!
+[2024-08-16 11:25:05,632][02935] Updated weights for policy 0, policy_version 370 (0.0033)
+[2024-08-16 11:25:07,845][00349] Fps is (10 sec: 4095.2, 60 sec: 3754.5, 300 sec: 3637.8). Total num frames: 1519616. Throughput: 0: 954.5. Samples: 379780. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:25:07,853][00349] Avg episode reward: [(0, '15.369')]
+[2024-08-16 11:25:07,863][02922] Saving new best policy, reward=15.369!
+[2024-08-16 11:25:12,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 1536000. Throughput: 0: 905.0. Samples: 384052. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:25:12,845][00349] Avg episode reward: [(0, '15.483')]
+[2024-08-16 11:25:12,853][02922] Saving new best policy, reward=15.483!
+[2024-08-16 11:25:17,583][02935] Updated weights for policy 0, policy_version 380 (0.0030)
+[2024-08-16 11:25:17,843][00349] Fps is (10 sec: 3687.1, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 1556480. Throughput: 0: 887.3. Samples: 386614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:25:17,848][00349] Avg episode reward: [(0, '15.294')]
+[2024-08-16 11:25:22,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3623.9). Total num frames: 1576960. Throughput: 0: 939.6. Samples: 393586. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:25:22,850][00349] Avg episode reward: [(0, '14.099')]
+[2024-08-16 11:25:27,851][00349] Fps is (10 sec: 3683.4, 60 sec: 3685.9, 300 sec: 3637.7). Total num frames: 1593344. Throughput: 0: 937.5. Samples: 399036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:25:27,859][00349] Avg episode reward: [(0, '13.492')]
+[2024-08-16 11:25:28,279][02935] Updated weights for policy 0, policy_version 390 (0.0035)
+[2024-08-16 11:25:32,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3637.8). Total num frames: 1609728. Throughput: 0: 909.1. Samples: 401086. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:25:32,851][00349] Avg episode reward: [(0, '13.395')]
+[2024-08-16 11:25:32,866][02922] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000393_1609728.pth...
+[2024-08-16 11:25:33,022][02922] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth
+[2024-08-16 11:25:37,843][00349] Fps is (10 sec: 4099.3, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 1634304. Throughput: 0: 920.3. Samples: 407302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:25:37,848][00349] Avg episode reward: [(0, '11.959')]
+[2024-08-16 11:25:38,307][02935] Updated weights for policy 0, policy_version 400 (0.0024)
+[2024-08-16 11:25:42,850][00349] Fps is (10 sec: 4502.5, 60 sec: 3822.5, 300 sec: 3693.3). Total num frames: 1654784. Throughput: 0: 979.1. Samples: 414110. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:25:42,852][00349] Avg episode reward: [(0, '11.398')]
+[2024-08-16 11:25:47,845][00349] Fps is (10 sec: 3276.1, 60 sec: 3618.0, 300 sec: 3679.4). Total num frames: 1667072. Throughput: 0: 954.9. Samples: 416236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:25:47,848][00349] Avg episode reward: [(0, '11.628')]
+[2024-08-16 11:25:50,113][02935] Updated weights for policy 0, policy_version 410 (0.0044)
+[2024-08-16 11:25:52,843][00349] Fps is (10 sec: 3279.1, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 1687552. Throughput: 0: 921.3. Samples: 421238. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:25:52,845][00349] Avg episode reward: [(0, '11.389')]
+[2024-08-16 11:25:57,843][00349] Fps is (10 sec: 4506.6, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 1712128. Throughput: 0: 977.2. Samples: 428028. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:25:57,845][00349] Avg episode reward: [(0, '12.256')]
+[2024-08-16 11:25:59,465][02935] Updated weights for policy 0, policy_version 420 (0.0037)
+[2024-08-16 11:26:02,844][00349] Fps is (10 sec: 4095.6, 60 sec: 3754.6, 300 sec: 3707.2). Total num frames: 1728512. Throughput: 0: 985.4. Samples: 430958. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:26:02,849][00349] Avg episode reward: [(0, '12.856')]
+[2024-08-16 11:26:07,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3693.3). Total num frames: 1744896. Throughput: 0: 923.5. Samples: 435142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:26:07,848][00349] Avg episode reward: [(0, '13.882')]
+[2024-08-16 11:26:11,236][02935] Updated weights for policy 0, policy_version 430 (0.0040)
+[2024-08-16 11:26:12,843][00349] Fps is (10 sec: 3686.7, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 1765376. Throughput: 0: 948.7. Samples: 441720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:26:12,845][00349] Avg episode reward: [(0, '14.631')]
+[2024-08-16 11:26:17,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3707.3). Total num frames: 1785856. Throughput: 0: 979.0. Samples: 445142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:26:17,850][00349] Avg episode reward: [(0, '15.301')]
+[2024-08-16 11:26:22,437][02935] Updated weights for policy 0, policy_version 440 (0.0018)
+[2024-08-16 11:26:22,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1802240. Throughput: 0: 947.9. Samples: 449956. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:26:22,848][00349] Avg episode reward: [(0, '15.472')]
+[2024-08-16 11:26:27,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3823.4, 300 sec: 3693.3). Total num frames: 1822720. Throughput: 0: 919.3. Samples: 455474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:26:27,847][00349] Avg episode reward: [(0, '14.746')]
+[2024-08-16 11:26:32,338][02935] Updated weights for policy 0, policy_version 450 (0.0036)
+[2024-08-16 11:26:32,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3707.3). Total num frames: 1843200. Throughput: 0: 942.9. Samples: 458666. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:26:32,847][00349] Avg episode reward: [(0, '14.358')]
+[2024-08-16 11:26:37,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1855488. Throughput: 0: 960.0. Samples: 464438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:26:37,845][00349] Avg episode reward: [(0, '14.146')]
+[2024-08-16 11:26:42,843][00349] Fps is (10 sec: 2457.6, 60 sec: 3550.3, 300 sec: 3665.6). Total num frames: 1867776. Throughput: 0: 880.7. Samples: 467658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:26:42,848][00349] Avg episode reward: [(0, '14.358')]
+[2024-08-16 11:26:47,111][02935] Updated weights for policy 0, policy_version 460 (0.0037)
+[2024-08-16 11:26:47,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3618.3, 300 sec: 3637.9). Total num frames: 1884160. Throughput: 0: 854.9. Samples: 469426. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:26:47,845][00349] Avg episode reward: [(0, '14.933')]
+[2024-08-16 11:26:52,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1908736. Throughput: 0: 909.8. Samples: 476082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:26:52,845][00349] Avg episode reward: [(0, '15.435')]
+[2024-08-16 11:26:56,305][02935] Updated weights for policy 0, policy_version 470 (0.0020)
+[2024-08-16 11:26:57,843][00349] Fps is (10 sec: 4505.7, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1929216. Throughput: 0: 900.5. Samples: 482242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:26:57,853][00349] Avg episode reward: [(0, '16.557')]
+[2024-08-16 11:26:57,856][02922] Saving new best policy, reward=16.557!
+[2024-08-16 11:27:02,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 1941504. Throughput: 0: 867.2. Samples: 484166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:27:02,847][00349] Avg episode reward: [(0, '16.029')]
+[2024-08-16 11:27:07,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1961984. Throughput: 0: 879.7. Samples: 489542. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2024-08-16 11:27:07,851][00349] Avg episode reward: [(0, '16.352')]
+[2024-08-16 11:27:08,147][02935] Updated weights for policy 0, policy_version 480 (0.0025)
+[2024-08-16 11:27:12,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 1986560. Throughput: 0: 907.2. Samples: 496298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:27:12,851][00349] Avg episode reward: [(0, '17.282')] +[2024-08-16 11:27:12,860][02922] Saving new best policy, reward=17.282! +[2024-08-16 11:27:17,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3679.5). Total num frames: 1998848. Throughput: 0: 893.4. Samples: 498870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:27:17,845][00349] Avg episode reward: [(0, '16.775')] +[2024-08-16 11:27:19,982][02935] Updated weights for policy 0, policy_version 490 (0.0029) +[2024-08-16 11:27:22,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 2015232. Throughput: 0: 860.4. Samples: 503154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:27:22,850][00349] Avg episode reward: [(0, '16.453')] +[2024-08-16 11:27:27,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2039808. Throughput: 0: 940.0. Samples: 509956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:27:27,850][00349] Avg episode reward: [(0, '16.556')] +[2024-08-16 11:27:29,329][02935] Updated weights for policy 0, policy_version 500 (0.0020) +[2024-08-16 11:27:32,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3693.3). Total num frames: 2056192. Throughput: 0: 972.6. Samples: 513192. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2024-08-16 11:27:32,846][00349] Avg episode reward: [(0, '16.391')] +[2024-08-16 11:27:32,859][02922] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000502_2056192.pth... +[2024-08-16 11:27:33,040][02922] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000288_1179648.pth +[2024-08-16 11:27:37,847][00349] Fps is (10 sec: 3275.5, 60 sec: 3617.9, 300 sec: 3665.5). Total num frames: 2072576. Throughput: 0: 915.5. Samples: 517284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:27:37,850][00349] Avg episode reward: [(0, '15.660')] +[2024-08-16 11:27:41,575][02935] Updated weights for policy 0, policy_version 510 (0.0019) +[2024-08-16 11:27:42,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 2093056. Throughput: 0: 909.6. Samples: 523176. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:27:42,851][00349] Avg episode reward: [(0, '15.354')] +[2024-08-16 11:27:47,843][00349] Fps is (10 sec: 4097.6, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 2113536. Throughput: 0: 941.8. Samples: 526546. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:27:47,845][00349] Avg episode reward: [(0, '14.243')] +[2024-08-16 11:27:52,267][02935] Updated weights for policy 0, policy_version 520 (0.0025) +[2024-08-16 11:27:52,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2129920. Throughput: 0: 948.0. Samples: 532202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:27:52,845][00349] Avg episode reward: [(0, '14.698')] +[2024-08-16 11:27:57,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2146304. Throughput: 0: 906.1. Samples: 537072. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-08-16 11:27:57,850][00349] Avg episode reward: [(0, '16.252')] +[2024-08-16 11:28:02,539][02935] Updated weights for policy 0, policy_version 530 (0.0035) +[2024-08-16 11:28:02,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3693.4). Total num frames: 2170880. Throughput: 0: 920.6. Samples: 540296. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:28:02,851][00349] Avg episode reward: [(0, '17.679')] +[2024-08-16 11:28:02,860][02922] Saving new best policy, reward=17.679! +[2024-08-16 11:28:07,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 2187264. Throughput: 0: 972.8. Samples: 546930. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:28:07,848][00349] Avg episode reward: [(0, '17.801')] +[2024-08-16 11:28:07,863][02922] Saving new best policy, reward=17.801! +[2024-08-16 11:28:12,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3651.7). Total num frames: 2199552. Throughput: 0: 909.1. Samples: 550866. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:28:12,847][00349] Avg episode reward: [(0, '19.095')] +[2024-08-16 11:28:12,896][02922] Saving new best policy, reward=19.095! +[2024-08-16 11:28:14,781][02935] Updated weights for policy 0, policy_version 540 (0.0039) +[2024-08-16 11:28:17,847][00349] Fps is (10 sec: 3684.8, 60 sec: 3754.4, 300 sec: 3679.4). Total num frames: 2224128. Throughput: 0: 902.3. Samples: 553798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:28:17,854][00349] Avg episode reward: [(0, '18.300')] +[2024-08-16 11:28:22,843][00349] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 2248704. Throughput: 0: 966.8. Samples: 560786. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-08-16 11:28:22,846][00349] Avg episode reward: [(0, '18.721')] +[2024-08-16 11:28:24,112][02935] Updated weights for policy 0, policy_version 550 (0.0023) +[2024-08-16 11:28:27,843][00349] Fps is (10 sec: 3688.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2260992. Throughput: 0: 946.0. Samples: 565748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:28:27,851][00349] Avg episode reward: [(0, '18.621')] +[2024-08-16 11:28:32,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2277376. Throughput: 0: 919.0. Samples: 567900. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-08-16 11:28:32,846][00349] Avg episode reward: [(0, '20.694')] +[2024-08-16 11:28:32,942][02922] Saving new best policy, reward=20.694! +[2024-08-16 11:28:35,589][02935] Updated weights for policy 0, policy_version 560 (0.0030) +[2024-08-16 11:28:37,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3823.2, 300 sec: 3721.1). Total num frames: 2301952. Throughput: 0: 939.1. Samples: 574460. Policy #0 lag: (min: 0.0, avg: 0.8, max: 1.0) +[2024-08-16 11:28:37,846][00349] Avg episode reward: [(0, '20.379')] +[2024-08-16 11:28:42,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 2322432. Throughput: 0: 967.8. Samples: 580622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:28:42,846][00349] Avg episode reward: [(0, '19.954')] +[2024-08-16 11:28:47,257][02935] Updated weights for policy 0, policy_version 570 (0.0021) +[2024-08-16 11:28:47,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2334720. Throughput: 0: 940.9. Samples: 582636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-08-16 11:28:47,848][00349] Avg episode reward: [(0, '19.564')] +[2024-08-16 11:28:52,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3707.3). Total num frames: 2355200. Throughput: 0: 917.5. Samples: 588216. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:28:52,850][00349] Avg episode reward: [(0, '19.738')] +[2024-08-16 11:28:56,555][02935] Updated weights for policy 0, policy_version 580 (0.0043) +[2024-08-16 11:28:57,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 2379776. Throughput: 0: 982.2. Samples: 595064. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-08-16 11:28:57,849][00349] Avg episode reward: [(0, '19.079')] +[2024-08-16 11:29:02,844][00349] Fps is (10 sec: 3685.9, 60 sec: 3686.3, 300 sec: 3721.1). Total num frames: 2392064. Throughput: 0: 966.0. Samples: 597266. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-08-16 11:29:02,847][00349] Avg episode reward: [(0, '19.481')] +[2024-08-16 11:29:07,843][00349] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2408448. Throughput: 0: 907.0. Samples: 601600. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-08-16 11:29:07,849][00349] Avg episode reward: [(0, '19.177')] +[2024-08-16 11:29:08,759][02935] Updated weights for policy 0, policy_version 590 (0.0026) +[2024-08-16 11:29:12,843][00349] Fps is (10 sec: 4096.5, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 2433024. Throughput: 0: 943.2. Samples: 608194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-08-16 11:29:12,845][00349] Avg episode reward: [(0, '17.335')] +[2024-08-16 11:29:17,844][00349] Fps is (10 sec: 4095.5, 60 sec: 3754.9, 300 sec: 3721.1). Total num frames: 2449408. Throughput: 0: 969.5. Samples: 611530. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-08-16 11:29:17,847][00349] Avg episode reward: [(0, '17.515')] +[2024-08-16 11:29:19,669][02935] Updated weights for policy 0, policy_version 600 (0.0020) +[2024-08-16 11:29:22,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 2465792. Throughput: 0: 919.6. Samples: 615844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:29:22,853][00349] Avg episode reward: [(0, '17.034')] +[2024-08-16 11:29:27,843][00349] Fps is (10 sec: 3686.9, 60 sec: 3754.7, 300 sec: 3707.3). Total num frames: 2486272. Throughput: 0: 918.8. Samples: 621966. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2024-08-16 11:29:27,846][00349] Avg episode reward: [(0, '17.255')] +[2024-08-16 11:29:29,681][02935] Updated weights for policy 0, policy_version 610 (0.0013) +[2024-08-16 11:29:32,843][00349] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 2510848. Throughput: 0: 948.6. Samples: 625322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2024-08-16 11:29:32,845][00349] Avg episode reward: [(0, '17.204')] +[2024-08-16 11:29:32,866][02922] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000613_2510848.pth... +[2024-08-16 11:29:33,033][02922] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000393_1609728.pth +[2024-08-16 11:29:37,849][00349] Fps is (10 sec: 3274.9, 60 sec: 3617.8, 300 sec: 3707.2). Total num frames: 2519040. Throughput: 0: 928.0. Samples: 629982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-08-16 11:29:37,851][00349] Avg episode reward: [(0, '17.373')] +[2024-08-16 11:29:42,847][00349] Fps is (10 sec: 2047.2, 60 sec: 3481.4, 300 sec: 3665.5). Total num frames: 2531328. Throughput: 0: 852.2. Samples: 633416. 
+[2024-08-16 11:29:37,849][00349] Fps is (10 sec: 3274.9, 60 sec: 3617.8, 300 sec: 3707.2). Total num frames: 2519040. Throughput: 0: 928.0. Samples: 629982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:29:37,851][00349] Avg episode reward: [(0, '17.373')]
+[2024-08-16 11:29:42,847][00349] Fps is (10 sec: 2047.2, 60 sec: 3481.4, 300 sec: 3665.5). Total num frames: 2531328. Throughput: 0: 852.2. Samples: 633416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:29:42,853][00349] Avg episode reward: [(0, '17.778')]
+[2024-08-16 11:29:44,717][02935] Updated weights for policy 0, policy_version 620 (0.0059)
+[2024-08-16 11:29:47,843][00349] Fps is (10 sec: 3278.7, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2551808. Throughput: 0: 853.7. Samples: 635680. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:29:47,849][00349] Avg episode reward: [(0, '16.815')]
+[2024-08-16 11:29:52,843][00349] Fps is (10 sec: 4097.7, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 2572288. Throughput: 0: 908.2. Samples: 642470. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:29:52,847][00349] Avg episode reward: [(0, '16.264')]
+[2024-08-16 11:29:53,970][02935] Updated weights for policy 0, policy_version 630 (0.0030)
+[2024-08-16 11:29:57,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3679.5). Total num frames: 2588672. Throughput: 0: 875.8. Samples: 647606. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2024-08-16 11:29:57,845][00349] Avg episode reward: [(0, '17.306')]
+[2024-08-16 11:30:02,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3679.5). Total num frames: 2605056. Throughput: 0: 848.2. Samples: 649700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:30:02,846][00349] Avg episode reward: [(0, '19.258')]
+[2024-08-16 11:30:06,125][02935] Updated weights for policy 0, policy_version 640 (0.0020)
+[2024-08-16 11:30:07,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2629632. Throughput: 0: 888.1. Samples: 655808. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:30:07,845][00349] Avg episode reward: [(0, '19.151')]
+[2024-08-16 11:30:12,843][00349] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3693.3). Total num frames: 2646016. Throughput: 0: 894.5. Samples: 662218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:30:12,845][00349] Avg episode reward: [(0, '20.303')]
+[2024-08-16 11:30:17,699][02935] Updated weights for policy 0, policy_version 650 (0.0021)
+[2024-08-16 11:30:17,843][00349] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3679.5). Total num frames: 2662400. Throughput: 0: 866.3. Samples: 664306. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2024-08-16 11:30:17,850][00349] Avg episode reward: [(0, '19.799')]
+[2024-08-16 11:30:22,843][00349] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3693.4). Total num frames: 2682880. Throughput: 0: 885.0. Samples: 669804. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2024-08-16 11:30:22,850][00349] Avg episode reward: [(0, '18.498')]
+[2024-08-16 11:30:26,625][02935] Updated weights for policy 0, policy_version 660 (0.0027)
+[2024-08-16 11:30:27,843][00349] Fps is (10 sec: 4505.4, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2707456. Throughput: 0: 961.9. Samples: 676698. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2024-08-16 11:30:27,845][00349] Avg episode reward: [(0, '17.893')]
+[2024-08-16 11:30:32,848][00349] Fps is (10 sec: 3684.6, 60 sec: 3481.3, 300 sec: 3679.4). Total num frames: 2719744. Throughput: 0: 972.9. Samples: 679466. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:30:32,856][00349] Avg episode reward: [(0, '18.035')]
+[2024-08-16 11:30:37,843][00349] Fps is (10 sec: 3276.9, 60 sec: 3686.8, 300 sec: 3679.5). Total num frames: 2740224. Throughput: 0: 914.0. Samples: 683602. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:30:37,851][00349] Avg episode reward: [(0, '19.220')]
+[2024-08-16 11:30:38,582][02935] Updated weights for policy 0, policy_version 670 (0.0029)
+[2024-08-16 11:30:42,843][00349] Fps is (10 sec: 4098.0, 60 sec: 3823.2, 300 sec: 3707.3). Total num frames: 2760704. Throughput: 0: 952.7. Samples: 690476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2024-08-16 11:30:42,845][00349] Avg episode reward: [(0, '20.882')]
+[2024-08-16 11:30:42,861][02922] Saving new best policy, reward=20.882!
+[2024-08-16 11:30:47,845][00349] Fps is (10 sec: 4095.0, 60 sec: 3822.8, 300 sec: 3707.2). Total num frames: 2781184. Throughput: 0: 982.3. Samples: 693904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2024-08-16 11:30:47,848][00349] Avg episode reward: [(0, '21.408')]
+[2024-08-16 11:30:47,854][02922] Saving new best policy, reward=21.408!
+[2024-08-16 11:30:48,700][02935] Updated weights for policy 0, policy_version 680 (0.0034)
+[2024-08-16 11:30:51,577][00349] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 349], exiting...
+[2024-08-16 11:30:51,585][00349] Runner profile tree view:
+main_loop: 793.7649
+[2024-08-16 11:30:51,587][00349] Collected {0: 2789376}, FPS: 3514.1
+[2024-08-16 11:30:51,608][02922] Stopping Batcher_0...
+[2024-08-16 11:30:51,609][02922] Loop batcher_evt_loop terminating...
+[2024-08-16 11:30:51,637][02937] EvtLoop [rollout_proc2_evt_loop, process=rollout_proc2] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance2'), args=(0, 0)
+Traceback (most recent call last):
+  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
+    slot_callable(*args)
+  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
+    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
+  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
+    new_obs, rewards, terminated, truncated, infos = e.step(actions)
+  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
+    return self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
+    obs, rew, terminated, truncated, info = self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
+    obs, rew, terminated, truncated, info = self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
+    observation, reward, terminated, truncated, info = self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
+    observation, reward, terminated, truncated, info = self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
+    obs, reward, terminated, truncated, info = self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
+    return self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
+    obs, reward, terminated, truncated, info = self.env.step(action)
+  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
+    reward = self.game.make_action(actions_flattened, self.skip_frames)
+vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
"/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. +[2024-08-16 11:30:51,706][02937] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc2_evt_loop +[2024-08-16 11:30:51,675][02940] EvtLoop [rollout_proc5_evt_loop, process=rollout_proc5] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance5'), args=(0, 0) +Traceback (most recent call last): + File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts + complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts + new_obs, rewards, terminated, truncated, infos = e.step(actions) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. +[2024-08-16 11:30:51,735][02940] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc5_evt_loop +[2024-08-16 11:30:51,713][02938] EvtLoop [rollout_proc1_evt_loop, process=rollout_proc1] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance1'), args=(1, 0) +Traceback (most recent call last): + File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts + complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts + new_obs, rewards, terminated, truncated, infos = e.step(actions) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. 
+[2024-08-16 11:30:51,678][02943] EvtLoop [rollout_proc7_evt_loop, process=rollout_proc7] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance7'), args=(0, 0) +Traceback (most recent call last): + File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts + complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts + new_obs, rewards, terminated, truncated, infos = e.step(actions) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. +[2024-08-16 11:30:51,771][02943] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc7_evt_loop +[2024-08-16 11:30:51,771][02938] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc1_evt_loop +[2024-08-16 11:30:51,827][02922] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000682_2793472.pth... 
+[2024-08-16 11:30:51,783][02939] EvtLoop [rollout_proc3_evt_loop, process=rollout_proc3] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance3'), args=(0, 0) +Traceback (most recent call last): + File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts + complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts + new_obs, rewards, terminated, truncated, infos = e.step(actions) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. +[2024-08-16 11:30:51,857][02939] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc3_evt_loop +[2024-08-16 11:30:51,957][02935] Weights refcount: 2 0 +[2024-08-16 11:30:51,963][02935] Stopping InferenceWorker_p0-w0... +[2024-08-16 11:30:51,964][02935] Loop inference_proc0-0_evt_loop terminating... 
+[2024-08-16 11:30:51,818][02942] EvtLoop [rollout_proc4_evt_loop, process=rollout_proc4] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance4'), args=(1, 0) +Traceback (most recent call last): + File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts + complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts + new_obs, rewards, terminated, truncated, infos = e.step(actions) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. +[2024-08-16 11:30:51,969][02942] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc4_evt_loop +[2024-08-16 11:30:51,976][02941] EvtLoop [rollout_proc6_evt_loop, process=rollout_proc6] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance6'), args=(1, 0) +Traceback (most recent call last): + File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts + complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts + new_obs, rewards, terminated, truncated, infos = e.step(actions) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step + return self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. +[2024-08-16 11:30:51,982][02941] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
+[2024-08-16 11:30:52,094][02936] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance0'), args=(1, 0)
+Traceback (most recent call last): [identical to the rollout_proc2 traceback above]
+[2024-08-16 11:30:52,177][02936] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc0_evt_loop
+[2024-08-16 11:30:52,318][02922] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000502_2056192.pth
+[2024-08-16 11:30:52,347][02922] Stopping LearnerWorker_p0...
+[2024-08-16 11:30:52,354][02922] Loop learner_proc0_evt_loop terminating...
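The eight tracebacks above are shutdown noise rather than a training failure: the Ctrl-C (SIGINT) reached every rollout process, and ViZDoom surfaces it as vizdoom.SignalException inside game.make_action(). The repeated `self.env.step(action)` frames come from each wrapper in the env chain forwarding step() to the env it wraps, so the one exception raised at the bottom unwinds through every layer. A minimal sketch of that wrapper pattern (illustrative, not the actual Sample Factory wrapper code):

```python
# Gymnasium-style wrapper that forwards step(), producing one traceback frame per layer
# when an exception propagates up from the underlying ViZDoom env. Names are hypothetical.
import gymnasium as gym

class RewardShapingWrapper(gym.Wrapper):
    def step(self, action):
        # forwarding call: this line is what appears repeatedly in the tracebacks above
        observation, reward, terminated, truncated, info = self.env.step(action)
        # ...a real wrapper would adjust `reward` here (e.g. scenario reward shaping)...
        return observation, reward, terminated, truncated, info
```

Note that the learner still saved checkpoint_000000682_2793472.pth during this shutdown, so the interrupted run's progress is preserved for the evaluation below.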
+[2024-08-16 11:31:04,733][00349] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2024-08-16 11:31:04,737][00349] Overriding arg 'num_workers' with value 1 passed from command line
+[2024-08-16 11:31:04,740][00349] Adding new argument 'no_render'=True that is not in the saved config file!
+[2024-08-16 11:31:04,743][00349] Adding new argument 'save_video'=True that is not in the saved config file!
+[2024-08-16 11:31:04,745][00349] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2024-08-16 11:31:04,748][00349] Adding new argument 'video_name'=None that is not in the saved config file!
+[2024-08-16 11:31:04,750][00349] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2024-08-16 11:31:04,751][00349] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2024-08-16 11:31:04,754][00349] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2024-08-16 11:31:04,755][00349] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2024-08-16 11:31:04,756][00349] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2024-08-16 11:31:04,757][00349] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2024-08-16 11:31:04,761][00349] Adding new argument 'train_script'=None that is not in the saved config file!
+[2024-08-16 11:31:04,762][00349] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2024-08-16 11:31:04,763][00349] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2024-08-16 11:31:04,817][00349] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-08-16 11:31:04,822][00349] RunningMeanStd input shape: (3, 72, 128)
+[2024-08-16 11:31:04,826][00349] RunningMeanStd input shape: (1,)
+[2024-08-16 11:31:04,856][00349] ConvEncoder: input_channels=3
+[2024-08-16 11:31:04,963][00349] Conv encoder output size: 512
+[2024-08-16 11:31:04,965][00349] Policy head output size: 512
+[2024-08-16 11:31:05,143][00349] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000682_2793472.pth...
+[2024-08-16 11:31:05,903][00349] Num frames 100...
+[2024-08-16 11:31:06,032][00349] Num frames 200...
+[2024-08-16 11:31:06,166][00349] Num frames 300...
+[2024-08-16 11:31:06,287][00349] Num frames 400...
+[2024-08-16 11:31:06,410][00349] Num frames 500...
+[2024-08-16 11:31:06,531][00349] Num frames 600...
+[2024-08-16 11:31:06,660][00349] Num frames 700...
+[2024-08-16 11:31:06,781][00349] Num frames 800...
+[2024-08-16 11:31:06,904][00349] Num frames 900...
+[2024-08-16 11:31:07,028][00349] Num frames 1000...
+[2024-08-16 11:31:07,161][00349] Num frames 1100...
+[2024-08-16 11:31:07,282][00349] Num frames 1200...
+[2024-08-16 11:31:07,401][00349] Num frames 1300...
+[2024-08-16 11:31:07,550][00349] Avg episode rewards: #0: 31.760, true rewards: #0: 13.760
+[2024-08-16 11:31:07,552][00349] Avg episode reward: 31.760, avg true_objective: 13.760
+[2024-08-16 11:31:07,586][00349] Num frames 1400...
+[2024-08-16 11:31:07,717][00349] Num frames 1500...
+[2024-08-16 11:31:07,833][00349] Num frames 1600...
+[2024-08-16 11:31:07,952][00349] Num frames 1700...
+[2024-08-16 11:31:08,071][00349] Num frames 1800...
+[2024-08-16 11:31:08,199][00349] Num frames 1900...
+[2024-08-16 11:31:08,335][00349] Num frames 2000...
+[2024-08-16 11:31:08,457][00349] Num frames 2100...
+[2024-08-16 11:31:08,578][00349] Num frames 2200...
+[2024-08-16 11:31:08,706][00349] Num frames 2300...
+[2024-08-16 11:31:08,826][00349] Num frames 2400...
+[2024-08-16 11:31:08,996][00349] Avg episode rewards: #0: 26.980, true rewards: #0: 12.480
+[2024-08-16 11:31:08,998][00349] Avg episode reward: 26.980, avg true_objective: 12.480
+[2024-08-16 11:31:09,007][00349] Num frames 2500...
+[2024-08-16 11:31:09,127][00349] Num frames 2600...
+[2024-08-16 11:31:09,253][00349] Num frames 2700...
+[2024-08-16 11:31:09,375][00349] Num frames 2800...
+[2024-08-16 11:31:09,495][00349] Num frames 2900...
+[2024-08-16 11:31:09,618][00349] Num frames 3000...
+[2024-08-16 11:31:09,744][00349] Num frames 3100...
+[2024-08-16 11:31:09,863][00349] Num frames 3200...
+[2024-08-16 11:31:09,995][00349] Num frames 3300...
+[2024-08-16 11:31:10,113][00349] Num frames 3400...
+[2024-08-16 11:31:10,239][00349] Num frames 3500...
+[2024-08-16 11:31:10,356][00349] Num frames 3600...
+[2024-08-16 11:31:10,432][00349] Avg episode rewards: #0: 25.387, true rewards: #0: 12.053
+[2024-08-16 11:31:10,434][00349] Avg episode reward: 25.387, avg true_objective: 12.053
+[2024-08-16 11:31:10,541][00349] Num frames 3700...
+[2024-08-16 11:31:10,676][00349] Num frames 3800...
+[2024-08-16 11:31:10,796][00349] Num frames 3900...
+[2024-08-16 11:31:10,917][00349] Num frames 4000...
+[2024-08-16 11:31:11,040][00349] Num frames 4100...
+[2024-08-16 11:31:11,169][00349] Num frames 4200...
+[2024-08-16 11:31:11,301][00349] Num frames 4300...
+[2024-08-16 11:31:11,382][00349] Avg episode rewards: #0: 22.550, true rewards: #0: 10.800
+[2024-08-16 11:31:11,384][00349] Avg episode reward: 22.550, avg true_objective: 10.800
+[2024-08-16 11:31:11,483][00349] Num frames 4400...
+[2024-08-16 11:31:11,612][00349] Num frames 4500...
+[2024-08-16 11:31:11,738][00349] Num frames 4600...
+[2024-08-16 11:31:11,859][00349] Num frames 4700...
+[2024-08-16 11:31:11,999][00349] Avg episode rewards: #0: 19.136, true rewards: #0: 9.536
+[2024-08-16 11:31:12,001][00349] Avg episode reward: 19.136, avg true_objective: 9.536
+[2024-08-16 11:31:12,043][00349] Num frames 4800...
+[2024-08-16 11:31:12,163][00349] Num frames 4900...
+[2024-08-16 11:31:12,291][00349] Num frames 5000...
+[2024-08-16 11:31:12,412][00349] Num frames 5100...
+[2024-08-16 11:31:12,533][00349] Num frames 5200...
+[2024-08-16 11:31:12,664][00349] Num frames 5300...
+[2024-08-16 11:31:12,783][00349] Num frames 5400...
+[2024-08-16 11:31:12,904][00349] Num frames 5500...
+[2024-08-16 11:31:13,033][00349] Num frames 5600...
+[2024-08-16 11:31:13,155][00349] Num frames 5700...
+[2024-08-16 11:31:13,283][00349] Num frames 5800...
+[2024-08-16 11:31:13,408][00349] Avg episode rewards: #0: 19.927, true rewards: #0: 9.760
+[2024-08-16 11:31:13,410][00349] Avg episode reward: 19.927, avg true_objective: 9.760
+[2024-08-16 11:31:13,466][00349] Num frames 5900...
+[2024-08-16 11:31:13,595][00349] Num frames 6000...
+[2024-08-16 11:31:13,725][00349] Num frames 6100...
+[2024-08-16 11:31:13,848][00349] Num frames 6200...
+[2024-08-16 11:31:13,969][00349] Num frames 6300...
+[2024-08-16 11:31:14,089][00349] Num frames 6400...
+[2024-08-16 11:31:14,211][00349] Num frames 6500...
+[2024-08-16 11:31:14,338][00349] Num frames 6600...
+[2024-08-16 11:31:14,495][00349] Avg episode rewards: #0: 19.840, true rewards: #0: 9.554
+[2024-08-16 11:31:14,498][00349] Avg episode reward: 19.840, avg true_objective: 9.554
+[2024-08-16 11:31:14,516][00349] Num frames 6700...
+[2024-08-16 11:31:14,642][00349] Num frames 6800...
+[2024-08-16 11:31:14,764][00349] Num frames 6900...
+[2024-08-16 11:31:14,895][00349] Num frames 7000...
+[2024-08-16 11:31:15,065][00349] Num frames 7100...
+[2024-08-16 11:31:15,232][00349] Num frames 7200...
+[2024-08-16 11:31:15,404][00349] Num frames 7300...
+[2024-08-16 11:31:15,566][00349] Num frames 7400...
+[2024-08-16 11:31:15,746][00349] Num frames 7500...
+[2024-08-16 11:31:15,888][00349] Avg episode rewards: #0: 19.190, true rewards: #0: 9.440
+[2024-08-16 11:31:15,890][00349] Avg episode reward: 19.190, avg true_objective: 9.440
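The paired "Avg episode rewards" lines printed after each episode are running means over all episodes evaluated so far: the shaped reward alongside the raw scenario return ("true_objective"), so after the first episode both averages equal that episode's values (31.760 and 13.760 above). A sketch of the bookkeeping, with assumed variable and function names:

```python
# Running per-episode averages, as in the evaluation output above. Illustrative only.
episode_rewards = []  # shaped reward the policy was trained on
true_rewards = []     # raw scenario objective ("true_objective")

def report_episode(reward, true_reward):
    episode_rewards.append(reward)
    true_rewards.append(true_reward)
    avg = sum(episode_rewards) / len(episode_rewards)
    avg_true = sum(true_rewards) / len(true_rewards)
    print(f"Avg episode rewards: #0: {avg:.3f}, true rewards: #0: {avg_true:.3f}")
```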
+[2024-08-16 11:31:15,970][00349] Num frames 7600...
+[2024-08-16 11:31:16,135][00349] Num frames 7700...
+[2024-08-16 11:31:16,303][00349] Num frames 7800...
+[2024-08-16 11:31:16,480][00349] Num frames 7900...
+[2024-08-16 11:31:16,664][00349] Num frames 8000...
+[2024-08-16 11:31:16,843][00349] Num frames 8100...
+[2024-08-16 11:31:17,017][00349] Num frames 8200...
+[2024-08-16 11:31:17,196][00349] Num frames 8300...
+[2024-08-16 11:31:17,374][00349] Num frames 8400...
+[2024-08-16 11:31:17,524][00349] Num frames 8500...
+[2024-08-16 11:31:17,653][00349] Num frames 8600...
+[2024-08-16 11:31:17,776][00349] Num frames 8700...
+[2024-08-16 11:31:17,897][00349] Num frames 8800...
+[2024-08-16 11:31:18,032][00349] Avg episode rewards: #0: 19.960, true rewards: #0: 9.849
+[2024-08-16 11:31:18,034][00349] Avg episode reward: 19.960, avg true_objective: 9.849
+[2024-08-16 11:31:18,080][00349] Num frames 8900...
+[2024-08-16 11:31:18,204][00349] Num frames 9000...
+[2024-08-16 11:31:18,326][00349] Num frames 9100...
+[2024-08-16 11:31:18,447][00349] Num frames 9200...
+[2024-08-16 11:31:18,580][00349] Num frames 9300...
+[2024-08-16 11:31:18,709][00349] Num frames 9400...
+[2024-08-16 11:31:18,829][00349] Num frames 9500...
+[2024-08-16 11:31:18,961][00349] Num frames 9600...
+[2024-08-16 11:31:19,081][00349] Num frames 9700...
+[2024-08-16 11:31:19,206][00349] Num frames 9800...
+[2024-08-16 11:31:19,326][00349] Num frames 9900...
+[2024-08-16 11:31:19,448][00349] Num frames 10000...
+[2024-08-16 11:31:19,579][00349] Num frames 10100...
+[2024-08-16 11:31:19,711][00349] Num frames 10200...
+[2024-08-16 11:31:19,835][00349] Num frames 10300...
+[2024-08-16 11:31:19,914][00349] Avg episode rewards: #0: 20.920, true rewards: #0: 10.320
+[2024-08-16 11:31:19,917][00349] Avg episode reward: 20.920, avg true_objective: 10.320
+[2024-08-16 11:32:24,255][00349] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2024-08-16 11:32:51,556][00349] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2024-08-16 11:32:51,558][00349] Overriding arg 'num_workers' with value 1 passed from command line
+[2024-08-16 11:32:51,559][00349] Adding new argument 'no_render'=True that is not in the saved config file!
+[2024-08-16 11:32:51,561][00349] Adding new argument 'save_video'=True that is not in the saved config file!
+[2024-08-16 11:32:51,562][00349] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2024-08-16 11:32:51,565][00349] Adding new argument 'video_name'=None that is not in the saved config file!
+[2024-08-16 11:32:51,566][00349] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2024-08-16 11:32:51,568][00349] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2024-08-16 11:32:51,570][00349] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2024-08-16 11:32:51,571][00349] Adding new argument 'hf_repository'='mashaal24/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2024-08-16 11:32:51,573][00349] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2024-08-16 11:32:51,576][00349] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2024-08-16 11:32:51,578][00349] Adding new argument 'train_script'=None that is not in the saved config file!
+[2024-08-16 11:32:51,582][00349] Adding new argument 'enjoy_script'=None that is not in the saved config file!
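This second evaluation run differs from the first only in the overrides push_to_hub=True, hf_repository=..., and max_num_frames=100000, i.e. it re-evaluates the same checkpoint and uploads the result to the Hugging Face Hub. The overrides correspond to an invocation along these lines; `parse_vizdoom_cfg` is the helper from the Hugging Face Deep RL course notebook this log appears to come from, so treat that name as an assumption:

```python
# Evaluate the trained policy and push it (with replay video) to the Hub.
# `enjoy` is Sample Factory's evaluation entry point; `parse_vizdoom_cfg` is assumed
# to be the course notebook's config helper wrapping the arguments seen in the log.
from sample_factory.enjoy import enjoy

env = "doom_health_gathering_supreme"
cfg = parse_vizdoom_cfg(
    argv=[
        f"--env={env}",
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_episodes=10",
        "--max_num_frames=100000",
        "--push_to_hub",
        "--hf_repository=mashaal24/rl_course_vizdoom_health_gathering_supreme",
    ],
    evaluation=True,
)
status = enjoy(cfg)
```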
+[2024-08-16 11:32:51,584][00349] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2024-08-16 11:32:51,613][00349] RunningMeanStd input shape: (3, 72, 128)
+[2024-08-16 11:32:51,616][00349] RunningMeanStd input shape: (1,)
+[2024-08-16 11:32:51,633][00349] ConvEncoder: input_channels=3
+[2024-08-16 11:32:51,678][00349] Conv encoder output size: 512
+[2024-08-16 11:32:51,680][00349] Policy head output size: 512
+[2024-08-16 11:32:51,702][00349] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000682_2793472.pth...
+[2024-08-16 11:32:52,148][00349] Num frames 100...
+[2024-08-16 11:32:52,269][00349] Num frames 200...
+[2024-08-16 11:32:52,390][00349] Num frames 300...
+[2024-08-16 11:32:52,518][00349] Num frames 400...
+[2024-08-16 11:32:52,642][00349] Num frames 500...
+[2024-08-16 11:32:52,758][00349] Num frames 600...
+[2024-08-16 11:32:52,879][00349] Num frames 700...
+[2024-08-16 11:32:52,997][00349] Num frames 800...
+[2024-08-16 11:32:53,117][00349] Num frames 900...
+[2024-08-16 11:32:53,236][00349] Num frames 1000...
+[2024-08-16 11:32:53,357][00349] Num frames 1100...
+[2024-08-16 11:32:53,483][00349] Num frames 1200...
+[2024-08-16 11:32:53,601][00349] Avg episode rewards: #0: 26.480, true rewards: #0: 12.480
+[2024-08-16 11:32:53,603][00349] Avg episode reward: 26.480, avg true_objective: 12.480
+[2024-08-16 11:32:53,676][00349] Num frames 1300...
+[2024-08-16 11:32:53,805][00349] Num frames 1400...
+[2024-08-16 11:32:53,925][00349] Num frames 1500...
+[2024-08-16 11:32:54,051][00349] Num frames 1600...
+[2024-08-16 11:32:54,104][00349] Avg episode rewards: #0: 15.000, true rewards: #0: 8.000
+[2024-08-16 11:32:54,105][00349] Avg episode reward: 15.000, avg true_objective: 8.000
+[2024-08-16 11:32:54,224][00349] Num frames 1700...
+[2024-08-16 11:32:54,346][00349] Num frames 1800...
+[2024-08-16 11:32:54,467][00349] Num frames 1900...
+[2024-08-16 11:32:54,598][00349] Num frames 2000...
+[2024-08-16 11:32:54,729][00349] Num frames 2100...
+[2024-08-16 11:32:54,849][00349] Num frames 2200...
+[2024-08-16 11:32:54,971][00349] Num frames 2300...
+[2024-08-16 11:32:55,094][00349] Num frames 2400...
+[2024-08-16 11:32:55,213][00349] Num frames 2500...
+[2024-08-16 11:32:55,337][00349] Num frames 2600...
+[2024-08-16 11:32:55,461][00349] Num frames 2700...
+[2024-08-16 11:32:55,608][00349] Num frames 2800...
+[2024-08-16 11:32:55,735][00349] Num frames 2900...
+[2024-08-16 11:32:55,858][00349] Num frames 3000...
+[2024-08-16 11:32:55,982][00349] Num frames 3100...
+[2024-08-16 11:32:56,102][00349] Num frames 3200...
+[2024-08-16 11:32:56,222][00349] Num frames 3300...
+[2024-08-16 11:32:56,341][00349] Num frames 3400...
+[2024-08-16 11:32:56,470][00349] Num frames 3500...
+[2024-08-16 11:32:56,600][00349] Num frames 3600...
+[2024-08-16 11:32:56,733][00349] Num frames 3700...
+[2024-08-16 11:32:56,787][00349] Avg episode rewards: #0: 27.000, true rewards: #0: 12.333
+[2024-08-16 11:32:56,789][00349] Avg episode reward: 27.000, avg true_objective: 12.333
+[2024-08-16 11:32:56,909][00349] Num frames 3800...
+[2024-08-16 11:32:57,031][00349] Num frames 3900...
+[2024-08-16 11:32:57,155][00349] Num frames 4000...
+[2024-08-16 11:32:57,275][00349] Num frames 4100...
+[2024-08-16 11:32:57,425][00349] Num frames 4200...
+[2024-08-16 11:32:57,626][00349] Num frames 4300...
+[2024-08-16 11:32:57,801][00349] Num frames 4400...
+[2024-08-16 11:32:57,972][00349] Num frames 4500...
+[2024-08-16 11:32:58,141][00349] Num frames 4600...
+[2024-08-16 11:32:58,320][00349] Num frames 4700...
+[2024-08-16 11:32:58,481][00349] Num frames 4800...
+[2024-08-16 11:32:58,658][00349] Num frames 4900...
+[2024-08-16 11:32:58,839][00349] Num frames 5000...
+[2024-08-16 11:32:59,007][00349] Num frames 5100...
+[2024-08-16 11:32:59,182][00349] Num frames 5200...
+[2024-08-16 11:32:59,355][00349] Num frames 5300...
+[2024-08-16 11:32:59,538][00349] Num frames 5400...
+[2024-08-16 11:32:59,722][00349] Num frames 5500...
+[2024-08-16 11:32:59,911][00349] Num frames 5600...
+[2024-08-16 11:33:00,066][00349] Num frames 5700...
+[2024-08-16 11:33:00,203][00349] Avg episode rewards: #0: 33.415, true rewards: #0: 14.415
+[2024-08-16 11:33:00,204][00349] Avg episode reward: 33.415, avg true_objective: 14.415
+[2024-08-16 11:33:00,247][00349] Num frames 5800...
+[2024-08-16 11:33:00,369][00349] Num frames 5900...
+[2024-08-16 11:33:00,491][00349] Num frames 6000...
+[2024-08-16 11:33:00,613][00349] Num frames 6100...
+[2024-08-16 11:33:00,738][00349] Num frames 6200...
+[2024-08-16 11:33:00,868][00349] Num frames 6300...
+[2024-08-16 11:33:00,988][00349] Num frames 6400...
+[2024-08-16 11:33:01,114][00349] Num frames 6500...
+[2024-08-16 11:33:01,234][00349] Num frames 6600...
+[2024-08-16 11:33:01,358][00349] Num frames 6700...
+[2024-08-16 11:33:01,493][00349] Num frames 6800...
+[2024-08-16 11:33:01,623][00349] Num frames 6900...
+[2024-08-16 11:33:01,751][00349] Num frames 7000...
+[2024-08-16 11:33:01,918][00349] Avg episode rewards: #0: 33.156, true rewards: #0: 14.156
+[2024-08-16 11:33:01,920][00349] Avg episode reward: 33.156, avg true_objective: 14.156
+[2024-08-16 11:33:01,950][00349] Num frames 7100...
+[2024-08-16 11:33:02,070][00349] Num frames 7200...
+[2024-08-16 11:33:02,197][00349] Num frames 7300...
+[2024-08-16 11:33:02,315][00349] Num frames 7400...
+[2024-08-16 11:33:02,435][00349] Num frames 7500...
+[2024-08-16 11:33:02,555][00349] Num frames 7600...
+[2024-08-16 11:33:02,683][00349] Num frames 7700...
+[2024-08-16 11:33:02,807][00349] Num frames 7800...
+[2024-08-16 11:33:02,936][00349] Num frames 7900...
+[2024-08-16 11:33:03,058][00349] Num frames 8000...
+[2024-08-16 11:33:03,136][00349] Avg episode rewards: #0: 31.361, true rewards: #0: 13.362
+[2024-08-16 11:33:03,137][00349] Avg episode reward: 31.361, avg true_objective: 13.362
+[2024-08-16 11:33:03,236][00349] Num frames 8100...
+[2024-08-16 11:33:03,355][00349] Num frames 8200...
+[2024-08-16 11:33:03,476][00349] Num frames 8300...
+[2024-08-16 11:33:03,601][00349] Num frames 8400...
+[2024-08-16 11:33:03,730][00349] Num frames 8500...
+[2024-08-16 11:33:03,858][00349] Num frames 8600...
+[2024-08-16 11:33:04,006][00349] Avg episode rewards: #0: 28.678, true rewards: #0: 12.393
+[2024-08-16 11:33:04,010][00349] Avg episode reward: 28.678, avg true_objective: 12.393
+[2024-08-16 11:33:04,042][00349] Num frames 8700...
+[2024-08-16 11:33:04,164][00349] Num frames 8800...
+[2024-08-16 11:33:04,286][00349] Num frames 8900...
+[2024-08-16 11:33:04,407][00349] Num frames 9000...
+[2024-08-16 11:33:04,535][00349] Num frames 9100...
+[2024-08-16 11:33:04,666][00349] Num frames 9200...
+[2024-08-16 11:33:04,784][00349] Num frames 9300...
+[2024-08-16 11:33:04,916][00349] Num frames 9400...
+[2024-08-16 11:33:05,036][00349] Num frames 9500...
+[2024-08-16 11:33:05,167][00349] Num frames 9600...
+[2024-08-16 11:33:05,308][00349] Num frames 9700...
+[2024-08-16 11:33:05,428][00349] Num frames 9800...
+[2024-08-16 11:33:05,552][00349] Num frames 9900...
+[2024-08-16 11:33:05,683][00349] Num frames 10000...
+[2024-08-16 11:33:05,808][00349] Num frames 10100...
+[2024-08-16 11:33:05,941][00349] Num frames 10200...
+[2024-08-16 11:33:06,061][00349] Num frames 10300...
+[2024-08-16 11:33:06,190][00349] Num frames 10400...
+[2024-08-16 11:33:06,315][00349] Num frames 10500...
+[2024-08-16 11:33:06,439][00349] Num frames 10600...
+[2024-08-16 11:33:06,575][00349] Num frames 10700...
+[2024-08-16 11:33:06,733][00349] Avg episode rewards: #0: 31.968, true rewards: #0: 13.469
+[2024-08-16 11:33:06,734][00349] Avg episode reward: 31.968, avg true_objective: 13.469
+[2024-08-16 11:33:06,767][00349] Num frames 10800...
+[2024-08-16 11:33:06,889][00349] Num frames 10900...
+[2024-08-16 11:33:07,017][00349] Num frames 11000...
+[2024-08-16 11:33:07,142][00349] Num frames 11100...
+[2024-08-16 11:33:07,264][00349] Num frames 11200...
+[2024-08-16 11:33:07,384][00349] Num frames 11300...
+[2024-08-16 11:33:07,502][00349] Avg episode rewards: #0: 29.501, true rewards: #0: 12.612
+[2024-08-16 11:33:07,504][00349] Avg episode reward: 29.501, avg true_objective: 12.612
+[2024-08-16 11:33:07,568][00349] Num frames 11400...
+[2024-08-16 11:33:07,703][00349] Num frames 11500...
+[2024-08-16 11:33:07,825][00349] Num frames 11600...
+[2024-08-16 11:33:07,956][00349] Num frames 11700...
+[2024-08-16 11:33:08,077][00349] Num frames 11800...
+[2024-08-16 11:33:08,200][00349] Num frames 11900...
+[2024-08-16 11:33:08,320][00349] Num frames 12000...
+[2024-08-16 11:33:08,404][00349] Avg episode rewards: #0: 27.723, true rewards: #0: 12.023
+[2024-08-16 11:33:08,406][00349] Avg episode reward: 27.723, avg true_objective: 12.023
+[2024-08-16 11:34:24,083][00349] Replay video saved to /content/train_dir/default_experiment/replay.mp4!