[2023-02-22 19:57:11,336][01716] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-22 19:57:11,340][01716] Rollout worker 0 uses device cpu
[2023-02-22 19:57:11,342][01716] Rollout worker 1 uses device cpu
[2023-02-22 19:57:11,345][01716] Rollout worker 2 uses device cpu
[2023-02-22 19:57:11,347][01716] Rollout worker 3 uses device cpu
[2023-02-22 19:57:11,350][01716] Rollout worker 4 uses device cpu
[2023-02-22 19:57:11,351][01716] Rollout worker 5 uses device cpu
[2023-02-22 19:57:11,352][01716] Rollout worker 6 uses device cpu
[2023-02-22 19:57:11,353][01716] Rollout worker 7 uses device cpu
[2023-02-22 19:57:11,526][01716] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-22 19:57:11,529][01716] InferenceWorker_p0-w0: min num requests: 2
[2023-02-22 19:57:11,560][01716] Starting all processes...
[2023-02-22 19:57:11,561][01716] Starting process learner_proc0
[2023-02-22 19:57:11,614][01716] Starting all processes...
[2023-02-22 19:57:11,622][01716] Starting process inference_proc0-0
[2023-02-22 19:57:11,622][01716] Starting process rollout_proc0
[2023-02-22 19:57:11,624][01716] Starting process rollout_proc1
[2023-02-22 19:57:11,624][01716] Starting process rollout_proc2
[2023-02-22 19:57:11,624][01716] Starting process rollout_proc3
[2023-02-22 19:57:11,624][01716] Starting process rollout_proc4
[2023-02-22 19:57:11,624][01716] Starting process rollout_proc5
[2023-02-22 19:57:11,625][01716] Starting process rollout_proc6
[2023-02-22 19:57:11,625][01716] Starting process rollout_proc7
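For context: the startup sequence above (one learner process, one inference worker, eight rollout workers) is what Sample Factory's APPO trainer spawns for this configuration. A minimal launch sketch follows; the entry point and the environment name are assumptions (the log never names the env), while the worker count, train_dir, and experiment name are read off the log.

    # Minimal launch sketch (assumptions flagged inline). Requires:
    #   pip install sample-factory vizdoom
    from sf_examples.vizdoom.train_vizdoom import main  # assumed entry point in sample-factory's examples

    if __name__ == "__main__":
        # CLI equivalent:
        #   python -m sf_examples.vizdoom.train_vizdoom \
        #     --env=doom_health_gathering_supreme \  # assumed; the env is not named in this log
        #     --num_workers=8 \                      # matches "Rollout worker 0..7"
        #     --train_dir=/content/train_dir \
        #     --experiment=default_experiment
        main()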
[2023-02-22 19:57:22,769][12913] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-22 19:57:22,769][12913] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-22 19:57:23,030][12927] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-22 19:57:23,033][12927] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-22 19:57:23,052][12930] Worker 2 uses CPU cores [0]
[2023-02-22 19:57:23,082][12934] Worker 6 uses CPU cores [0]
[2023-02-22 19:57:23,085][12933] Worker 5 uses CPU cores [1]
[2023-02-22 19:57:23,097][12931] Worker 3 uses CPU cores [1]
[2023-02-22 19:57:23,106][12928] Worker 0 uses CPU cores [0]
[2023-02-22 19:57:23,126][12929] Worker 1 uses CPU cores [1]
[2023-02-22 19:57:23,236][12932] Worker 4 uses CPU cores [0]
[2023-02-22 19:57:23,277][12935] Worker 7 uses CPU cores [1]
[2023-02-22 19:57:23,587][12913] Num visible devices: 1
[2023-02-22 19:57:23,587][12927] Num visible devices: 1
[2023-02-22 19:57:23,598][12913] Starting seed is not provided
[2023-02-22 19:57:23,599][12913] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-22 19:57:23,599][12913] Initializing actor-critic model on device cuda:0
[2023-02-22 19:57:23,599][12913] RunningMeanStd input shape: (3, 72, 128)
[2023-02-22 19:57:23,601][12913] RunningMeanStd input shape: (1,)
[2023-02-22 19:57:23,614][12913] ConvEncoder: input_channels=3
[2023-02-22 19:57:23,856][12913] Conv encoder output size: 512
[2023-02-22 19:57:23,856][12913] Policy head output size: 512
[2023-02-22 19:57:23,896][12913] Created Actor Critic model with architecture:
[2023-02-22 19:57:23,896][12913] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
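The dump above fixes every shape except the conv filters. A small PyTorch sketch that reproduces the arithmetic (the filter sizes are an assumption, Sample Factory's default convnet_simple stack; everything else is read off the log):

    import torch
    import torch.nn as nn

    # Conv stack is an assumption (convnet_simple: 8x8/4, 4x4/2, 3x3/2);
    # input shape, ELUs, 512-dim outputs, GRU core, and heads come from the log.
    encoder = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),   # (3, 72, 128) -> (32, 17, 31)
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),  # -> (64, 7, 14)
        nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(), # -> (128, 3, 6)
        nn.Flatten(),
        nn.Linear(128 * 3 * 6, 512), nn.ELU(),                 # "Conv encoder output size: 512"
    )
    core = nn.GRU(512, 512)            # (core): GRU(512, 512)
    critic = nn.Linear(512, 1)         # critic_linear
    actor = nn.Linear(512, 5)          # distribution_linear: 5 discrete actions

    obs = torch.zeros(1, 3, 72, 128)   # RunningMeanStd input shape: (3, 72, 128)
    assert encoder(obs).shape == (1, 512)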
[2023-02-22 19:57:30,005][12913] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-02-22 19:57:30,007][12913] No checkpoints found
[2023-02-22 19:57:30,007][12913] Did not load from checkpoint, starting from scratch!
[2023-02-22 19:57:30,008][12913] Initialized policy 0 weights for model version 0
[2023-02-22 19:57:30,012][12913] LearnerWorker_p0 finished initialization!
[2023-02-22 19:57:30,014][12913] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-22 19:57:30,215][12927] RunningMeanStd input shape: (3, 72, 128)
[2023-02-22 19:57:30,216][12927] RunningMeanStd input shape: (1,)
[2023-02-22 19:57:30,228][12927] ConvEncoder: input_channels=3
[2023-02-22 19:57:30,322][12927] Conv encoder output size: 512
[2023-02-22 19:57:30,323][12927] Policy head output size: 512
[2023-02-22 19:57:31,518][01716] Heartbeat connected on Batcher_0
[2023-02-22 19:57:31,521][01716] Heartbeat connected on LearnerWorker_p0
[2023-02-22 19:57:31,537][01716] Heartbeat connected on RolloutWorker_w0
[2023-02-22 19:57:31,547][01716] Heartbeat connected on RolloutWorker_w2
[2023-02-22 19:57:31,549][01716] Heartbeat connected on RolloutWorker_w3
[2023-02-22 19:57:31,557][01716] Heartbeat connected on RolloutWorker_w1
[2023-02-22 19:57:31,559][01716] Heartbeat connected on RolloutWorker_w6
[2023-02-22 19:57:31,560][01716] Heartbeat connected on RolloutWorker_w4
[2023-02-22 19:57:31,566][01716] Heartbeat connected on RolloutWorker_w7
[2023-02-22 19:57:31,567][01716] Heartbeat connected on RolloutWorker_w5
[2023-02-22 19:57:31,857][01716] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
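A note on reading these recurring status lines: the three FPS numbers are trailing averages over 10/60/300-second windows (nan until enough samples exist), "Total num frames" counts environment frames, and "Policy #0 lag" is the spread between the learner's newest policy version and the versions that produced the queued experience (-1.0 before any rollouts arrive). A sketch of the windowed average, under those assumptions:

    import time
    from collections import deque

    # Trailing-window FPS as I read the log lines (an assumption about the
    # exact computation, not Sample Factory's code).
    class WindowedFps:
        def __init__(self, window_sec):
            self.window_sec = window_sec
            self.samples = deque()  # (timestamp, total_env_frames)

        def update(self, total_frames):
            now = time.time()
            self.samples.append((now, total_frames))
            while now - self.samples[0][0] > self.window_sec:
                self.samples.popleft()

        def fps(self):
            if len(self.samples) < 2:
                return float("nan")  # matches the first report above
            (t0, f0), (t1, f1) = self.samples[0], self.samples[-1]
            return (f1 - f0) / (t1 - t0)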
[2023-02-22 19:57:33,433][01716] Inference worker 0-0 is ready!
[2023-02-22 19:57:33,438][01716] All inference workers are ready! Signal rollout workers to start!
[2023-02-22 19:57:33,442][01716] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-22 19:57:33,574][12932] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:33,584][12930] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:33,589][12935] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:33,591][12933] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:33,621][12929] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:33,708][12931] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:33,770][12934] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:33,780][12928] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 19:57:35,132][12932] Decorrelating experience for 0 frames...
[2023-02-22 19:57:35,153][12934] Decorrelating experience for 0 frames...
[2023-02-22 19:57:35,206][12933] Decorrelating experience for 0 frames...
[2023-02-22 19:57:35,211][12935] Decorrelating experience for 0 frames...
[2023-02-22 19:57:35,221][12929] Decorrelating experience for 0 frames...
[2023-02-22 19:57:35,280][12931] Decorrelating experience for 0 frames...
[2023-02-22 19:57:36,015][12929] Decorrelating experience for 32 frames...
[2023-02-22 19:57:36,071][12935] Decorrelating experience for 32 frames...
[2023-02-22 19:57:36,248][12934] Decorrelating experience for 32 frames...
[2023-02-22 19:57:36,290][12932] Decorrelating experience for 32 frames...
[2023-02-22 19:57:36,341][12928] Decorrelating experience for 0 frames...
[2023-02-22 19:57:36,857][01716] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-22 19:57:37,008][12931] Decorrelating experience for 32 frames...
[2023-02-22 19:57:37,098][12930] Decorrelating experience for 0 frames...
[2023-02-22 19:57:37,129][12934] Decorrelating experience for 64 frames...
[2023-02-22 19:57:37,288][12935] Decorrelating experience for 64 frames...
[2023-02-22 19:57:37,414][12933] Decorrelating experience for 32 frames...
[2023-02-22 19:57:37,846][12930] Decorrelating experience for 32 frames...
[2023-02-22 19:57:37,892][12932] Decorrelating experience for 64 frames...
[2023-02-22 19:57:37,902][12929] Decorrelating experience for 64 frames...
[2023-02-22 19:57:38,125][12933] Decorrelating experience for 64 frames...
[2023-02-22 19:57:38,681][12934] Decorrelating experience for 96 frames...
[2023-02-22 19:57:38,687][12928] Decorrelating experience for 32 frames...
[2023-02-22 19:57:38,996][12932] Decorrelating experience for 96 frames...
[2023-02-22 19:57:39,453][12933] Decorrelating experience for 96 frames...
[2023-02-22 19:57:39,477][12929] Decorrelating experience for 96 frames...
[2023-02-22 19:57:39,743][12928] Decorrelating experience for 64 frames...
[2023-02-22 19:57:40,006][12931] Decorrelating experience for 64 frames...
[2023-02-22 19:57:40,640][12930] Decorrelating experience for 64 frames...
[2023-02-22 19:57:40,780][12935] Decorrelating experience for 96 frames...
[2023-02-22 19:57:40,813][12931] Decorrelating experience for 96 frames...
[2023-02-22 19:57:40,958][12928] Decorrelating experience for 96 frames...
[2023-02-22 19:57:41,327][12930] Decorrelating experience for 96 frames...
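"Decorrelating experience for N frames" is each rollout worker warming up its environments by a staggered number of frames (0/32/64/96 here) before real collection, so the eight workers run out of phase instead of all hitting episode boundaries together. A rough sketch of the idea (an assumption about intent, not Sample Factory's actual code; Gymnasium-style step API assumed):

    # Hypothetical warm-up: advance the env by a worker-specific offset with
    # random actions so rollouts across workers start out of phase.
    def decorrelate(env, num_frames):
        env.reset()
        for _ in range(num_frames):
            _, _, terminated, truncated, _ = env.step(env.action_space.sample())
            if terminated or truncated:
                env.reset()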
[2023-02-22 19:57:41,857][01716] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-22 19:57:45,150][12913] Signal inference workers to stop experience collection...
[2023-02-22 19:57:45,167][12927] InferenceWorker_p0-w0: stopping experience collection
[2023-02-22 19:57:46,857][01716] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 166.4. Samples: 2496. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-22 19:57:46,863][01716] Avg episode reward: [(0, '2.010')]
[2023-02-22 19:57:47,606][12913] Signal inference workers to resume experience collection...
[2023-02-22 19:57:47,607][12927] InferenceWorker_p0-w0: resuming experience collection
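The stop/resume pair above looks like backpressure: the learner pauses collection while it digests the first batch, then resumes once it has caught up, which bounds queued experience and policy lag. A sketch of that control loop (my reading of the log; the method names are hypothetical):

    # Hypothetical throttle: pause collection when too many un-trained
    # batches are queued, resume once the learner catches up.
    def maybe_throttle(learner, inference_workers, max_queued=2):
        if learner.queued_batches() >= max_queued:
            for w in inference_workers:
                w.stop_experience_collection()
        else:
            for w in inference_workers:
                w.resume_experience_collection()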
[2023-02-22 19:57:51,857][01716] Fps is (10 sec: 1228.8, 60 sec: 614.4, 300 sec: 614.4). Total num frames: 12288. Throughput: 0: 221.2. Samples: 4424. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
[2023-02-22 19:57:51,864][01716] Avg episode reward: [(0, '3.018')]
[2023-02-22 19:57:56,857][01716] Fps is (10 sec: 3277.0, 60 sec: 1310.7, 300 sec: 1310.7). Total num frames: 32768. Throughput: 0: 278.3. Samples: 6958. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 19:57:56,864][01716] Avg episode reward: [(0, '4.035')]
[2023-02-22 19:57:57,816][12927] Updated weights for policy 0, policy_version 10 (0.0019)
[2023-02-22 19:58:01,857][01716] Fps is (10 sec: 4505.6, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 57344. Throughput: 0: 463.4. Samples: 13902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:58:01,865][01716] Avg episode reward: [(0, '4.592')]
[2023-02-22 19:58:06,859][01716] Fps is (10 sec: 4095.0, 60 sec: 2106.4, 300 sec: 2106.4). Total num frames: 73728. Throughput: 0: 562.0. Samples: 19670. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 19:58:06,862][01716] Avg episode reward: [(0, '4.382')]
[2023-02-22 19:58:08,396][12927] Updated weights for policy 0, policy_version 20 (0.0019)
[2023-02-22 19:58:11,857][01716] Fps is (10 sec: 3276.7, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 90112. Throughput: 0: 546.2. Samples: 21848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:58:11,861][01716] Avg episode reward: [(0, '4.256')]
[2023-02-22 19:58:16,857][01716] Fps is (10 sec: 3687.3, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 110592. Throughput: 0: 602.0. Samples: 27088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 19:58:16,863][01716] Avg episode reward: [(0, '4.239')]
[2023-02-22 19:58:16,874][12913] Saving new best policy, reward=4.239!
[2023-02-22 19:58:19,374][12927] Updated weights for policy 0, policy_version 30 (0.0026)
[2023-02-22 19:58:21,857][01716] Fps is (10 sec: 4096.2, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 131072. Throughput: 0: 742.4. Samples: 33406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 19:58:21,863][01716] Avg episode reward: [(0, '4.366')]
[2023-02-22 19:58:21,866][12913] Saving new best policy, reward=4.366!
[2023-02-22 19:58:26,861][01716] Fps is (10 sec: 3684.8, 60 sec: 2680.8, 300 sec: 2680.8). Total num frames: 147456. Throughput: 0: 807.0. Samples: 36318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 19:58:26,864][01716] Avg episode reward: [(0, '4.363')]
[2023-02-22 19:58:31,850][12927] Updated weights for policy 0, policy_version 40 (0.0017)
[2023-02-22 19:58:31,857][01716] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 163840. Throughput: 0: 846.6. Samples: 40594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 19:58:31,859][01716] Avg episode reward: [(0, '4.370')]
[2023-02-22 19:58:31,870][12913] Saving new best policy, reward=4.370!
[2023-02-22 19:58:36,857][01716] Fps is (10 sec: 3278.2, 60 sec: 3003.7, 300 sec: 2772.7). Total num frames: 180224. Throughput: 0: 926.3. Samples: 46106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 19:58:36,860][01716] Avg episode reward: [(0, '4.387')]
[2023-02-22 19:58:36,921][12913] Saving new best policy, reward=4.387!
[2023-02-22 19:58:41,314][12927] Updated weights for policy 0, policy_version 50 (0.0028)
[2023-02-22 19:58:41,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 2925.7). Total num frames: 204800. Throughput: 0: 947.1. Samples: 49576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 19:58:41,862][01716] Avg episode reward: [(0, '4.614')]
[2023-02-22 19:58:41,864][12913] Saving new best policy, reward=4.614!
[2023-02-22 19:58:46,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3003.7). Total num frames: 225280. Throughput: 0: 930.1. Samples: 55758. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 19:58:46,862][01716] Avg episode reward: [(0, '4.585')]
[2023-02-22 19:58:51,857][01716] Fps is (10 sec: 3276.6, 60 sec: 3754.6, 300 sec: 2969.6). Total num frames: 237568. Throughput: 0: 904.8. Samples: 60382. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 19:58:51,860][01716] Avg episode reward: [(0, '4.465')]
[2023-02-22 19:58:53,356][12927] Updated weights for policy 0, policy_version 60 (0.0033)
[2023-02-22 19:58:56,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3084.1). Total num frames: 262144. Throughput: 0: 918.6. Samples: 63186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 19:58:56,864][01716] Avg episode reward: [(0, '4.355')]
[2023-02-22 19:59:01,857][01716] Fps is (10 sec: 4505.8, 60 sec: 3754.7, 300 sec: 3140.3). Total num frames: 282624. Throughput: 0: 962.3. Samples: 70392. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 19:59:01,862][01716] Avg episode reward: [(0, '4.362')]
[2023-02-22 19:59:01,907][12927] Updated weights for policy 0, policy_version 70 (0.0023)
[2023-02-22 19:59:06,862][01716] Fps is (10 sec: 4093.7, 60 sec: 3822.7, 300 sec: 3190.4). Total num frames: 303104. Throughput: 0: 952.9. Samples: 76290. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 19:59:06,864][01716] Avg episode reward: [(0, '4.541')]
[2023-02-22 19:59:06,879][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000074_303104.pth...
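Checkpoint filenames encode the policy version and the env-frame count (here version 74 at 303104 frames). Loading one for offline inspection is straightforward; the exact contents of the saved dict are an assumption:

    import torch

    # checkpoint_{policy_version:09d}_{env_frames}.pth, per the names in this log.
    ckpt = torch.load(
        "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000074_303104.pth",
        map_location="cpu",
    )
    print(sorted(ckpt.keys()))  # model/optimizer state etc. (assumed keys)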
[2023-02-22 19:59:11,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3153.9). Total num frames: 315392. Throughput: 0: 937.3. Samples: 78492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:59:11,864][01716] Avg episode reward: [(0, '4.520')]
[2023-02-22 19:59:14,092][12927] Updated weights for policy 0, policy_version 80 (0.0019)
[2023-02-22 19:59:16,857][01716] Fps is (10 sec: 3688.4, 60 sec: 3822.9, 300 sec: 3237.8). Total num frames: 339968. Throughput: 0: 969.3. Samples: 84212. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:59:16,859][01716] Avg episode reward: [(0, '4.570')]
[2023-02-22 19:59:21,857][01716] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3314.0). Total num frames: 364544. Throughput: 0: 1005.3. Samples: 91344. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:59:21,859][01716] Avg episode reward: [(0, '4.451')]
[2023-02-22 19:59:22,640][12927] Updated weights for policy 0, policy_version 90 (0.0016)
[2023-02-22 19:59:26,864][01716] Fps is (10 sec: 4092.9, 60 sec: 3891.0, 300 sec: 3312.2). Total num frames: 380928. Throughput: 0: 992.3. Samples: 94236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:59:26,869][01716] Avg episode reward: [(0, '4.354')]
[2023-02-22 19:59:31,857][01716] Fps is (10 sec: 2867.0, 60 sec: 3822.9, 300 sec: 3276.8). Total num frames: 393216. Throughput: 0: 954.7. Samples: 98722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 19:59:31,865][01716] Avg episode reward: [(0, '4.455')]
[2023-02-22 19:59:34,857][12927] Updated weights for policy 0, policy_version 100 (0.0016)
[2023-02-22 19:59:36,857][01716] Fps is (10 sec: 3689.2, 60 sec: 3959.5, 300 sec: 3342.3). Total num frames: 417792. Throughput: 0: 991.9. Samples: 105016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:59:36,862][01716] Avg episode reward: [(0, '4.453')]
[2023-02-22 19:59:41,857][01716] Fps is (10 sec: 4915.5, 60 sec: 3959.5, 300 sec: 3402.8). Total num frames: 442368. Throughput: 0: 1009.1. Samples: 108594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 19:59:41,864][01716] Avg episode reward: [(0, '4.477')]
[2023-02-22 19:59:43,499][12927] Updated weights for policy 0, policy_version 110 (0.0014)
[2023-02-22 19:59:46,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3398.2). Total num frames: 458752. Throughput: 0: 979.2. Samples: 114458. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 19:59:46,865][01716] Avg episode reward: [(0, '4.637')]
[2023-02-22 19:59:46,873][12913] Saving new best policy, reward=4.637!
[2023-02-22 19:59:51,858][01716] Fps is (10 sec: 2866.8, 60 sec: 3891.1, 300 sec: 3364.5). Total num frames: 471040. Throughput: 0: 946.9. Samples: 118896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:59:51,862][01716] Avg episode reward: [(0, '4.641')]
[2023-02-22 19:59:51,870][12913] Saving new best policy, reward=4.641!
[2023-02-22 19:59:55,645][12927] Updated weights for policy 0, policy_version 120 (0.0020)
[2023-02-22 19:59:56,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3418.0). Total num frames: 495616. Throughput: 0: 968.3. Samples: 122064. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 19:59:56,864][01716] Avg episode reward: [(0, '4.445')]
[2023-02-22 20:00:01,859][01716] Fps is (10 sec: 4505.1, 60 sec: 3891.0, 300 sec: 3440.6). Total num frames: 516096. Throughput: 0: 990.1. Samples: 128768. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:00:01,862][01716] Avg episode reward: [(0, '4.583')]
[2023-02-22 20:00:06,866][01716] Fps is (10 sec: 3273.7, 60 sec: 3754.4, 300 sec: 3408.7). Total num frames: 528384. Throughput: 0: 916.6. Samples: 132602. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:00:06,874][01716] Avg episode reward: [(0, '4.801')]
[2023-02-22 20:00:06,889][12913] Saving new best policy, reward=4.801!
[2023-02-22 20:00:08,472][12927] Updated weights for policy 0, policy_version 130 (0.0031)
[2023-02-22 20:00:11,857][01716] Fps is (10 sec: 2048.5, 60 sec: 3686.4, 300 sec: 3353.6). Total num frames: 536576. Throughput: 0: 890.9. Samples: 134322. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:00:11,862][01716] Avg episode reward: [(0, '4.727')]
[2023-02-22 20:00:16,857][01716] Fps is (10 sec: 2869.9, 60 sec: 3618.1, 300 sec: 3376.1). Total num frames: 557056. Throughput: 0: 886.4. Samples: 138608. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:00:16,858][01716] Avg episode reward: [(0, '4.607')]
[2023-02-22 20:00:19,995][12927] Updated weights for policy 0, policy_version 140 (0.0016)
[2023-02-22 20:00:21,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3421.4). Total num frames: 581632. Throughput: 0: 904.8. Samples: 145730. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:00:21,863][01716] Avg episode reward: [(0, '4.593')]
[2023-02-22 20:00:26,857][01716] Fps is (10 sec: 4505.5, 60 sec: 3686.9, 300 sec: 3440.6). Total num frames: 602112. Throughput: 0: 903.7. Samples: 149262. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:00:26,861][01716] Avg episode reward: [(0, '4.474')]
[2023-02-22 20:00:30,788][12927] Updated weights for policy 0, policy_version 150 (0.0039)
[2023-02-22 20:00:31,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3413.3). Total num frames: 614400. Throughput: 0: 879.6. Samples: 154038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:00:31,863][01716] Avg episode reward: [(0, '4.326')]
[2023-02-22 20:00:36,857][01716] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3431.8). Total num frames: 634880. Throughput: 0: 898.6. Samples: 159330. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:00:36,860][01716] Avg episode reward: [(0, '4.386')]
[2023-02-22 20:00:40,849][12927] Updated weights for policy 0, policy_version 160 (0.0027)
[2023-02-22 20:00:41,857][01716] Fps is (10 sec: 4505.5, 60 sec: 3618.1, 300 sec: 3470.8). Total num frames: 659456. Throughput: 0: 907.2. Samples: 162886. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:00:41,860][01716] Avg episode reward: [(0, '4.530')]
[2023-02-22 20:00:46,857][01716] Fps is (10 sec: 4505.4, 60 sec: 3686.4, 300 sec: 3486.8). Total num frames: 679936. Throughput: 0: 910.2. Samples: 169726. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 20:00:46,862][01716] Avg episode reward: [(0, '4.447')]
[2023-02-22 20:00:51,857][01716] Fps is (10 sec: 3276.9, 60 sec: 3686.5, 300 sec: 3461.1). Total num frames: 692224. Throughput: 0: 925.6. Samples: 174244. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:00:51,860][01716] Avg episode reward: [(0, '4.346')]
[2023-02-22 20:00:52,111][12927] Updated weights for policy 0, policy_version 170 (0.0020)
[2023-02-22 20:00:56,857][01716] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3476.6). Total num frames: 712704. Throughput: 0: 938.3. Samples: 176544. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:00:56,865][01716] Avg episode reward: [(0, '4.457')]
[2023-02-22 20:01:01,677][12927] Updated weights for policy 0, policy_version 180 (0.0026)
[2023-02-22 20:01:01,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3686.6, 300 sec: 3510.9). Total num frames: 737280. Throughput: 0: 997.3. Samples: 183488. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 20:01:01,864][01716] Avg episode reward: [(0, '4.524')]
[2023-02-22 20:01:06,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3823.5, 300 sec: 3524.5). Total num frames: 757760. Throughput: 0: 982.3. Samples: 189934. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:01:06,861][01716] Avg episode reward: [(0, '4.565')]
[2023-02-22 20:01:06,870][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000185_757760.pth...
[2023-02-22 20:01:11,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3500.2). Total num frames: 770048. Throughput: 0: 953.6. Samples: 192172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:01:11,863][01716] Avg episode reward: [(0, '4.733')]
[2023-02-22 20:01:13,134][12927] Updated weights for policy 0, policy_version 190 (0.0018)
[2023-02-22 20:01:16,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3513.5). Total num frames: 790528. Throughput: 0: 960.7. Samples: 197268. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:01:16,860][01716] Avg episode reward: [(0, '4.708')]
[2023-02-22 20:01:21,857][01716] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3543.9). Total num frames: 815104. Throughput: 0: 1001.7. Samples: 204408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:01:21,859][01716] Avg episode reward: [(0, '4.692')]
[2023-02-22 20:01:22,104][12927] Updated weights for policy 0, policy_version 200 (0.0014)
[2023-02-22 20:01:26,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3555.7). Total num frames: 835584. Throughput: 0: 1002.7. Samples: 208006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:01:26,865][01716] Avg episode reward: [(0, '4.625')]
[2023-02-22 20:01:31,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3549.9). Total num frames: 851968. Throughput: 0: 950.6. Samples: 212502. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:01:31,860][01716] Avg episode reward: [(0, '4.568')]
[2023-02-22 20:01:34,159][12927] Updated weights for policy 0, policy_version 210 (0.0011)
[2023-02-22 20:01:36,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3561.0). Total num frames: 872448. Throughput: 0: 974.4. Samples: 218092. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-22 20:01:36,864][01716] Avg episode reward: [(0, '4.633')]
[2023-02-22 20:01:41,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3571.7). Total num frames: 892928. Throughput: 0: 1003.3. Samples: 221694. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-22 20:01:41,859][01716] Avg episode reward: [(0, '4.538')]
[2023-02-22 20:01:42,804][12927] Updated weights for policy 0, policy_version 220 (0.0015)
[2023-02-22 20:01:46,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3582.0). Total num frames: 913408. Throughput: 0: 994.7. Samples: 228250. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-22 20:01:46,862][01716] Avg episode reward: [(0, '4.548')]
[2023-02-22 20:01:51,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3576.1). Total num frames: 929792. Throughput: 0: 952.9. Samples: 232814. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:01:51,863][01716] Avg episode reward: [(0, '4.538')]
[2023-02-22 20:01:55,043][12927] Updated weights for policy 0, policy_version 230 (0.0028)
[2023-02-22 20:01:56,858][01716] Fps is (10 sec: 3686.1, 60 sec: 3959.4, 300 sec: 3585.9). Total num frames: 950272. Throughput: 0: 957.6. Samples: 235264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:01:56,863][01716] Avg episode reward: [(0, '4.667')]
[2023-02-22 20:02:01,857][01716] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3610.5). Total num frames: 974848. Throughput: 0: 1004.2. Samples: 242456. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:02:01,860][01716] Avg episode reward: [(0, '4.498')]
[2023-02-22 20:02:03,507][12927] Updated weights for policy 0, policy_version 240 (0.0019)
[2023-02-22 20:02:06,857][01716] Fps is (10 sec: 4096.3, 60 sec: 3891.2, 300 sec: 3604.5). Total num frames: 991232. Throughput: 0: 981.6. Samples: 248578. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-22 20:02:06,860][01716] Avg episode reward: [(0, '4.598')]
[2023-02-22 20:02:11,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3598.6). Total num frames: 1007616. Throughput: 0: 952.5. Samples: 250870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:02:11,865][01716] Avg episode reward: [(0, '4.599')]
[2023-02-22 20:02:15,814][12927] Updated weights for policy 0, policy_version 250 (0.0031)
[2023-02-22 20:02:16,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3607.4). Total num frames: 1028096. Throughput: 0: 968.6. Samples: 256090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:02:16,863][01716] Avg episode reward: [(0, '4.742')]
[2023-02-22 20:02:21,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3629.9). Total num frames: 1052672. Throughput: 0: 1004.7. Samples: 263304. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:02:21,859][01716] Avg episode reward: [(0, '4.589')]
[2023-02-22 20:02:24,790][12927] Updated weights for policy 0, policy_version 260 (0.0023)
[2023-02-22 20:02:26,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3623.9). Total num frames: 1069056. Throughput: 0: 997.0. Samples: 266558. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:02:26,863][01716] Avg episode reward: [(0, '4.588')]
[2023-02-22 20:02:31,858][01716] Fps is (10 sec: 3276.4, 60 sec: 3891.1, 300 sec: 3679.4). Total num frames: 1085440. Throughput: 0: 952.7. Samples: 271124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:02:31,862][01716] Avg episode reward: [(0, '4.768')]
[2023-02-22 20:02:36,569][12927] Updated weights for policy 0, policy_version 270 (0.0027)
[2023-02-22 20:02:36,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 1105920. Throughput: 0: 983.0. Samples: 277050. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:02:36,865][01716] Avg episode reward: [(0, '4.678')]
[2023-02-22 20:02:41,857][01716] Fps is (10 sec: 4506.2, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 1130496. Throughput: 0: 1007.8. Samples: 280616. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:02:41,864][01716] Avg episode reward: [(0, '4.758')]
[2023-02-22 20:02:46,042][12927] Updated weights for policy 0, policy_version 280 (0.0011)
[2023-02-22 20:02:46,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1146880. Throughput: 0: 986.8. Samples: 286864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:02:46,861][01716] Avg episode reward: [(0, '4.813')]
[2023-02-22 20:02:46,883][12913] Saving new best policy, reward=4.813!
[2023-02-22 20:02:51,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1163264. Throughput: 0: 949.2. Samples: 291290. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:02:51,867][01716] Avg episode reward: [(0, '5.042')]
[2023-02-22 20:02:51,870][12913] Saving new best policy, reward=5.042!
[2023-02-22 20:02:56,859][01716] Fps is (10 sec: 3685.4, 60 sec: 3891.1, 300 sec: 3818.3). Total num frames: 1183744. Throughput: 0: 958.3. Samples: 293998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:02:56,863][01716] Avg episode reward: [(0, '5.062')]
[2023-02-22 20:02:56,878][12913] Saving new best policy, reward=5.062!
[2023-02-22 20:02:57,631][12927] Updated weights for policy 0, policy_version 290 (0.0017)
[2023-02-22 20:03:01,857][01716] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1204224. Throughput: 0: 999.5. Samples: 301068. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:03:01,859][01716] Avg episode reward: [(0, '5.075')]
[2023-02-22 20:03:01,869][12913] Saving new best policy, reward=5.075!
[2023-02-22 20:03:06,857][01716] Fps is (10 sec: 4097.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1224704. Throughput: 0: 964.4. Samples: 306700. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:03:06,864][01716] Avg episode reward: [(0, '5.278')]
[2023-02-22 20:03:06,879][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000299_1224704.pth...
[2023-02-22 20:03:07,102][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000074_303104.pth
[2023-02-22 20:03:07,121][12913] Saving new best policy, reward=5.278!
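As above, each periodic save is paired with deletion of the oldest checkpoint, so only the most recent few survive alongside the separately tracked best policy. The rotation, sketched (the keep-count is assumed from the pattern in this log):

    import os
    from glob import glob

    # Keep only the newest checkpoints; older ones are removed as the log shows.
    # Zero-padded version numbers make lexicographic order chronological.
    def rotate_checkpoints(ckpt_dir, keep=2):
        ckpts = sorted(glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
        for old in ckpts[:-keep]:
            os.remove(old)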
[2023-02-22 20:03:07,992][12927] Updated weights for policy 0, policy_version 300 (0.0022)
[2023-02-22 20:03:11,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1236992. Throughput: 0: 939.1. Samples: 308816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:03:11,860][01716] Avg episode reward: [(0, '5.288')]
[2023-02-22 20:03:11,867][12913] Saving new best policy, reward=5.288!
[2023-02-22 20:03:16,857][01716] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1257472. Throughput: 0: 958.2. Samples: 314240. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:03:16,860][01716] Avg episode reward: [(0, '5.151')]
[2023-02-22 20:03:18,551][12927] Updated weights for policy 0, policy_version 310 (0.0016)
[2023-02-22 20:03:21,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1282048. Throughput: 0: 984.5. Samples: 321354. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:03:21,859][01716] Avg episode reward: [(0, '5.144')]
[2023-02-22 20:03:26,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1298432. Throughput: 0: 970.7. Samples: 324296. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:03:26,867][01716] Avg episode reward: [(0, '5.168')]
[2023-02-22 20:03:29,896][12927] Updated weights for policy 0, policy_version 320 (0.0020)
[2023-02-22 20:03:31,858][01716] Fps is (10 sec: 3276.3, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1314816. Throughput: 0: 930.2. Samples: 328724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:03:31,860][01716] Avg episode reward: [(0, '5.225')]
[2023-02-22 20:03:36,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1335296. Throughput: 0: 966.4. Samples: 334780. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:03:36,865][01716] Avg episode reward: [(0, '5.570')]
[2023-02-22 20:03:36,874][12913] Saving new best policy, reward=5.570!
[2023-02-22 20:03:39,805][12927] Updated weights for policy 0, policy_version 330 (0.0015)
[2023-02-22 20:03:41,857][01716] Fps is (10 sec: 4506.2, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1359872. Throughput: 0: 981.4. Samples: 338158. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 20:03:41,860][01716] Avg episode reward: [(0, '5.346')]
[2023-02-22 20:03:46,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 1376256. Throughput: 0: 959.5. Samples: 344246. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:03:46,859][01716] Avg episode reward: [(0, '5.277')]
[2023-02-22 20:03:51,349][12927] Updated weights for policy 0, policy_version 340 (0.0032)
[2023-02-22 20:03:51,858][01716] Fps is (10 sec: 3276.6, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1392640. Throughput: 0: 935.1. Samples: 348782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:03:51,866][01716] Avg episode reward: [(0, '5.451')]
[2023-02-22 20:03:56,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3832.2). Total num frames: 1413120. Throughput: 0: 953.3. Samples: 351714. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:03:56,859][01716] Avg episode reward: [(0, '5.321')]
[2023-02-22 20:04:00,346][12927] Updated weights for policy 0, policy_version 350 (0.0035)
[2023-02-22 20:04:01,857][01716] Fps is (10 sec: 4506.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1437696. Throughput: 0: 994.0. Samples: 358968. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:04:01,859][01716] Avg episode reward: [(0, '4.903')]
[2023-02-22 20:04:06,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 1454080. Throughput: 0: 962.8. Samples: 364680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:04:06,861][01716] Avg episode reward: [(0, '5.301')]
[2023-02-22 20:04:11,858][01716] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1470464. Throughput: 0: 948.2. Samples: 366966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:04:11,867][01716] Avg episode reward: [(0, '5.430')]
[2023-02-22 20:04:12,554][12927] Updated weights for policy 0, policy_version 360 (0.0021)
[2023-02-22 20:04:16,857][01716] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 1495040. Throughput: 0: 978.2. Samples: 372740. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:04:16,860][01716] Avg episode reward: [(0, '5.536')]
[2023-02-22 20:04:21,862][01716] Fps is (10 sec: 3684.5, 60 sec: 3754.3, 300 sec: 3818.3). Total num frames: 1507328. Throughput: 0: 958.3. Samples: 377908. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:04:21,865][01716] Avg episode reward: [(0, '5.483')]
[2023-02-22 20:04:23,445][12927] Updated weights for policy 0, policy_version 370 (0.0030)
[2023-02-22 20:04:26,857][01716] Fps is (10 sec: 2457.7, 60 sec: 3686.4, 300 sec: 3818.3). Total num frames: 1519616. Throughput: 0: 929.2. Samples: 379974. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-22 20:04:26,863][01716] Avg episode reward: [(0, '5.464')]
[2023-02-22 20:04:31,857][01716] Fps is (10 sec: 2868.7, 60 sec: 3686.5, 300 sec: 3790.5). Total num frames: 1536000. Throughput: 0: 881.3. Samples: 383904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:04:31,859][01716] Avg episode reward: [(0, '5.284')]
[2023-02-22 20:04:36,688][12927] Updated weights for policy 0, policy_version 380 (0.0018)
[2023-02-22 20:04:36,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3776.7). Total num frames: 1556480. Throughput: 0: 905.8. Samples: 389542. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:04:36,860][01716] Avg episode reward: [(0, '5.261')]
[2023-02-22 20:04:41,857][01716] Fps is (10 sec: 4505.7, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1581056. Throughput: 0: 918.8. Samples: 393060. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:04:41,859][01716] Avg episode reward: [(0, '5.057')]
[2023-02-22 20:04:45,517][12927] Updated weights for policy 0, policy_version 390 (0.0027)
[2023-02-22 20:04:46,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3818.3). Total num frames: 1597440. Throughput: 0: 907.7. Samples: 399814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:04:46,864][01716] Avg episode reward: [(0, '5.405')]
[2023-02-22 20:04:51,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3790.5). Total num frames: 1613824. Throughput: 0: 878.4. Samples: 404206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:04:51,862][01716] Avg episode reward: [(0, '5.819')]
[2023-02-22 20:04:51,867][12913] Saving new best policy, reward=5.819!
[2023-02-22 20:04:56,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3790.6). Total num frames: 1634304. Throughput: 0: 879.9. Samples: 406562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:04:56,859][01716] Avg episode reward: [(0, '5.804')]
[2023-02-22 20:04:57,561][12927] Updated weights for policy 0, policy_version 400 (0.0016)
[2023-02-22 20:05:01,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3818.4). Total num frames: 1654784. Throughput: 0: 909.2. Samples: 413652. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:05:01,866][01716] Avg episode reward: [(0, '5.480')]
[2023-02-22 20:05:06,860][01716] Fps is (10 sec: 4094.5, 60 sec: 3686.2, 300 sec: 3859.9). Total num frames: 1675264. Throughput: 0: 930.7. Samples: 419790. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:05:06,863][01716] Avg episode reward: [(0, '5.590')]
[2023-02-22 20:05:06,873][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000409_1675264.pth...
[2023-02-22 20:05:07,011][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000185_757760.pth
[2023-02-22 20:05:07,299][12927] Updated weights for policy 0, policy_version 410 (0.0017)
[2023-02-22 20:05:11,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 1691648. Throughput: 0: 932.8. Samples: 421952. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:05:11,862][01716] Avg episode reward: [(0, '5.758')]
[2023-02-22 20:05:16,857][01716] Fps is (10 sec: 3687.7, 60 sec: 3618.1, 300 sec: 3832.2). Total num frames: 1712128. Throughput: 0: 962.0. Samples: 427196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:05:16,859][01716] Avg episode reward: [(0, '5.687')]
[2023-02-22 20:05:18,353][12927] Updated weights for policy 0, policy_version 420 (0.0036)
[2023-02-22 20:05:21,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3823.3, 300 sec: 3846.1). Total num frames: 1736704. Throughput: 0: 996.4. Samples: 434380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:05:21,863][01716] Avg episode reward: [(0, '6.088')]
[2023-02-22 20:05:21,868][12913] Saving new best policy, reward=6.088!
[2023-02-22 20:05:26,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1753088. Throughput: 0: 987.8. Samples: 437510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:05:26,861][01716] Avg episode reward: [(0, '6.170')]
[2023-02-22 20:05:26,875][12913] Saving new best policy, reward=6.170!
[2023-02-22 20:05:28,974][12927] Updated weights for policy 0, policy_version 430 (0.0038)
[2023-02-22 20:05:31,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1769472. Throughput: 0: 935.8. Samples: 441926. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:05:31,861][01716] Avg episode reward: [(0, '6.218')]
[2023-02-22 20:05:31,865][12913] Saving new best policy, reward=6.218!
[2023-02-22 20:05:36,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1789952. Throughput: 0: 968.9. Samples: 447806. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:05:36,863][01716] Avg episode reward: [(0, '6.528')]
[2023-02-22 20:05:36,875][12913] Saving new best policy, reward=6.528!
[2023-02-22 20:05:39,165][12927] Updated weights for policy 0, policy_version 440 (0.0014)
[2023-02-22 20:05:41,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1814528. Throughput: 0: 994.7. Samples: 451324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:05:41,860][01716] Avg episode reward: [(0, '6.626')]
[2023-02-22 20:05:41,862][12913] Saving new best policy, reward=6.626!
[2023-02-22 20:05:46,861][01716] Fps is (10 sec: 4094.1, 60 sec: 3890.9, 300 sec: 3859.9). Total num frames: 1830912. Throughput: 0: 973.1. Samples: 457446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:05:46,865][01716] Avg episode reward: [(0, '7.005')]
[2023-02-22 20:05:46,885][12913] Saving new best policy, reward=7.005!
[2023-02-22 20:05:50,776][12927] Updated weights for policy 0, policy_version 450 (0.0018)
[2023-02-22 20:05:51,858][01716] Fps is (10 sec: 2866.9, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1843200. Throughput: 0: 935.1. Samples: 461868. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:05:51,864][01716] Avg episode reward: [(0, '6.837')]
[2023-02-22 20:05:56,857][01716] Fps is (10 sec: 3688.1, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1867776. Throughput: 0: 954.6. Samples: 464908. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:05:56,864][01716] Avg episode reward: [(0, '7.549')]
[2023-02-22 20:05:56,872][12913] Saving new best policy, reward=7.549!
[2023-02-22 20:06:00,042][12927] Updated weights for policy 0, policy_version 460 (0.0018)
[2023-02-22 20:06:01,857][01716] Fps is (10 sec: 4915.7, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 1892352. Throughput: 0: 998.3. Samples: 472120. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:06:01,859][01716] Avg episode reward: [(0, '7.868')]
[2023-02-22 20:06:01,866][12913] Saving new best policy, reward=7.868!
[2023-02-22 20:06:06,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3860.0). Total num frames: 1908736. Throughput: 0: 961.3. Samples: 477638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:06:06,859][01716] Avg episode reward: [(0, '8.629')]
[2023-02-22 20:06:06,871][12913] Saving new best policy, reward=8.629!
[2023-02-22 20:06:11,857][01716] Fps is (10 sec: 2867.1, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1921024. Throughput: 0: 939.9. Samples: 479804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:06:11,863][01716] Avg episode reward: [(0, '8.430')]
[2023-02-22 20:06:12,071][12927] Updated weights for policy 0, policy_version 470 (0.0015)
[2023-02-22 20:06:16,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1945600. Throughput: 0: 975.8. Samples: 485836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:06:16,862][01716] Avg episode reward: [(0, '8.901')]
[2023-02-22 20:06:16,874][12913] Saving new best policy, reward=8.901!
[2023-02-22 20:06:20,771][12927] Updated weights for policy 0, policy_version 480 (0.0014)
[2023-02-22 20:06:21,857][01716] Fps is (10 sec: 4915.3, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1970176. Throughput: 0: 1002.2. Samples: 492906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:06:21,859][01716] Avg episode reward: [(0, '8.161')]
[2023-02-22 20:06:26,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1986560. Throughput: 0: 985.8. Samples: 495684. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:06:26,861][01716] Avg episode reward: [(0, '7.846')]
[2023-02-22 20:06:31,857][01716] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1998848. Throughput: 0: 949.8. Samples: 500184. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:06:31,863][01716] Avg episode reward: [(0, '7.436')]
[2023-02-22 20:06:32,985][12927] Updated weights for policy 0, policy_version 490 (0.0028)
[2023-02-22 20:06:36,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2023424. Throughput: 0: 993.9. Samples: 506592. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:06:36,859][01716] Avg episode reward: [(0, '7.982')]
[2023-02-22 20:06:41,420][12927] Updated weights for policy 0, policy_version 500 (0.0023)
[2023-02-22 20:06:41,857][01716] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2048000. Throughput: 0: 1008.0. Samples: 510266. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:06:41,859][01716] Avg episode reward: [(0, '8.198')]
[2023-02-22 20:06:46,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.5, 300 sec: 3846.1). Total num frames: 2064384. Throughput: 0: 978.0. Samples: 516132. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:06:46,861][01716] Avg episode reward: [(0, '7.632')]
[2023-02-22 20:06:51,857][01716] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2080768. Throughput: 0: 958.3. Samples: 520760. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:06:51,860][01716] Avg episode reward: [(0, '7.729')]
[2023-02-22 20:06:53,427][12927] Updated weights for policy 0, policy_version 510 (0.0029)
[2023-02-22 20:06:56,857][01716] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2105344. Throughput: 0: 984.8. Samples: 524118. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:06:56,866][01716] Avg episode reward: [(0, '8.144')]
[2023-02-22 20:07:01,857][01716] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2125824. Throughput: 0: 1010.8. Samples: 531320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:07:01,859][01716] Avg episode reward: [(0, '9.519')]
[2023-02-22 20:07:01,889][12913] Saving new best policy, reward=9.519!
[2023-02-22 20:07:01,898][12927] Updated weights for policy 0, policy_version 520 (0.0011)
[2023-02-22 20:07:06,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2142208. Throughput: 0: 969.5. Samples: 536532. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:07:06,862][01716] Avg episode reward: [(0, '9.511')]
[2023-02-22 20:07:06,878][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000523_2142208.pth...
[2023-02-22 20:07:07,012][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000299_1224704.pth
[2023-02-22 20:07:11,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2158592. Throughput: 0: 956.5. Samples: 538726. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:07:11,863][01716] Avg episode reward: [(0, '9.215')]
[2023-02-22 20:07:14,101][12927] Updated weights for policy 0, policy_version 530 (0.0019)
[2023-02-22 20:07:16,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2183168. Throughput: 0: 997.0. Samples: 545050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:07:16,858][01716] Avg episode reward: [(0, '9.989')]
[2023-02-22 20:07:16,873][12913] Saving new best policy, reward=9.989!
[2023-02-22 20:07:21,857][01716] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2207744. Throughput: 0: 1010.9. Samples: 552082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:07:21,863][01716] Avg episode reward: [(0, '10.382')]
[2023-02-22 20:07:21,865][12913] Saving new best policy, reward=10.382!
[2023-02-22 20:07:23,172][12927] Updated weights for policy 0, policy_version 540 (0.0026)
[2023-02-22 20:07:26,858][01716] Fps is (10 sec: 3686.0, 60 sec: 3891.1, 300 sec: 3846.1). Total num frames: 2220032. Throughput: 0: 981.5. Samples: 554436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:07:26,866][01716] Avg episode reward: [(0, '11.182')]
[2023-02-22 20:07:26,878][12913] Saving new best policy, reward=11.182!
[2023-02-22 20:07:31,857][01716] Fps is (10 sec: 2867.2, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2236416. Throughput: 0: 950.0. Samples: 558880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:07:31,860][01716] Avg episode reward: [(0, '11.597')]
[2023-02-22 20:07:31,864][12913] Saving new best policy, reward=11.597!
[2023-02-22 20:07:34,847][12927] Updated weights for policy 0, policy_version 550 (0.0042)
[2023-02-22 20:07:36,857][01716] Fps is (10 sec: 4096.4, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2260992. Throughput: 0: 997.3. Samples: 565636. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:07:36,864][01716] Avg episode reward: [(0, '12.141')]
[2023-02-22 20:07:36,876][12913] Saving new best policy, reward=12.141!
[2023-02-22 20:07:41,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2281472. Throughput: 0: 997.0. Samples: 568982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:07:41,865][01716] Avg episode reward: [(0, '11.898')]
[2023-02-22 20:07:45,096][12927] Updated weights for policy 0, policy_version 560 (0.0017)
[2023-02-22 20:07:46,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2297856. Throughput: 0: 955.4. Samples: 574314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:07:46,861][01716] Avg episode reward: [(0, '11.506')]
[2023-02-22 20:07:51,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2314240. Throughput: 0: 941.1. Samples: 578882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:07:51,865][01716] Avg episode reward: [(0, '10.992')]
[2023-02-22 20:07:56,027][12927] Updated weights for policy 0, policy_version 570 (0.0018)
[2023-02-22 20:07:56,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2338816. Throughput: 0: 971.1. Samples: 582426. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:07:56,859][01716] Avg episode reward: [(0, '10.319')]
[2023-02-22 20:08:01,857][01716] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2359296. Throughput: 0: 989.6. Samples: 589582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:08:01,868][01716] Avg episode reward: [(0, '11.403')]
[2023-02-22 20:08:06,644][12927] Updated weights for policy 0, policy_version 580 (0.0027)
[2023-02-22 20:08:06,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2375680. Throughput: 0: 940.4. Samples: 594402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:08:06,861][01716] Avg episode reward: [(0, '11.734')]
[2023-02-22 20:08:11,857][01716] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2392064. Throughput: 0: 937.1. Samples: 596606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:08:11,867][01716] Avg episode reward: [(0, '13.304')]
[2023-02-22 20:08:11,869][12913] Saving new best policy, reward=13.304!
[2023-02-22 20:08:16,770][12927] Updated weights for policy 0, policy_version 590 (0.0025)
[2023-02-22 20:08:16,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2416640. Throughput: 0: 984.3. Samples: 603174. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 20:08:16,859][01716] Avg episode reward: [(0, '14.807')]
[2023-02-22 20:08:16,871][12913] Saving new best policy, reward=14.807!
[2023-02-22 20:08:21,857][01716] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 2433024. Throughput: 0: 972.9. Samples: 609418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:08:21,861][01716] Avg episode reward: [(0, '15.720')]
[2023-02-22 20:08:21,867][12913] Saving new best policy, reward=15.720!
[2023-02-22 20:08:26,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3846.1). Total num frames: 2449408. Throughput: 0: 945.6. Samples: 611536. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:08:26,862][01716] Avg episode reward: [(0, '15.600')]
[2023-02-22 20:08:28,876][12927] Updated weights for policy 0, policy_version 600 (0.0011)
[2023-02-22 20:08:31,857][01716] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 2461696. Throughput: 0: 926.4. Samples: 616004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:08:31,864][01716] Avg episode reward: [(0, '15.524')]
[2023-02-22 20:08:36,857][01716] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3790.5). Total num frames: 2478080. Throughput: 0: 923.0. Samples: 620416. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:08:36,860][01716] Avg episode reward: [(0, '14.881')]
[2023-02-22 20:08:41,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3790.5). Total num frames: 2494464. Throughput: 0: 894.0. Samples: 622654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:08:41,862][01716] Avg episode reward: [(0, '14.256')]
[2023-02-22 20:08:41,974][12927] Updated weights for policy 0, policy_version 610 (0.0029)
[2023-02-22 20:08:46,858][01716] Fps is (10 sec: 3276.4, 60 sec: 3549.8, 300 sec: 3790.5). Total num frames: 2510848. Throughput: 0: 847.2. Samples: 627706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:08:46,863][01716] Avg episode reward: [(0, '14.240')]
[2023-02-22 20:08:51,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3776.7). Total num frames: 2527232. Throughput: 0: 845.4. Samples: 632446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:08:51,860][01716] Avg episode reward: [(0, '14.835')]
[2023-02-22 20:08:54,085][12927] Updated weights for policy 0, policy_version 620 (0.0031)
[2023-02-22 20:08:56,857][01716] Fps is (10 sec: 4096.5, 60 sec: 3549.9, 300 sec: 3776.6). Total num frames: 2551808. Throughput: 0: 875.6. Samples: 636010. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-22 20:08:56,859][01716] Avg episode reward: [(0, '15.516')]
[2023-02-22 20:09:01,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3790.5). Total num frames: 2572288. Throughput: 0: 889.8. Samples: 643214. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:09:01,863][01716] Avg episode reward: [(0, '16.221')]
[2023-02-22 20:09:01,869][12913] Saving new best policy, reward=16.221!
[2023-02-22 20:09:03,605][12927] Updated weights for policy 0, policy_version 630 (0.0022)
[2023-02-22 20:09:06,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3790.5). Total num frames: 2588672. Throughput: 0: 853.9. Samples: 647844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:06,864][01716] Avg episode reward: [(0, '16.688')]
[2023-02-22 20:09:06,877][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000632_2588672.pth...
[2023-02-22 20:09:07,020][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000409_1675264.pth
[2023-02-22 20:09:07,065][12913] Saving new best policy, reward=16.688!
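
The Saving/Removing pairs show two checkpointing policies side by side: a small rolling window of regular checkpoints (write checkpoint_<version>_<frames>.pth, prune the oldest) plus an extra save whenever the average episode reward sets a new best. A hedged sketch of both; the "model"/"optimizer" dict layout and the keep_last count are assumptions:

import os
import torch

def save_checkpoint(model, optimizer, policy_version, env_frames, ckpt_dir, keep_last=2):
    """Write a rolling checkpoint and prune old ones (illustrative sketch)."""
    name = f"checkpoint_{policy_version:09d}_{env_frames}.pth"
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
        os.path.join(ckpt_dir, name),
    )
    # zero-padded names sort chronologically, so the head of the list is oldest
    ckpts = sorted(f for f in os.listdir(ckpt_dir) if f.startswith("checkpoint_"))
    for old in ckpts[:-keep_last]:
        os.remove(os.path.join(ckpt_dir, old))

def maybe_save_best(model, avg_reward, best_reward, ckpt_dir):
    """Keep a separate snapshot of the best-scoring policy so far (sketch)."""
    if best_reward is None or avg_reward > best_reward:
        print(f"Saving new best policy, reward={avg_reward:.3f}!")
        torch.save(model.state_dict(), os.path.join(ckpt_dir, "best_policy.pth"))
        return avg_reward
    return best_reward
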
[2023-02-22 20:09:11,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3762.8). Total num frames: 2605056. Throughput: 0: 854.8. Samples: 650000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:11,858][01716] Avg episode reward: [(0, '16.753')]
[2023-02-22 20:09:11,864][12913] Saving new best policy, reward=16.753!
[2023-02-22 20:09:14,775][12927] Updated weights for policy 0, policy_version 640 (0.0025)
[2023-02-22 20:09:16,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3804.5). Total num frames: 2629632. Throughput: 0: 905.1. Samples: 656734. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 20:09:16,866][01716] Avg episode reward: [(0, '17.336')]
[2023-02-22 20:09:16,877][12913] Saving new best policy, reward=17.336!
[2023-02-22 20:09:21,857][01716] Fps is (10 sec: 4505.5, 60 sec: 3618.1, 300 sec: 3832.2). Total num frames: 2650112. Throughput: 0: 956.8. Samples: 663474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:21,865][01716] Avg episode reward: [(0, '16.831')]
[2023-02-22 20:09:24,970][12927] Updated weights for policy 0, policy_version 650 (0.0019)
[2023-02-22 20:09:26,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3832.2). Total num frames: 2666496. Throughput: 0: 958.2. Samples: 665772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:26,866][01716] Avg episode reward: [(0, '16.557')]
[2023-02-22 20:09:31,857][01716] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3818.3). Total num frames: 2682880. Throughput: 0: 950.9. Samples: 670494. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:31,859][01716] Avg episode reward: [(0, '16.353')]
[2023-02-22 20:09:35,321][12927] Updated weights for policy 0, policy_version 660 (0.0012)
[2023-02-22 20:09:36,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2707456. Throughput: 0: 1004.9. Samples: 677668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:36,860][01716] Avg episode reward: [(0, '16.769')]
[2023-02-22 20:09:41,857][01716] Fps is (10 sec: 4505.2, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2727936. Throughput: 0: 1005.3. Samples: 681248. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:41,865][01716] Avg episode reward: [(0, '17.437')]
[2023-02-22 20:09:41,870][12913] Saving new best policy, reward=17.437!
[2023-02-22 20:09:46,415][12927] Updated weights for policy 0, policy_version 670 (0.0039)
[2023-02-22 20:09:46,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3832.2). Total num frames: 2744320. Throughput: 0: 948.5. Samples: 685896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:09:46,862][01716] Avg episode reward: [(0, '18.076')]
[2023-02-22 20:09:46,874][12913] Saving new best policy, reward=18.076!
[2023-02-22 20:09:51,857][01716] Fps is (10 sec: 3277.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2760704. Throughput: 0: 962.6. Samples: 691162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:09:51,862][01716] Avg episode reward: [(0, '18.744')]
[2023-02-22 20:09:51,925][12913] Saving new best policy, reward=18.744!
[2023-02-22 20:09:56,383][12927] Updated weights for policy 0, policy_version 680 (0.0033)
[2023-02-22 20:09:56,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2785280. Throughput: 0: 991.9. Samples: 694636. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:09:56,859][01716] Avg episode reward: [(0, '18.962')]
[2023-02-22 20:09:56,871][12913] Saving new best policy, reward=18.962!
[2023-02-22 20:10:01,858][01716] Fps is (10 sec: 4504.9, 60 sec: 3891.1, 300 sec: 3832.2). Total num frames: 2805760. Throughput: 0: 992.6. Samples: 701402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:10:01,861][01716] Avg episode reward: [(0, '18.161')]
[2023-02-22 20:10:06,859][01716] Fps is (10 sec: 3685.5, 60 sec: 3891.0, 300 sec: 3832.2). Total num frames: 2822144. Throughput: 0: 943.1. Samples: 705914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:10:06,864][01716] Avg episode reward: [(0, '18.035')]
[2023-02-22 20:10:08,222][12927] Updated weights for policy 0, policy_version 690 (0.0027)
[2023-02-22 20:10:11,861][01716] Fps is (10 sec: 3275.8, 60 sec: 3890.9, 300 sec: 3818.2). Total num frames: 2838528. Throughput: 0: 941.7. Samples: 708154. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:10:11,868][01716] Avg episode reward: [(0, '17.014')]
[2023-02-22 20:10:16,857][01716] Fps is (10 sec: 4097.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2863104. Throughput: 0: 995.9. Samples: 715310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:10:16,865][01716] Avg episode reward: [(0, '16.914')]
[2023-02-22 20:10:17,098][12927] Updated weights for policy 0, policy_version 700 (0.0021)
[2023-02-22 20:10:21,857][01716] Fps is (10 sec: 4507.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2883584. Throughput: 0: 977.7. Samples: 721666. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:10:21,865][01716] Avg episode reward: [(0, '16.700')]
[2023-02-22 20:10:26,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2899968. Throughput: 0: 949.2. Samples: 723960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:10:26,860][01716] Avg episode reward: [(0, '15.498')]
[2023-02-22 20:10:29,310][12927] Updated weights for policy 0, policy_version 710 (0.0057)
[2023-02-22 20:10:31,857][01716] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2920448. Throughput: 0: 957.9. Samples: 729002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:10:31,859][01716] Avg episode reward: [(0, '16.470')]
[2023-02-22 20:10:36,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2945024. Throughput: 0: 1002.2. Samples: 736260. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:10:36,863][01716] Avg episode reward: [(0, '16.476')]
[2023-02-22 20:10:37,685][12927] Updated weights for policy 0, policy_version 720 (0.0012)
[2023-02-22 20:10:41,857][01716] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2961408. Throughput: 0: 1004.1. Samples: 739820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:10:41,859][01716] Avg episode reward: [(0, '17.254')]
[2023-02-22 20:10:46,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2977792. Throughput: 0: 955.5. Samples: 744396. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:10:46,869][01716] Avg episode reward: [(0, '17.732')]
[2023-02-22 20:10:49,723][12927] Updated weights for policy 0, policy_version 730 (0.0021)
[2023-02-22 20:10:51,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2998272. Throughput: 0: 983.3. Samples: 750158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:10:51,859][01716] Avg episode reward: [(0, '17.798')]
[2023-02-22 20:10:56,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3022848. Throughput: 0: 1015.4. Samples: 753844. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:10:56,859][01716] Avg episode reward: [(0, '18.659')]
[2023-02-22 20:10:58,130][12927] Updated weights for policy 0, policy_version 740 (0.0013)
[2023-02-22 20:11:01,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3846.1). Total num frames: 3043328. Throughput: 0: 1002.6. Samples: 760428. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:11:01,859][01716] Avg episode reward: [(0, '19.253')]
[2023-02-22 20:11:01,865][12913] Saving new best policy, reward=19.253!
[2023-02-22 20:11:06,857][01716] Fps is (10 sec: 3276.7, 60 sec: 3891.3, 300 sec: 3846.1). Total num frames: 3055616. Throughput: 0: 961.5. Samples: 764932. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-22 20:11:06,864][01716] Avg episode reward: [(0, '19.389')]
[2023-02-22 20:11:06,916][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000747_3059712.pth...
[2023-02-22 20:11:07,079][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000523_2142208.pth
[2023-02-22 20:11:07,095][12913] Saving new best policy, reward=19.389!
[2023-02-22 20:11:10,178][12927] Updated weights for policy 0, policy_version 750 (0.0017)
[2023-02-22 20:11:11,857][01716] Fps is (10 sec: 3686.2, 60 sec: 4028.0, 300 sec: 3846.1). Total num frames: 3080192. Throughput: 0: 966.8. Samples: 767466. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:11:11,860][01716] Avg episode reward: [(0, '20.659')]
[2023-02-22 20:11:11,866][12913] Saving new best policy, reward=20.659!
[2023-02-22 20:11:16,857][01716] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3100672. Throughput: 0: 1015.7. Samples: 774708. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:11:16,864][01716] Avg episode reward: [(0, '21.728')]
[2023-02-22 20:11:16,876][12913] Saving new best policy, reward=21.728!
[2023-02-22 20:11:18,971][12927] Updated weights for policy 0, policy_version 760 (0.0022)
[2023-02-22 20:11:21,857][01716] Fps is (10 sec: 4096.2, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3121152. Throughput: 0: 987.4. Samples: 780692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:11:21,866][01716] Avg episode reward: [(0, '20.377')]
[2023-02-22 20:11:26,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3133440. Throughput: 0: 954.4. Samples: 782766. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-22 20:11:26,861][01716] Avg episode reward: [(0, '20.165')]
[2023-02-22 20:11:31,024][12927] Updated weights for policy 0, policy_version 770 (0.0020)
[2023-02-22 20:11:31,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3153920. Throughput: 0: 972.8. Samples: 788174. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:11:31,859][01716] Avg episode reward: [(0, '19.948')]
[2023-02-22 20:11:36,857][01716] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3178496. Throughput: 0: 1006.4. Samples: 795448. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:11:36,860][01716] Avg episode reward: [(0, '19.996')]
[2023-02-22 20:11:40,458][12927] Updated weights for policy 0, policy_version 780 (0.0013)
[2023-02-22 20:11:41,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3194880. Throughput: 0: 994.3. Samples: 798586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:11:41,862][01716] Avg episode reward: [(0, '20.948')]
[2023-02-22 20:11:46,857][01716] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3211264. Throughput: 0: 944.7. Samples: 802940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:11:46,865][01716] Avg episode reward: [(0, '21.494')]
[2023-02-22 20:11:51,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3231744. Throughput: 0: 976.0. Samples: 808852. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:11:51,859][01716] Avg episode reward: [(0, '21.399')]
[2023-02-22 20:11:51,986][12927] Updated weights for policy 0, policy_version 790 (0.0018)
[2023-02-22 20:11:56,857][01716] Fps is (10 sec: 4505.9, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3256320. Throughput: 0: 999.1. Samples: 812424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-22 20:11:56,864][01716] Avg episode reward: [(0, '19.957')]
[2023-02-22 20:12:01,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3272704. Throughput: 0: 978.2. Samples: 818728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:12:01,864][01716] Avg episode reward: [(0, '20.374')]
[2023-02-22 20:12:01,893][12927] Updated weights for policy 0, policy_version 800 (0.0016)
[2023-02-22 20:12:06,858][01716] Fps is (10 sec: 3276.4, 60 sec: 3891.1, 300 sec: 3832.2). Total num frames: 3289088. Throughput: 0: 941.1. Samples: 823042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:12:06,862][01716] Avg episode reward: [(0, '19.927')]
[2023-02-22 20:12:11,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3818.3). Total num frames: 3309568. Throughput: 0: 955.3. Samples: 825754. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:12:11,863][01716] Avg episode reward: [(0, '20.023')]
[2023-02-22 20:12:13,163][12927] Updated weights for policy 0, policy_version 810 (0.0023)
[2023-02-22 20:12:16,857][01716] Fps is (10 sec: 4506.1, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3334144. Throughput: 0: 993.1. Samples: 832864. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:12:16,865][01716] Avg episode reward: [(0, '20.168')]
[2023-02-22 20:12:21,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3350528. Throughput: 0: 957.3. Samples: 838526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:12:21,867][01716] Avg episode reward: [(0, '20.196')]
[2023-02-22 20:12:24,001][12927] Updated weights for policy 0, policy_version 820 (0.0018)
[2023-02-22 20:12:26,857][01716] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3366912. Throughput: 0: 936.0. Samples: 840704. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-22 20:12:26,863][01716] Avg episode reward: [(0, '19.176')]
[2023-02-22 20:12:31,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3387392. Throughput: 0: 971.1. Samples: 846640. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:12:31,866][01716] Avg episode reward: [(0, '19.074')]
[2023-02-22 20:12:33,646][12927] Updated weights for policy 0, policy_version 830 (0.0030)
[2023-02-22 20:12:36,857][01716] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3411968. Throughput: 0: 1001.6. Samples: 853922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:12:36,865][01716] Avg episode reward: [(0, '19.068')]
[2023-02-22 20:12:41,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3428352. Throughput: 0: 985.0. Samples: 856750. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:12:41,859][01716] Avg episode reward: [(0, '19.297')]
[2023-02-22 20:12:45,557][12927] Updated weights for policy 0, policy_version 840 (0.0042)
[2023-02-22 20:12:46,857][01716] Fps is (10 sec: 2867.2, 60 sec: 3823.0, 300 sec: 3818.3). Total num frames: 3440640. Throughput: 0: 930.2. Samples: 860586. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:12:46,860][01716] Avg episode reward: [(0, '18.783')]
[2023-02-22 20:12:51,857][01716] Fps is (10 sec: 2457.6, 60 sec: 3686.4, 300 sec: 3776.7). Total num frames: 3452928. Throughput: 0: 917.6. Samples: 864332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:12:51,865][01716] Avg episode reward: [(0, '19.141')]
[2023-02-22 20:12:56,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 3477504. Throughput: 0: 917.3. Samples: 867032. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:12:56,864][01716] Avg episode reward: [(0, '19.310')]
[2023-02-22 20:12:57,738][12927] Updated weights for policy 0, policy_version 850 (0.0016)
[2023-02-22 20:13:01,863][01716] Fps is (10 sec: 4502.7, 60 sec: 3754.3, 300 sec: 3804.3). Total num frames: 3497984. Throughput: 0: 917.7. Samples: 874168. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:13:01,870][01716] Avg episode reward: [(0, '19.233')]
[2023-02-22 20:13:06,857][01716] Fps is (10 sec: 3686.2, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3514368. Throughput: 0: 897.8. Samples: 878926. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-22 20:13:06,861][01716] Avg episode reward: [(0, '18.773')]
[2023-02-22 20:13:06,879][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000858_3514368.pth...
[2023-02-22 20:13:07,054][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000632_2588672.pth
[2023-02-22 20:13:09,421][12927] Updated weights for policy 0, policy_version 860 (0.0020)
[2023-02-22 20:13:11,857][01716] Fps is (10 sec: 3278.9, 60 sec: 3686.4, 300 sec: 3776.7). Total num frames: 3530752. Throughput: 0: 899.0. Samples: 881158. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:13:11,859][01716] Avg episode reward: [(0, '19.333')]
[2023-02-22 20:13:16,857][01716] Fps is (10 sec: 4096.2, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 3555328. Throughput: 0: 916.9. Samples: 887900. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:13:16,859][01716] Avg episode reward: [(0, '17.825')]
[2023-02-22 20:13:18,477][12927] Updated weights for policy 0, policy_version 870 (0.0012)
[2023-02-22 20:13:21,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3575808. Throughput: 0: 910.8. Samples: 894906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:13:21,861][01716] Avg episode reward: [(0, '18.711')]
[2023-02-22 20:13:26,861][01716] Fps is (10 sec: 3684.7, 60 sec: 3754.4, 300 sec: 3832.1). Total num frames: 3592192. Throughput: 0: 898.5. Samples: 897186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:13:26,867][01716] Avg episode reward: [(0, '19.718')]
[2023-02-22 20:13:30,319][12927] Updated weights for policy 0, policy_version 880 (0.0027)
[2023-02-22 20:13:31,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 3608576. Throughput: 0: 916.8. Samples: 901842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:13:31,859][01716] Avg episode reward: [(0, '19.592')]
[2023-02-22 20:13:36,857][01716] Fps is (10 sec: 4097.9, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 3633152. Throughput: 0: 996.6. Samples: 909178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:13:36,868][01716] Avg episode reward: [(0, '20.419')]
[2023-02-22 20:13:38,719][12927] Updated weights for policy 0, policy_version 890 (0.0020)
[2023-02-22 20:13:41,857][01716] Fps is (10 sec: 4915.2, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 3657728. Throughput: 0: 1018.5. Samples: 912864. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:13:41,861][01716] Avg episode reward: [(0, '20.015')]
[2023-02-22 20:13:46,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3670016. Throughput: 0: 974.2. Samples: 918000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:13:46,859][01716] Avg episode reward: [(0, '19.929')]
[2023-02-22 20:13:50,578][12927] Updated weights for policy 0, policy_version 900 (0.0014)
[2023-02-22 20:13:51,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3690496. Throughput: 0: 984.2. Samples: 923216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:13:51,863][01716] Avg episode reward: [(0, '20.091')]
[2023-02-22 20:13:56,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3715072. Throughput: 0: 1015.8. Samples: 926870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:13:56,862][01716] Avg episode reward: [(0, '20.254')]
[2023-02-22 20:13:59,092][12927] Updated weights for policy 0, policy_version 910 (0.0019)
[2023-02-22 20:14:01,859][01716] Fps is (10 sec: 4504.4, 60 sec: 3959.7, 300 sec: 3887.7). Total num frames: 3735552. Throughput: 0: 1022.9. Samples: 933932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:14:01,863][01716] Avg episode reward: [(0, '20.251')]
[2023-02-22 20:14:06,859][01716] Fps is (10 sec: 3685.7, 60 sec: 3959.4, 300 sec: 3887.7). Total num frames: 3751936. Throughput: 0: 964.2. Samples: 938298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:14:06,861][01716] Avg episode reward: [(0, '20.457')]
[2023-02-22 20:14:11,414][12927] Updated weights for policy 0, policy_version 920 (0.0027)
[2023-02-22 20:14:11,857][01716] Fps is (10 sec: 3277.7, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3768320. Throughput: 0: 961.7. Samples: 940456. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:14:11,859][01716] Avg episode reward: [(0, '20.969')]
[2023-02-22 20:14:16,857][01716] Fps is (10 sec: 4096.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3792896. Throughput: 0: 1014.0. Samples: 947474. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-22 20:14:16,863][01716] Avg episode reward: [(0, '20.675')]
[2023-02-22 20:14:20,415][12927] Updated weights for policy 0, policy_version 930 (0.0018)
[2023-02-22 20:14:21,863][01716] Fps is (10 sec: 4093.3, 60 sec: 3890.8, 300 sec: 3873.8). Total num frames: 3809280. Throughput: 0: 988.7. Samples: 953674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:14:21,866][01716] Avg episode reward: [(0, '20.193')]
[2023-02-22 20:14:26,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3891.5, 300 sec: 3873.8). Total num frames: 3825664. Throughput: 0: 955.2. Samples: 955848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:14:26,860][01716] Avg episode reward: [(0, '21.470')]
[2023-02-22 20:14:31,857][01716] Fps is (10 sec: 3688.8, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3846144. Throughput: 0: 947.3. Samples: 960628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:14:31,864][01716] Avg episode reward: [(0, '21.168')]
[2023-02-22 20:14:32,662][12927] Updated weights for policy 0, policy_version 940 (0.0036)
[2023-02-22 20:14:36,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3866624. Throughput: 0: 980.8. Samples: 967352. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-22 20:14:36,862][01716] Avg episode reward: [(0, '20.642')]
[2023-02-22 20:14:41,858][01716] Fps is (10 sec: 4095.3, 60 sec: 3822.8, 300 sec: 3873.8). Total num frames: 3887104. Throughput: 0: 975.7. Samples: 970776. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:14:41,860][01716] Avg episode reward: [(0, '20.629')]
[2023-02-22 20:14:43,124][12927] Updated weights for policy 0, policy_version 950 (0.0017)
[2023-02-22 20:14:46,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3899392. Throughput: 0: 915.7. Samples: 975134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:14:46,868][01716] Avg episode reward: [(0, '21.409')]
[2023-02-22 20:14:51,857][01716] Fps is (10 sec: 3277.3, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3919872. Throughput: 0: 940.1. Samples: 980600. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:14:51,864][01716] Avg episode reward: [(0, '20.421')]
[2023-02-22 20:14:53,975][12927] Updated weights for policy 0, policy_version 960 (0.0015)
[2023-02-22 20:14:56,857][01716] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3944448. Throughput: 0: 969.5. Samples: 984082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:14:56,859][01716] Avg episode reward: [(0, '20.327')]
[2023-02-22 20:15:01,857][01716] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3860.0). Total num frames: 3960832. Throughput: 0: 959.2. Samples: 990636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-22 20:15:01,860][01716] Avg episode reward: [(0, '20.431')]
[2023-02-22 20:15:04,671][12927] Updated weights for policy 0, policy_version 970 (0.0015)
[2023-02-22 20:15:06,857][01716] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3860.0). Total num frames: 3977216. Throughput: 0: 919.7. Samples: 995054. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-22 20:15:06,863][01716] Avg episode reward: [(0, '21.731')]
[2023-02-22 20:15:06,873][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000971_3977216.pth...
[2023-02-22 20:15:07,006][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000747_3059712.pth
[2023-02-22 20:15:07,021][12913] Saving new best policy, reward=21.731!
[2023-02-22 20:15:11,857][01716] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3997696. Throughput: 0: 924.8. Samples: 997462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-22 20:15:11,864][01716] Avg episode reward: [(0, '21.715')]
[2023-02-22 20:15:13,321][12913] Stopping Batcher_0...
[2023-02-22 20:15:13,321][12913] Loop batcher_evt_loop terminating...
[2023-02-22 20:15:13,322][01716] Component Batcher_0 stopped!
[2023-02-22 20:15:13,326][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-22 20:15:13,370][12927] Weights refcount: 2 0
[2023-02-22 20:15:13,378][12927] Stopping InferenceWorker_p0-w0...
[2023-02-22 20:15:13,379][01716] Component InferenceWorker_p0-w0 stopped!
[2023-02-22 20:15:13,385][12927] Loop inference_proc0-0_evt_loop terminating...
[2023-02-22 20:15:13,392][12934] Stopping RolloutWorker_w6...
[2023-02-22 20:15:13,392][01716] Component RolloutWorker_w6 stopped!
[2023-02-22 20:15:13,401][12932] Stopping RolloutWorker_w4...
[2023-02-22 20:15:13,401][01716] Component RolloutWorker_w7 stopped!
[2023-02-22 20:15:13,405][01716] Component RolloutWorker_w4 stopped!
[2023-02-22 20:15:13,393][12934] Loop rollout_proc6_evt_loop terminating...
[2023-02-22 20:15:13,413][12933] Stopping RolloutWorker_w5...
[2023-02-22 20:15:13,414][12933] Loop rollout_proc5_evt_loop terminating...
[2023-02-22 20:15:13,402][12932] Loop rollout_proc4_evt_loop terminating...
[2023-02-22 20:15:13,412][01716] Component RolloutWorker_w5 stopped!
[2023-02-22 20:15:13,427][01716] Component RolloutWorker_w0 stopped!
[2023-02-22 20:15:13,418][12935] Stopping RolloutWorker_w7...
[2023-02-22 20:15:13,433][12935] Loop rollout_proc7_evt_loop terminating...
[2023-02-22 20:15:13,427][12928] Stopping RolloutWorker_w0...
[2023-02-22 20:15:13,436][12928] Loop rollout_proc0_evt_loop terminating...
[2023-02-22 20:15:13,439][01716] Component RolloutWorker_w3 stopped!
[2023-02-22 20:15:13,442][12931] Stopping RolloutWorker_w3...
[2023-02-22 20:15:13,443][12931] Loop rollout_proc3_evt_loop terminating...
[2023-02-22 20:15:13,450][12930] Stopping RolloutWorker_w2...
[2023-02-22 20:15:13,450][01716] Component RolloutWorker_w2 stopped!
[2023-02-22 20:15:13,451][12930] Loop rollout_proc2_evt_loop terminating...
[2023-02-22 20:15:13,460][01716] Component RolloutWorker_w1 stopped!
[2023-02-22 20:15:13,464][12929] Stopping RolloutWorker_w1...
[2023-02-22 20:15:13,464][12929] Loop rollout_proc1_evt_loop terminating...
[2023-02-22 20:15:13,508][12913] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000858_3514368.pth
[2023-02-22 20:15:13,535][12913] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-22 20:15:13,784][01716] Component LearnerWorker_p0 stopped!
[2023-02-22 20:15:13,787][01716] Waiting for process learner_proc0 to stop...
[2023-02-22 20:15:13,793][12913] Stopping LearnerWorker_p0...
[2023-02-22 20:15:13,796][12913] Loop learner_proc0_evt_loop terminating...
[2023-02-22 20:15:15,566][01716] Waiting for process inference_proc0-0 to join...
[2023-02-22 20:15:15,921][01716] Waiting for process rollout_proc0 to join...
[2023-02-22 20:15:15,923][01716] Waiting for process rollout_proc1 to join...
[2023-02-22 20:15:16,284][01716] Waiting for process rollout_proc2 to join...
[2023-02-22 20:15:16,287][01716] Waiting for process rollout_proc3 to join...
[2023-02-22 20:15:16,288][01716] Waiting for process rollout_proc4 to join...
[2023-02-22 20:15:16,289][01716] Waiting for process rollout_proc5 to join...
[2023-02-22 20:15:16,294][01716] Waiting for process rollout_proc6 to join...
[2023-02-22 20:15:16,297][01716] Waiting for process rollout_proc7 to join...
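
Shutdown runs in two phases: every component's event loop is first told to stop (the interleaved "Stopping ..." / "Loop ... terminating" lines come from separate processes flushing their own logs), and only then does the runner join the worker processes one by one. A generic stop-then-join sketch, not Sample Factory's actual runner code:

def stop_all(processes, stop_events, timeout=10.0):
    """Signal every worker loop to exit, then join the processes (sketch).

    processes   : multiprocessing.Process workers
    stop_events : one multiprocessing.Event per worker, polled in its loop
    """
    for ev in stop_events:   # each loop logs 'Stopping ...' and terminates
        ev.set()
    for p in processes:      # 'Waiting for process ... to join'
        p.join(timeout)
        if p.is_alive():
            p.terminate()    # last resort for a hung worker
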
[2023-02-22 20:15:16,299][01716] Batcher 0 profile tree view:
batching: 25.4083, releasing_batches: 0.0234
[2023-02-22 20:15:16,300][01716] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0023
  wait_policy_total: 518.0777
update_model: 7.4290
  weight_update: 0.0040
one_step: 0.0088
  handle_policy_step: 493.7652
    deserialize: 14.5493, stack: 2.7287, obs_to_device_normalize: 111.6272, forward: 234.4924, send_messages: 25.8665
    prepare_outputs: 79.9748
      to_cpu: 50.5944
[2023-02-22 20:15:16,301][01716] Learner 0 profile tree view:
misc: 0.0059, prepare_batch: 16.5361
train: 74.9717
  epoch_init: 0.0054, minibatch_init: 0.0063, losses_postprocess: 0.6532, kl_divergence: 0.5577, after_optimizer: 32.8695
  calculate_losses: 26.6181
    losses_init: 0.0033, forward_head: 1.8122, bptt_initial: 17.6187, tail: 1.0513, advantages_returns: 0.2655, losses: 3.4581
    bptt: 2.0981
      bptt_forward_core: 2.0337
  update: 13.6653
    clip: 1.3475
[2023-02-22 20:15:16,302][01716] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.3625, enqueue_policy_requests: 135.9894, env_step: 801.8848, overhead: 19.2748, complete_rollouts: 6.8519
save_policy_outputs: 18.9574
  split_output_tensors: 9.1797
[2023-02-22 20:15:16,304][01716] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.3424, enqueue_policy_requests: 139.1966, env_step: 797.9545, overhead: 19.7604, complete_rollouts: 6.7238
save_policy_outputs: 18.8366
  split_output_tensors: 9.2421
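
The profile tree views are nested wall-clock timers: each named scope accumulates the total time spent inside it, and the indentation mirrors the nesting (to_cpu inside prepare_outputs inside handle_policy_step, and so on). A small sketch of such a hierarchical timer with a context-manager API; illustrative only, not the profiler Sample Factory ships:

import time
from contextlib import contextmanager

class TimingTree:
    """Accumulate nested wall-clock timings and print them as an indented tree."""

    def __init__(self):
        self.totals = {}  # path tuple, e.g. ('train', 'calculate_losses') -> seconds
        self.stack = []

    @contextmanager
    def timeit(self, name):
        self.stack.append(name)
        path, start = tuple(self.stack), time.perf_counter()
        try:
            yield
        finally:
            self.totals[path] = self.totals.get(path, 0.0) + time.perf_counter() - start
            self.stack.pop()

    def report(self):
        for path in sorted(self.totals):  # parents sort before their children
            print("  " * (len(path) - 1) + f"{path[-1]}: {self.totals[path]:.4f}")

Typical usage: wrap the training step in t.timeit("train"), inner phases in t.timeit("calculate_losses") and so on, then call t.report() at shutdown to get trees like the ones above.
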
[2023-02-22 20:15:16,305][01716] Loop Runner_EvtLoop terminating...
[2023-02-22 20:15:16,307][01716] Runner profile tree view:
main_loop: 1084.7474
[2023-02-22 20:15:16,308][01716] Collected {0: 4005888}, FPS: 3692.9
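
The closing figure is consistent with a plain frames-over-wall-time computation; dividing the collected frame count by the main_loop time from the Runner profile reproduces the reported FPS:

total_frames = 4_005_888       # from "Collected {0: 4005888}"
main_loop_seconds = 1084.7474  # from the Runner profile tree view
print(total_frames / main_loop_seconds)  # ~3692.9, matching the log
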
[2023-02-22 20:15:40,996][01716] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-22 20:15:41,004][01716] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-22 20:15:41,006][01716] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-22 20:15:41,008][01716] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-22 20:15:41,010][01716] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-22 20:15:41,014][01716] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-22 20:15:41,016][01716] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-22 20:15:41,019][01716] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-22 20:15:41,020][01716] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-22 20:15:41,021][01716] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-22 20:15:41,022][01716] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-22 20:15:41,023][01716] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-22 20:15:41,038][01716] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-22 20:15:41,039][01716] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-22 20:15:41,041][01716] Using frameskip 1 and render_action_repeat=4 for evaluation
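
For evaluation, the saved config.json is loaded first and command-line values are layered on top, with one log line per overridden key and one per argument that did not exist when the config was saved. A minimal sketch of that merge (function name hypothetical):

import json

def load_config_with_overrides(path, cli_args):
    """Merge a saved experiment config with CLI overrides (illustrative sketch)."""
    with open(path) as f:
        cfg = json.load(f)
    for key, value in cli_args.items():
        if key not in cfg:
            print(f"Adding new argument '{key}'={value!r} that is not in the saved config file!")
        elif cfg[key] != value:
            print(f"Overriding arg '{key}' with value {value!r} passed from command line")
        cfg[key] = value
    return cfg
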
[2023-02-22 20:15:41,073][01716] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 20:15:41,077][01716] RunningMeanStd input shape: (3, 72, 128)
[2023-02-22 20:15:41,081][01716] RunningMeanStd input shape: (1,)
[2023-02-22 20:15:41,102][01716] ConvEncoder: input_channels=3
[2023-02-22 20:15:41,711][01716] Conv encoder output size: 512
[2023-02-22 20:15:41,713][01716] Policy head output size: 512
[2023-02-22 20:15:44,207][01716] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
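
Restoring the policy for evaluation is a plain torch.load of the newest .pth file. The sketch below assumes the trainer stored the network weights under a "model" key in the checkpoint dict, which matches the save logic sketched earlier but is an assumption about the real file layout:

import torch

def load_policy(model, checkpoint_path, device="cpu"):
    """Load actor-critic weights from a training checkpoint (sketch)."""
    ckpt = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(ckpt["model"])  # assumed checkpoint layout
    model.eval()
    return model
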
[2023-02-22 20:15:45,997][01716] Num frames 100...
[2023-02-22 20:15:46,174][01716] Num frames 200...
[2023-02-22 20:15:46,337][01716] Num frames 300...
[2023-02-22 20:15:46,504][01716] Num frames 400...
[2023-02-22 20:15:46,664][01716] Num frames 500...
[2023-02-22 20:15:46,827][01716] Num frames 600...
[2023-02-22 20:15:46,986][01716] Num frames 700...
[2023-02-22 20:15:47,105][01716] Avg episode rewards: #0: 16.360, true rewards: #0: 7.360
[2023-02-22 20:15:47,107][01716] Avg episode reward: 16.360, avg true_objective: 7.360
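
During evaluation, "Num frames" ticks every 100 environment frames, and at each episode end the script logs running means over all episodes finished so far, for both the shaped episode reward and the unshaped true objective. A sketch of that bookkeeping, assuming the classic Gym step API and a hypothetical info["true_objective"] field:

def evaluate(env, policy, max_episodes=10, log_every=100):
    """Run episodes and print running-average stats like the lines above (sketch)."""
    rewards, true_objectives, frames = [], [], 0
    for _ in range(max_episodes):
        obs, done, ep_reward = env.reset(), False, 0.0
        while not done:
            obs, r, done, info = env.step(policy(obs))
            ep_reward += r
            frames += 1
            if frames % log_every == 0:
                print(f"Num frames {frames}...")
        rewards.append(ep_reward)
        true_objectives.append(info.get("true_objective", ep_reward))
        print(f"Avg episode reward: {sum(rewards) / len(rewards):.3f}, "
              f"avg true_objective: {sum(true_objectives) / len(true_objectives):.3f}")
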
[2023-02-22 20:15:47,210][01716] Num frames 800...
[2023-02-22 20:15:47,368][01716] Num frames 900...
[2023-02-22 20:15:47,482][01716] Num frames 1000...
[2023-02-22 20:15:47,591][01716] Num frames 1100...
[2023-02-22 20:15:47,700][01716] Num frames 1200...
[2023-02-22 20:15:47,824][01716] Num frames 1300...
[2023-02-22 20:15:47,936][01716] Num frames 1400...
[2023-02-22 20:15:48,036][01716] Avg episode rewards: #0: 13.700, true rewards: #0: 7.200
[2023-02-22 20:15:48,038][01716] Avg episode reward: 13.700, avg true_objective: 7.200
[2023-02-22 20:15:48,106][01716] Num frames 1500...
[2023-02-22 20:15:48,217][01716] Num frames 1600...
[2023-02-22 20:15:48,326][01716] Num frames 1700...
[2023-02-22 20:15:48,441][01716] Num frames 1800...
[2023-02-22 20:15:48,563][01716] Avg episode rewards: #0: 11.533, true rewards: #0: 6.200
[2023-02-22 20:15:48,566][01716] Avg episode reward: 11.533, avg true_objective: 6.200
[2023-02-22 20:15:48,614][01716] Num frames 1900...
[2023-02-22 20:15:48,726][01716] Num frames 2000...
[2023-02-22 20:15:48,846][01716] Num frames 2100...
[2023-02-22 20:15:48,957][01716] Num frames 2200...
[2023-02-22 20:15:49,070][01716] Num frames 2300...
[2023-02-22 20:15:49,183][01716] Num frames 2400...
[2023-02-22 20:15:49,297][01716] Num frames 2500...
[2023-02-22 20:15:49,417][01716] Num frames 2600...
[2023-02-22 20:15:49,530][01716] Num frames 2700...
[2023-02-22 20:15:49,682][01716] Avg episode rewards: #0: 13.720, true rewards: #0: 6.970
[2023-02-22 20:15:49,684][01716] Avg episode reward: 13.720, avg true_objective: 6.970
[2023-02-22 20:15:49,703][01716] Num frames 2800...
[2023-02-22 20:15:49,821][01716] Num frames 2900...
[2023-02-22 20:15:49,931][01716] Num frames 3000...
[2023-02-22 20:15:50,044][01716] Num frames 3100...
[2023-02-22 20:15:50,162][01716] Num frames 3200...
[2023-02-22 20:15:50,270][01716] Num frames 3300...
[2023-02-22 20:15:50,378][01716] Num frames 3400...
[2023-02-22 20:15:50,489][01716] Num frames 3500...
[2023-02-22 20:15:50,603][01716] Num frames 3600...
[2023-02-22 20:15:50,723][01716] Num frames 3700...
[2023-02-22 20:15:50,775][01716] Avg episode rewards: #0: 14.800, true rewards: #0: 7.400
[2023-02-22 20:15:50,778][01716] Avg episode reward: 14.800, avg true_objective: 7.400
[2023-02-22 20:15:50,890][01716] Num frames 3800...
[2023-02-22 20:15:51,003][01716] Num frames 3900...
[2023-02-22 20:15:51,111][01716] Num frames 4000...
[2023-02-22 20:15:51,256][01716] Avg episode rewards: #0: 12.973, true rewards: #0: 6.807
[2023-02-22 20:15:51,258][01716] Avg episode reward: 12.973, avg true_objective: 6.807
[2023-02-22 20:15:51,282][01716] Num frames 4100...
[2023-02-22 20:15:51,404][01716] Num frames 4200...
[2023-02-22 20:15:51,517][01716] Num frames 4300...
[2023-02-22 20:15:51,626][01716] Num frames 4400...
[2023-02-22 20:15:51,738][01716] Num frames 4500...
[2023-02-22 20:15:51,905][01716] Avg episode rewards: #0: 12.280, true rewards: #0: 6.566
[2023-02-22 20:15:51,908][01716] Avg episode reward: 12.280, avg true_objective: 6.566
[2023-02-22 20:15:51,917][01716] Num frames 4600...
[2023-02-22 20:15:52,028][01716] Num frames 4700...
[2023-02-22 20:15:52,154][01716] Num frames 4800...
[2023-02-22 20:15:52,268][01716] Num frames 4900...
[2023-02-22 20:15:52,378][01716] Num frames 5000...
[2023-02-22 20:15:52,489][01716] Num frames 5100...
[2023-02-22 20:15:52,599][01716] Num frames 5200...
[2023-02-22 20:15:52,660][01716] Avg episode rewards: #0: 12.005, true rewards: #0: 6.505
[2023-02-22 20:15:52,662][01716] Avg episode reward: 12.005, avg true_objective: 6.505
[2023-02-22 20:15:52,771][01716] Num frames 5300...
[2023-02-22 20:15:52,887][01716] Num frames 5400...
[2023-02-22 20:15:52,993][01716] Num frames 5500...
[2023-02-22 20:15:53,106][01716] Num frames 5600...
[2023-02-22 20:15:53,218][01716] Num frames 5700...
[2023-02-22 20:15:53,330][01716] Num frames 5800...
[2023-02-22 20:15:53,469][01716] Avg episode rewards: #0: 12.085, true rewards: #0: 6.529
[2023-02-22 20:15:53,472][01716] Avg episode reward: 12.085, avg true_objective: 6.529
[2023-02-22 20:15:53,501][01716] Num frames 5900...
[2023-02-22 20:15:53,612][01716] Num frames 6000...
[2023-02-22 20:15:53,722][01716] Num frames 6100...
[2023-02-22 20:15:53,846][01716] Num frames 6200...
[2023-02-22 20:15:53,964][01716] Num frames 6300...
[2023-02-22 20:15:54,075][01716] Num frames 6400...
[2023-02-22 20:15:54,185][01716] Num frames 6500...
[2023-02-22 20:15:54,300][01716] Num frames 6600...
[2023-02-22 20:15:54,429][01716] Avg episode rewards: #0: 12.468, true rewards: #0: 6.668
[2023-02-22 20:15:54,430][01716] Avg episode reward: 12.468, avg true_objective: 6.668
[2023-02-22 20:16:33,419][01716] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
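
The frames rendered during these episodes are finally written out as replay.mp4. A hedged sketch of the video dump using imageio; the codec defaults and the 35 fps figure (VizDoom's native tick rate) are assumptions:

import imageio

def save_replay(frames, path="replay.mp4", fps=35):
    """Write collected RGB frames to an mp4 file (illustrative sketch)."""
    with imageio.get_writer(path, fps=fps) as writer:
        for frame in frames:
            writer.append_data(frame)
    print(f"Replay video saved to {path}!")
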
[2023-02-22 20:22:41,835][01716] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-22 20:22:41,837][01716] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-22 20:22:41,839][01716] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-22 20:22:41,841][01716] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-22 20:22:41,843][01716] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-22 20:22:41,845][01716] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-22 20:22:41,846][01716] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-22 20:22:41,847][01716] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-22 20:22:41,849][01716] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-22 20:22:41,850][01716] Adding new argument 'hf_repository'='RamonAnkersmit/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-22 20:22:41,851][01716] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-22 20:22:41,852][01716] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-22 20:22:41,853][01716] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-22 20:22:41,854][01716] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-22 20:22:41,856][01716] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-22 20:22:41,884][01716] RunningMeanStd input shape: (3, 72, 128)
[2023-02-22 20:22:41,887][01716] RunningMeanStd input shape: (1,)
[2023-02-22 20:22:41,902][01716] ConvEncoder: input_channels=3
[2023-02-22 20:22:41,938][01716] Conv encoder output size: 512
[2023-02-22 20:22:41,940][01716] Policy head output size: 512
[2023-02-22 20:22:41,961][01716] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-22 20:22:42,393][01716] Num frames 100...
[2023-02-22 20:22:42,518][01716] Num frames 200...
[2023-02-22 20:22:42,639][01716] Num frames 300...
[2023-02-22 20:22:42,771][01716] Num frames 400...
[2023-02-22 20:22:42,886][01716] Num frames 500...
[2023-02-22 20:22:43,008][01716] Num frames 600...
[2023-02-22 20:22:43,126][01716] Num frames 700...
[2023-02-22 20:22:43,247][01716] Num frames 800...
[2023-02-22 20:22:43,374][01716] Avg episode rewards: #0: 14.640, true rewards: #0: 8.640
[2023-02-22 20:22:43,376][01716] Avg episode reward: 14.640, avg true_objective: 8.640
[2023-02-22 20:22:43,429][01716] Num frames 900...
[2023-02-22 20:22:43,540][01716] Num frames 1000...
[2023-02-22 20:22:43,658][01716] Num frames 1100...
[2023-02-22 20:22:43,780][01716] Num frames 1200...
[2023-02-22 20:22:43,890][01716] Avg episode rewards: #0: 9.240, true rewards: #0: 6.240
[2023-02-22 20:22:43,892][01716] Avg episode reward: 9.240, avg true_objective: 6.240
[2023-02-22 20:22:43,953][01716] Num frames 1300...
[2023-02-22 20:22:44,064][01716] Num frames 1400...
[2023-02-22 20:22:44,183][01716] Num frames 1500...
[2023-02-22 20:22:44,294][01716] Num frames 1600...
[2023-02-22 20:22:44,423][01716] Num frames 1700...
[2023-02-22 20:22:44,544][01716] Num frames 1800...
[2023-02-22 20:22:44,663][01716] Num frames 1900...
[2023-02-22 20:22:44,788][01716] Num frames 2000...
[2023-02-22 20:22:44,920][01716] Num frames 2100...
[2023-02-22 20:22:45,039][01716] Num frames 2200...
[2023-02-22 20:22:45,154][01716] Num frames 2300...
[2023-02-22 20:22:45,266][01716] Num frames 2400...
[2023-02-22 20:22:45,381][01716] Num frames 2500...
[2023-02-22 20:22:45,501][01716] Num frames 2600...
[2023-02-22 20:22:45,617][01716] Num frames 2700...
[2023-02-22 20:22:45,741][01716] Num frames 2800...
[2023-02-22 20:22:45,896][01716] Avg episode rewards: #0: 20.267, true rewards: #0: 9.600
[2023-02-22 20:22:45,898][01716] Avg episode reward: 20.267, avg true_objective: 9.600
[2023-02-22 20:22:45,926][01716] Num frames 2900...
[2023-02-22 20:22:46,043][01716] Num frames 3000...
[2023-02-22 20:22:46,156][01716] Num frames 3100...
[2023-02-22 20:22:46,277][01716] Num frames 3200...
[2023-02-22 20:22:46,388][01716] Num frames 3300...
[2023-02-22 20:22:46,505][01716] Num frames 3400...
[2023-02-22 20:22:46,619][01716] Num frames 3500...
[2023-02-22 20:22:46,741][01716] Num frames 3600...
[2023-02-22 20:22:46,818][01716] Avg episode rewards: #0: 19.040, true rewards: #0: 9.040
[2023-02-22 20:22:46,821][01716] Avg episode reward: 19.040, avg true_objective: 9.040
[2023-02-22 20:22:46,923][01716] Num frames 3700...
[2023-02-22 20:22:47,036][01716] Num frames 3800...
[2023-02-22 20:22:47,161][01716] Num frames 3900...
[2023-02-22 20:22:47,274][01716] Num frames 4000...
[2023-02-22 20:22:47,366][01716] Avg episode rewards: #0: 16.264, true rewards: #0: 8.064
[2023-02-22 20:22:47,368][01716] Avg episode reward: 16.264, avg true_objective: 8.064
[2023-02-22 20:22:47,452][01716] Num frames 4100...
[2023-02-22 20:22:47,566][01716] Num frames 4200...
[2023-02-22 20:22:47,685][01716] Num frames 4300...
[2023-02-22 20:22:47,800][01716] Num frames 4400...
[2023-02-22 20:22:47,915][01716] Num frames 4500...
[2023-02-22 20:22:48,025][01716] Num frames 4600...
[2023-02-22 20:22:48,143][01716] Num frames 4700...
[2023-02-22 20:22:48,256][01716] Num frames 4800...
[2023-02-22 20:22:48,380][01716] Num frames 4900...
[2023-02-22 20:22:48,505][01716] Avg episode rewards: #0: 17.100, true rewards: #0: 8.267
[2023-02-22 20:22:48,507][01716] Avg episode reward: 17.100, avg true_objective: 8.267
[2023-02-22 20:22:48,556][01716] Num frames 5000...
[2023-02-22 20:22:48,701][01716] Num frames 5100...
[2023-02-22 20:22:48,860][01716] Num frames 5200...
[2023-02-22 20:22:49,023][01716] Num frames 5300...
[2023-02-22 20:22:49,179][01716] Num frames 5400...
[2023-02-22 20:22:49,343][01716] Num frames 5500...
[2023-02-22 20:22:49,413][01716] Avg episode rewards: #0: 15.720, true rewards: #0: 7.863
[2023-02-22 20:22:49,417][01716] Avg episode reward: 15.720, avg true_objective: 7.863
[2023-02-22 20:22:49,572][01716] Num frames 5600...
[2023-02-22 20:22:49,730][01716] Num frames 5700...
[2023-02-22 20:22:49,889][01716] Num frames 5800...
[2023-02-22 20:22:50,049][01716] Num frames 5900...
[2023-02-22 20:22:50,222][01716] Num frames 6000...
[2023-02-22 20:22:50,386][01716] Num frames 6100...
[2023-02-22 20:22:50,554][01716] Num frames 6200...
[2023-02-22 20:22:50,716][01716] Num frames 6300...
[2023-02-22 20:22:50,880][01716] Num frames 6400...
[2023-02-22 20:22:51,054][01716] Num frames 6500...
[2023-02-22 20:22:51,222][01716] Num frames 6600...
[2023-02-22 20:22:51,369][01716] Avg episode rewards: #0: 16.570, true rewards: #0: 8.320
[2023-02-22 20:22:51,371][01716] Avg episode reward: 16.570, avg true_objective: 8.320
[2023-02-22 20:22:51,445][01716] Num frames 6700...
[2023-02-22 20:22:51,605][01716] Num frames 6800...
[2023-02-22 20:22:51,773][01716] Num frames 6900...
[2023-02-22 20:22:51,934][01716] Num frames 7000...
[2023-02-22 20:22:52,102][01716] Num frames 7100...
[2023-02-22 20:22:52,274][01716] Avg episode rewards: #0: 15.631, true rewards: #0: 7.964
[2023-02-22 20:22:52,276][01716] Avg episode reward: 15.631, avg true_objective: 7.964
[2023-02-22 20:22:52,316][01716] Num frames 7200...
[2023-02-22 20:22:52,437][01716] Num frames 7300...
[2023-02-22 20:22:52,550][01716] Num frames 7400...
[2023-02-22 20:22:52,663][01716] Num frames 7500...
[2023-02-22 20:22:52,776][01716] Num frames 7600...
[2023-02-22 20:22:52,900][01716] Num frames 7700...
[2023-02-22 20:22:53,041][01716] Avg episode rewards: #0: 14.976, true rewards: #0: 7.776
[2023-02-22 20:22:53,043][01716] Avg episode reward: 14.976, avg true_objective: 7.776
[2023-02-22 20:23:40,237][01716] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
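
With push_to_hub=True and hf_repository set, this second evaluation run ends with the experiment artifacts (config.json, the latest checkpoint, replay.mp4) landing in the named repo. A sketch using huggingface_hub; the enjoy script's actual upload path may differ:

from huggingface_hub import HfApi

def push_experiment(train_dir, repo_id):
    """Upload the experiment directory to the Hugging Face Hub (sketch)."""
    api = HfApi()
    api.create_repo(repo_id=repo_id, exist_ok=True)
    api.upload_folder(folder_path=train_dir, repo_id=repo_id)

push_experiment("/content/train_dir/default_experiment",
                "RamonAnkersmit/rl_course_vizdoom_health_gathering_supreme")
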