[2023-08-05 17:15:15,192][00231] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-08-05 17:15:15,196][00231] Rollout worker 0 uses device cpu
[2023-08-05 17:15:15,199][00231] Rollout worker 1 uses device cpu
[2023-08-05 17:15:15,201][00231] Rollout worker 2 uses device cpu
[2023-08-05 17:15:15,202][00231] Rollout worker 3 uses device cpu
[2023-08-05 17:15:15,203][00231] Rollout worker 4 uses device cpu
[2023-08-05 17:15:15,204][00231] Rollout worker 5 uses device cpu
[2023-08-05 17:15:15,204][00231] Rollout worker 6 uses device cpu
[2023-08-05 17:15:15,205][00231] Rollout worker 7 uses device cpu
[2023-08-05 17:15:15,342][00231] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-05 17:15:15,344][00231] InferenceWorker_p0-w0: min num requests: 2
[2023-08-05 17:15:15,376][00231] Starting all processes...
[2023-08-05 17:15:15,378][00231] Starting process learner_proc0
[2023-08-05 17:15:15,427][00231] Starting all processes...
[2023-08-05 17:15:15,436][00231] Starting process inference_proc0-0
[2023-08-05 17:15:15,436][00231] Starting process rollout_proc0
[2023-08-05 17:15:15,438][00231] Starting process rollout_proc1
[2023-08-05 17:15:15,438][00231] Starting process rollout_proc2
[2023-08-05 17:15:15,438][00231] Starting process rollout_proc3
[2023-08-05 17:15:15,438][00231] Starting process rollout_proc4
[2023-08-05 17:15:15,438][00231] Starting process rollout_proc5
[2023-08-05 17:15:15,438][00231] Starting process rollout_proc6
[2023-08-05 17:15:15,438][00231] Starting process rollout_proc7
[2023-08-05 17:15:31,224][17553] Worker 7 uses CPU cores [1]
[2023-08-05 17:15:31,621][17552] Worker 6 uses CPU cores [0]
[2023-08-05 17:15:31,644][17532] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-05 17:15:31,644][17532] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-08-05 17:15:31,659][17551] Worker 5 uses CPU cores [1]
[2023-08-05 17:15:31,663][17532] Num visible devices: 1
[2023-08-05 17:15:31,685][17547] Worker 1 uses CPU cores [1]
[2023-08-05 17:15:31,688][17532] Starting seed is not provided
[2023-08-05 17:15:31,689][17532] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-05 17:15:31,689][17532] Initializing actor-critic model on device cuda:0
[2023-08-05 17:15:31,690][17532] RunningMeanStd input shape: (3, 72, 128)
[2023-08-05 17:15:31,693][17532] RunningMeanStd input shape: (1,)
[2023-08-05 17:15:31,713][17549] Worker 3 uses CPU cores [1]
[2023-08-05 17:15:31,714][17545] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-05 17:15:31,714][17545] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-08-05 17:15:31,723][17550] Worker 4 uses CPU cores [0]
[2023-08-05 17:15:31,727][17532] ConvEncoder: input_channels=3
[2023-08-05 17:15:31,738][17546] Worker 0 uses CPU cores [0]
[2023-08-05 17:15:31,752][17545] Num visible devices: 1
[2023-08-05 17:15:31,775][17548] Worker 2 uses CPU cores [0]
[2023-08-05 17:15:31,982][17532] Conv encoder output size: 512
[2023-08-05 17:15:31,982][17532] Policy head output size: 512
[2023-08-05 17:15:32,028][17532] Created Actor Critic model with architecture:
[2023-08-05 17:15:32,029][17532] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=4, bias=True)
  )
)
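As a cross-check on the shapes printed above ((3, 72, 128) observations in, 512-dim encoder output), the flattened size fed into the final `Linear` of `mlp_layers` can be computed by hand. A minimal sketch in plain Python; the filter spec is an assumption (Sample Factory's default `convnet_simple`: 32 8x8 stride 4, 64 4x4 stride 2, 128 3x3 stride 2), since the log prints module names but not hyperparameters:

```python
# Hypothetical sketch: size arithmetic for a conv stack on (3, 72, 128)
# observations. CONVNET_SIMPLE below is an assumed default, not read
# from this log.

def conv_out(size, kernel, stride):
    """Output length of a valid (no padding) convolution along one axis."""
    return (size - kernel) // stride + 1

def flattened_conv_size(h, w, filters):
    """Flattened feature count after applying (out_ch, kernel, stride) layers."""
    channels = 0
    for out_ch, kernel, stride in filters:
        h = conv_out(h, kernel, stride)
        w = conv_out(w, kernel, stride)
        channels = out_ch
    return channels * h * w

CONVNET_SIMPLE = [(32, 8, 4), (64, 4, 2), (128, 3, 2)]  # assumed defaults

flat = flattened_conv_size(72, 128, CONVNET_SIMPLE)
print(flat)  # 2304 with the assumed filters; the mlp layer maps this to 512
```

With these assumed filters the spatial map shrinks 72x128 -> 17x31 -> 7x14 -> 3x6, giving 128*3*6 = 2304 features before the 512-unit linear layer.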
[2023-08-05 17:15:35,339][00231] Heartbeat connected on Batcher_0
[2023-08-05 17:15:35,346][00231] Heartbeat connected on InferenceWorker_p0-w0
[2023-08-05 17:15:35,353][00231] Heartbeat connected on RolloutWorker_w0
[2023-08-05 17:15:35,355][00231] Heartbeat connected on RolloutWorker_w1
[2023-08-05 17:15:35,359][00231] Heartbeat connected on RolloutWorker_w2
[2023-08-05 17:15:35,362][00231] Heartbeat connected on RolloutWorker_w3
[2023-08-05 17:15:35,365][00231] Heartbeat connected on RolloutWorker_w4
[2023-08-05 17:15:35,370][00231] Heartbeat connected on RolloutWorker_w5
[2023-08-05 17:15:35,375][00231] Heartbeat connected on RolloutWorker_w6
[2023-08-05 17:15:35,380][00231] Heartbeat connected on RolloutWorker_w7
[2023-08-05 17:15:39,984][17532] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-08-05 17:15:39,985][17532] No checkpoints found
[2023-08-05 17:15:39,986][17532] Did not load from checkpoint, starting from scratch!
[2023-08-05 17:15:39,986][17532] Initialized policy 0 weights for model version 0
[2023-08-05 17:15:39,993][17532] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-05 17:15:40,007][17532] LearnerWorker_p0 finished initialization!
[2023-08-05 17:15:40,019][00231] Heartbeat connected on LearnerWorker_p0
[2023-08-05 17:15:40,095][17545] RunningMeanStd input shape: (3, 72, 128)
[2023-08-05 17:15:40,096][17545] RunningMeanStd input shape: (1,)
[2023-08-05 17:15:40,108][17545] ConvEncoder: input_channels=3
[2023-08-05 17:15:40,208][17545] Conv encoder output size: 512
[2023-08-05 17:15:40,209][17545] Policy head output size: 512
[2023-08-05 17:15:40,317][00231] Inference worker 0-0 is ready!
[2023-08-05 17:15:40,320][00231] All inference workers are ready! Signal rollout workers to start!
[2023-08-05 17:15:40,606][17552] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:40,613][17548] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:40,612][17546] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:40,624][17550] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:40,633][17551] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:40,642][17547] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:40,641][17553] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:40,657][17549] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:15:41,030][17547] VizDoom game.init() threw an exception ViZDoomUnexpectedExitException('Controlled ViZDoom instance exited unexpectedly.'). Terminate process...
[2023-08-05 17:15:41,032][17547] EvtLoop [rollout_proc1_evt_loop, process=rollout_proc1] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=()
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init
    self.game.init()
vizdoom.vizdoom.ViZDoomUnexpectedExitException: Controlled ViZDoom instance exited unexpectedly.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init
    env_runner.init(self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init
    self._reset()
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset
    observations, info = e.reset(seed=seed)  # new way of doing seeding since Gym 0.26.0
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 453, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 453, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 501, in reset
    obs, info = self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 114, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 82, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 453, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset
    self._ensure_initialized()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized
    self.initialize()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize
    self._game_init()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init
    raise EnvCriticalError()
sample_factory.envs.env_utils.EnvCriticalError
[2023-08-05 17:15:41,048][17547] Unhandled exception in evt loop rollout_proc1_evt_loop
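The crash above leaves rollout worker 1 dead for the rest of the run (only 7 of 8 workers decorrelate below). A common mitigation for flaky environment startup is to retry initialization before giving up; a generic sketch, not Sample Factory's actual recovery logic (`init_game` is a hypothetical stand-in for `game.init()`):

```python
import time

def init_with_retries(init_game, attempts=3, delay=1.0):
    """Call a flaky init function, retrying on failure with a fixed delay."""
    last_exc = None
    for _ in range(attempts):
        try:
            return init_game()
        except Exception as exc:  # e.g. ViZDoomUnexpectedExitException
            last_exc = exc
            time.sleep(delay)
    raise RuntimeError("env init failed after retries") from last_exc

# Usage with a stand-in that fails twice, then succeeds:
calls = {"n": 0}
def flaky_init():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("Controlled ViZDoom instance exited unexpectedly.")
    return "ok"

print(init_with_retries(flaky_init, attempts=3, delay=0.0))  # prints "ok"
```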
[2023-08-05 17:15:42,140][17551] Decorrelating experience for 0 frames...
[2023-08-05 17:15:42,142][17553] Decorrelating experience for 0 frames...
[2023-08-05 17:15:42,320][17552] Decorrelating experience for 0 frames...
[2023-08-05 17:15:42,323][17550] Decorrelating experience for 0 frames...
[2023-08-05 17:15:42,327][17548] Decorrelating experience for 0 frames...
[2023-08-05 17:15:42,330][17546] Decorrelating experience for 0 frames...
[2023-08-05 17:15:43,160][00231] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-08-05 17:15:43,449][17551] Decorrelating experience for 32 frames...
[2023-08-05 17:15:43,455][17553] Decorrelating experience for 32 frames...
[2023-08-05 17:15:43,481][17550] Decorrelating experience for 32 frames...
[2023-08-05 17:15:43,491][17546] Decorrelating experience for 32 frames...
[2023-08-05 17:15:43,495][17548] Decorrelating experience for 32 frames...
[2023-08-05 17:15:43,716][17549] Decorrelating experience for 0 frames...
[2023-08-05 17:15:44,639][17551] Decorrelating experience for 64 frames...
[2023-08-05 17:15:44,650][17553] Decorrelating experience for 64 frames...
[2023-08-05 17:15:45,017][17552] Decorrelating experience for 32 frames...
[2023-08-05 17:15:45,370][17553] Decorrelating experience for 96 frames...
[2023-08-05 17:15:45,569][17546] Decorrelating experience for 64 frames...
[2023-08-05 17:15:45,572][17548] Decorrelating experience for 64 frames...
[2023-08-05 17:15:45,574][17550] Decorrelating experience for 64 frames...
[2023-08-05 17:15:45,809][17551] Decorrelating experience for 96 frames...
[2023-08-05 17:15:47,615][17549] Decorrelating experience for 32 frames...
[2023-08-05 17:15:47,991][17552] Decorrelating experience for 64 frames...
[2023-08-05 17:15:48,137][17546] Decorrelating experience for 96 frames...
[2023-08-05 17:15:48,140][17548] Decorrelating experience for 96 frames...
[2023-08-05 17:15:48,145][17550] Decorrelating experience for 96 frames...
[2023-08-05 17:15:48,160][00231] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 2.4. Samples: 12. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-08-05 17:15:48,163][00231] Avg episode reward: [(0, '-0.443')]
[2023-08-05 17:15:51,479][17552] Decorrelating experience for 96 frames...
[2023-08-05 17:15:52,958][17532] Signal inference workers to stop experience collection...
[2023-08-05 17:15:52,996][17545] InferenceWorker_p0-w0: stopping experience collection
[2023-08-05 17:15:53,160][00231] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 216.0. Samples: 2160. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-08-05 17:15:53,163][00231] Avg episode reward: [(0, '-1.286')]
[2023-08-05 17:15:53,692][17549] Decorrelating experience for 64 frames...
[2023-08-05 17:15:54,780][17549] Decorrelating experience for 96 frames...
[2023-08-05 17:15:57,420][17532] Signal inference workers to resume experience collection...
[2023-08-05 17:15:57,421][17545] InferenceWorker_p0-w0: resuming experience collection
[2023-08-05 17:15:58,160][00231] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 156.1. Samples: 2342. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2023-08-05 17:15:58,170][00231] Avg episode reward: [(0, '-1.220')]
[2023-08-05 17:16:03,160][00231] Fps is (10 sec: 2048.0, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 237.4. Samples: 4748. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-08-05 17:16:03,165][00231] Avg episode reward: [(0, '-1.479')]
[2023-08-05 17:16:08,160][00231] Fps is (10 sec: 3276.8, 60 sec: 1474.6, 300 sec: 1474.6). Total num frames: 36864. Throughput: 0: 403.8. Samples: 10096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:16:08,165][00231] Avg episode reward: [(0, '-1.876')]
[2023-08-05 17:16:08,380][17545] Updated weights for policy 0, policy_version 10 (0.0020)
[2023-08-05 17:16:13,160][00231] Fps is (10 sec: 2867.2, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 49152. Throughput: 0: 391.5. Samples: 11746. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:16:13,164][00231] Avg episode reward: [(0, '-1.711')]
[2023-08-05 17:16:18,160][00231] Fps is (10 sec: 2457.6, 60 sec: 1755.4, 300 sec: 1755.4). Total num frames: 61440. Throughput: 0: 435.8. Samples: 15252. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:16:18,163][00231] Avg episode reward: [(0, '-1.721')]
[2023-08-05 17:16:23,160][00231] Fps is (10 sec: 2867.2, 60 sec: 1945.6, 300 sec: 1945.6). Total num frames: 77824. Throughput: 0: 501.5. Samples: 20062. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-05 17:16:23,162][00231] Avg episode reward: [(0, '-1.309')]
[2023-08-05 17:16:23,253][17545] Updated weights for policy 0, policy_version 20 (0.0016)
[2023-08-05 17:16:28,169][00231] Fps is (10 sec: 3683.0, 60 sec: 2184.1, 300 sec: 2184.1). Total num frames: 98304. Throughput: 0: 505.9. Samples: 22770. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:16:28,172][00231] Avg episode reward: [(0, '-1.524')]
[2023-08-05 17:16:33,161][00231] Fps is (10 sec: 2867.1, 60 sec: 2129.9, 300 sec: 2129.9). Total num frames: 106496. Throughput: 0: 599.6. Samples: 26994. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:16:33,168][00231] Avg episode reward: [(0, '-1.463')]
[2023-08-05 17:16:33,279][17532] Saving new best policy, reward=-1.463!
[2023-08-05 17:16:38,160][00231] Fps is (10 sec: 2049.9, 60 sec: 2159.7, 300 sec: 2159.7). Total num frames: 118784. Throughput: 0: 627.2. Samples: 30386. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-08-05 17:16:38,168][00231] Avg episode reward: [(0, '-1.405')]
[2023-08-05 17:16:38,170][17532] Saving new best policy, reward=-1.405!
[2023-08-05 17:16:38,674][17545] Updated weights for policy 0, policy_version 30 (0.0031)
[2023-08-05 17:16:43,160][00231] Fps is (10 sec: 2867.3, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 135168. Throughput: 0: 671.0. Samples: 32536. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:16:43,167][00231] Avg episode reward: [(0, '-1.275')]
[2023-08-05 17:16:43,176][17532] Saving new best policy, reward=-1.275!
[2023-08-05 17:16:48,160][00231] Fps is (10 sec: 3276.8, 60 sec: 2525.9, 300 sec: 2331.6). Total num frames: 151552. Throughput: 0: 730.6. Samples: 37624. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:16:48,163][00231] Avg episode reward: [(0, '-1.058')]
[2023-08-05 17:16:48,212][17532] Saving new best policy, reward=-1.058!
[2023-08-05 17:16:51,941][17545] Updated weights for policy 0, policy_version 40 (0.0019)
[2023-08-05 17:16:53,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2340.6). Total num frames: 163840. Throughput: 0: 701.9. Samples: 41680. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:16:53,166][00231] Avg episode reward: [(0, '-1.131')]
[2023-08-05 17:16:58,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 2348.4). Total num frames: 176128. Throughput: 0: 701.3. Samples: 43306. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:16:58,168][00231] Avg episode reward: [(0, '-1.163')]
[2023-08-05 17:17:03,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2406.4). Total num frames: 192512. Throughput: 0: 713.4. Samples: 47356. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:17:03,167][00231] Avg episode reward: [(0, '-0.777')]
[2023-08-05 17:17:03,177][17532] Saving new best policy, reward=-0.777!
[2023-08-05 17:17:06,480][17545] Updated weights for policy 0, policy_version 50 (0.0024)
[2023-08-05 17:17:08,160][00231] Fps is (10 sec: 3276.8, 60 sec: 2867.2, 300 sec: 2457.6). Total num frames: 208896. Throughput: 0: 720.0. Samples: 52462. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:17:08,169][00231] Avg episode reward: [(0, '-0.757')]
[2023-08-05 17:17:08,176][17532] Saving new best policy, reward=-0.757!
[2023-08-05 17:17:13,164][00231] Fps is (10 sec: 2866.1, 60 sec: 2867.0, 300 sec: 2457.5). Total num frames: 221184. Throughput: 0: 707.8. Samples: 54616. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-05 17:17:13,170][00231] Avg episode reward: [(0, '-0.940')]
[2023-08-05 17:17:13,185][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000054_221184.pth...
[2023-08-05 17:17:18,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 2457.6). Total num frames: 233472. Throughput: 0: 686.7. Samples: 57894. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2023-08-05 17:17:18,164][00231] Avg episode reward: [(0, '-0.780')]
[2023-08-05 17:17:22,410][17545] Updated weights for policy 0, policy_version 60 (0.0033)
[2023-08-05 17:17:23,160][00231] Fps is (10 sec: 2458.5, 60 sec: 2798.9, 300 sec: 2457.6). Total num frames: 245760. Throughput: 0: 702.4. Samples: 61992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-08-05 17:17:23,163][00231] Avg episode reward: [(0, '-0.073')]
[2023-08-05 17:17:23,173][17532] Saving new best policy, reward=-0.073!
[2023-08-05 17:17:28,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2731.1, 300 sec: 2496.6). Total num frames: 262144. Throughput: 0: 708.5. Samples: 64420. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-05 17:17:28,162][00231] Avg episode reward: [(0, '-0.017')]
[2023-08-05 17:17:28,167][17532] Saving new best policy, reward=-0.017!
[2023-08-05 17:17:33,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2799.0, 300 sec: 2494.8). Total num frames: 274432. Throughput: 0: 695.6. Samples: 68926. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-05 17:17:33,164][00231] Avg episode reward: [(0, '0.150')]
[2023-08-05 17:17:33,174][17532] Saving new best policy, reward=0.150!
[2023-08-05 17:17:37,419][17545] Updated weights for policy 0, policy_version 70 (0.0031)
[2023-08-05 17:17:38,163][00231] Fps is (10 sec: 2456.9, 60 sec: 2798.8, 300 sec: 2493.2). Total num frames: 286720. Throughput: 0: 676.5. Samples: 72124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-08-05 17:17:38,167][00231] Avg episode reward: [(0, '0.150')]
[2023-08-05 17:17:43,165][00231] Fps is (10 sec: 2865.9, 60 sec: 2798.7, 300 sec: 2525.8). Total num frames: 303104. Throughput: 0: 678.6. Samples: 73846. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-05 17:17:43,167][00231] Avg episode reward: [(0, '0.193')]
[2023-08-05 17:17:43,178][17532] Saving new best policy, reward=0.193!
[2023-08-05 17:17:48,160][00231] Fps is (10 sec: 2868.1, 60 sec: 2730.7, 300 sec: 2523.1). Total num frames: 315392. Throughput: 0: 695.9. Samples: 78672. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:17:48,163][00231] Avg episode reward: [(0, '0.379')]
[2023-08-05 17:17:48,167][17532] Saving new best policy, reward=0.379!
[2023-08-05 17:17:51,527][17545] Updated weights for policy 0, policy_version 80 (0.0023)
[2023-08-05 17:17:53,160][00231] Fps is (10 sec: 2458.8, 60 sec: 2730.7, 300 sec: 2520.6). Total num frames: 327680. Throughput: 0: 673.6. Samples: 82772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:17:53,163][00231] Avg episode reward: [(0, '0.481')]
[2023-08-05 17:17:53,172][17532] Saving new best policy, reward=0.481!
[2023-08-05 17:17:58,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2518.3). Total num frames: 339968. Throughput: 0: 660.3. Samples: 84328. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:17:58,167][00231] Avg episode reward: [(0, '0.537')]
[2023-08-05 17:17:58,171][17532] Saving new best policy, reward=0.537!
[2023-08-05 17:18:03,161][00231] Fps is (10 sec: 2457.5, 60 sec: 2662.4, 300 sec: 2516.1). Total num frames: 352256. Throughput: 0: 659.0. Samples: 87548. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:18:03,170][00231] Avg episode reward: [(0, '0.578')]
[2023-08-05 17:18:03,184][17532] Saving new best policy, reward=0.578!
[2023-08-05 17:18:07,848][17545] Updated weights for policy 0, policy_version 90 (0.0016)
[2023-08-05 17:18:08,160][00231] Fps is (10 sec: 2867.3, 60 sec: 2662.4, 300 sec: 2542.3). Total num frames: 368640. Throughput: 0: 671.8. Samples: 92222. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:18:08,163][00231] Avg episode reward: [(0, '0.549')]
[2023-08-05 17:18:13,160][00231] Fps is (10 sec: 2867.3, 60 sec: 2662.6, 300 sec: 2539.5). Total num frames: 380928. Throughput: 0: 671.2. Samples: 94624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:18:13,167][00231] Avg episode reward: [(0, '0.599')]
[2023-08-05 17:18:13,182][17532] Saving new best policy, reward=0.599!
[2023-08-05 17:18:18,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2536.9). Total num frames: 393216. Throughput: 0: 639.7. Samples: 97712. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:18:18,166][00231] Avg episode reward: [(0, '0.634')]
[2023-08-05 17:18:18,173][17532] Saving new best policy, reward=0.634!
[2023-08-05 17:18:23,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2594.1, 300 sec: 2508.8). Total num frames: 401408. Throughput: 0: 640.6. Samples: 100948. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:18:23,163][00231] Avg episode reward: [(0, '0.597')]
[2023-08-05 17:18:24,779][17545] Updated weights for policy 0, policy_version 100 (0.0023)
[2023-08-05 17:18:28,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2594.1, 300 sec: 2532.1). Total num frames: 417792. Throughput: 0: 653.0. Samples: 103230. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-05 17:18:28,162][00231] Avg episode reward: [(0, '0.670')]
[2023-08-05 17:18:28,168][17532] Saving new best policy, reward=0.670!
[2023-08-05 17:18:33,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2594.1, 300 sec: 2529.9). Total num frames: 430080. Throughput: 0: 646.7. Samples: 107772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:18:33,167][00231] Avg episode reward: [(0, '0.656')]
[2023-08-05 17:18:38,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2594.3, 300 sec: 2527.8). Total num frames: 442368. Throughput: 0: 622.2. Samples: 110770. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-08-05 17:18:38,164][00231] Avg episode reward: [(0, '0.707')]
[2023-08-05 17:18:38,171][17532] Saving new best policy, reward=0.707!
[2023-08-05 17:18:41,859][17545] Updated weights for policy 0, policy_version 110 (0.0018)
[2023-08-05 17:18:43,161][00231] Fps is (10 sec: 2047.9, 60 sec: 2457.8, 300 sec: 2503.1). Total num frames: 450560. Throughput: 0: 619.1. Samples: 112188. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-05 17:18:43,165][00231] Avg episode reward: [(0, '0.700')]
[2023-08-05 17:18:48,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2525.9, 300 sec: 2524.0). Total num frames: 466944. Throughput: 0: 635.3. Samples: 116138. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:18:48,172][00231] Avg episode reward: [(0, '0.675')]
[2023-08-05 17:18:53,160][00231] Fps is (10 sec: 2867.3, 60 sec: 2525.9, 300 sec: 2522.3). Total num frames: 479232. Throughput: 0: 629.9. Samples: 120568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:18:53,164][00231] Avg episode reward: [(0, '0.727')]
[2023-08-05 17:18:53,175][17532] Saving new best policy, reward=0.727!
[2023-08-05 17:18:57,568][17545] Updated weights for policy 0, policy_version 120 (0.0021)
[2023-08-05 17:18:58,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2525.9, 300 sec: 2520.6). Total num frames: 491520. Throughput: 0: 610.0. Samples: 122076. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:18:58,165][00231] Avg episode reward: [(0, '0.737')]
[2023-08-05 17:18:58,171][17532] Saving new best policy, reward=0.737!
[2023-08-05 17:19:03,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2457.6, 300 sec: 2498.6). Total num frames: 499712. Throughput: 0: 605.8. Samples: 124974. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:19:03,167][00231] Avg episode reward: [(0, '0.727')]
[2023-08-05 17:19:08,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2497.6). Total num frames: 512000. Throughput: 0: 621.0. Samples: 128894. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:19:08,166][00231] Avg episode reward: [(0, '0.734')]
[2023-08-05 17:19:13,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2457.6, 300 sec: 2516.1). Total num frames: 528384. Throughput: 0: 619.8. Samples: 131122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:19:13,165][00231] Avg episode reward: [(0, '0.728')]
[2023-08-05 17:19:13,175][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000129_528384.pth...
[2023-08-05 17:19:13,977][17545] Updated weights for policy 0, policy_version 130 (0.0021)
[2023-08-05 17:19:18,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2457.6, 300 sec: 2514.8). Total num frames: 540672. Throughput: 0: 599.5. Samples: 134750. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:19:18,167][00231] Avg episode reward: [(0, '0.773')]
[2023-08-05 17:19:18,170][17532] Saving new best policy, reward=0.773!
[2023-08-05 17:19:23,161][00231] Fps is (10 sec: 2047.8, 60 sec: 2457.6, 300 sec: 2494.8). Total num frames: 548864. Throughput: 0: 594.4. Samples: 137518. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:19:23,167][00231] Avg episode reward: [(0, '0.775')]
[2023-08-05 17:19:23,190][17532] Saving new best policy, reward=0.775!
[2023-08-05 17:19:28,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2494.0). Total num frames: 561152. Throughput: 0: 600.7. Samples: 139218. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:19:28,166][00231] Avg episode reward: [(0, '0.774')]
[2023-08-05 17:19:31,724][17545] Updated weights for policy 0, policy_version 140 (0.0021)
[2023-08-05 17:19:33,161][00231] Fps is (10 sec: 2867.2, 60 sec: 2457.6, 300 sec: 2511.0). Total num frames: 577536. Throughput: 0: 609.2. Samples: 143552. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:19:33,166][00231] Avg episode reward: [(0, '0.771')]
[2023-08-05 17:19:38,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2492.5). Total num frames: 585728. Throughput: 0: 589.5. Samples: 147094. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:19:38,163][00231] Avg episode reward: [(0, '0.773')]
[2023-08-05 17:19:43,161][00231] Fps is (10 sec: 1638.5, 60 sec: 2389.3, 300 sec: 2474.7). Total num frames: 593920. Throughput: 0: 586.6. Samples: 148472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:19:43,166][00231] Avg episode reward: [(0, '0.798')]
[2023-08-05 17:19:43,180][17532] Saving new best policy, reward=0.798!
[2023-08-05 17:19:48,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2474.3). Total num frames: 606208. Throughput: 0: 590.3. Samples: 151538. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-05 17:19:48,169][00231] Avg episode reward: [(0, '0.760')]
[2023-08-05 17:19:49,822][17545] Updated weights for policy 0, policy_version 150 (0.0017)
[2023-08-05 17:19:53,160][00231] Fps is (10 sec: 2867.3, 60 sec: 2389.3, 300 sec: 2490.4). Total num frames: 622592. Throughput: 0: 597.8. Samples: 155796. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:19:53,162][00231] Avg episode reward: [(0, '0.802')]
[2023-08-05 17:19:53,178][17532] Saving new best policy, reward=0.802!
[2023-08-05 17:19:58,162][00231] Fps is (10 sec: 2866.7, 60 sec: 2389.3, 300 sec: 2489.7). Total num frames: 634880. Throughput: 0: 596.0. Samples: 157944. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:19:58,167][00231] Avg episode reward: [(0, '0.796')]
[2023-08-05 17:20:03,161][00231] Fps is (10 sec: 2047.8, 60 sec: 2389.3, 300 sec: 2473.3). Total num frames: 643072. Throughput: 0: 577.9. Samples: 160756. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:20:03,173][00231] Avg episode reward: [(0, '0.780')]
[2023-08-05 17:20:07,991][17545] Updated weights for policy 0, policy_version 160 (0.0023)
[2023-08-05 17:20:08,160][00231] Fps is (10 sec: 2048.4, 60 sec: 2389.3, 300 sec: 2473.1). Total num frames: 655360. Throughput: 0: 584.9. Samples: 163836. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:20:08,165][00231] Avg episode reward: [(0, '0.783')]
[2023-08-05 17:20:13,160][00231] Fps is (10 sec: 2457.9, 60 sec: 2321.1, 300 sec: 2472.8). Total num frames: 667648. Throughput: 0: 594.8. Samples: 165986. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-05 17:20:13,167][00231] Avg episode reward: [(0, '0.803')]
[2023-08-05 17:20:13,177][17532] Saving new best policy, reward=0.803!
[2023-08-05 17:20:18,166][00231] Fps is (10 sec: 2456.2, 60 sec: 2320.8, 300 sec: 2472.4). Total num frames: 679936. Throughput: 0: 593.0. Samples: 170242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:20:18,174][00231] Avg episode reward: [(0, '0.783')]
[2023-08-05 17:20:23,167][00231] Fps is (10 sec: 2046.6, 60 sec: 2320.8, 300 sec: 2457.5). Total num frames: 688128. Throughput: 0: 576.6. Samples: 173046. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:20:23,170][00231] Avg episode reward: [(0, '0.780')]
[2023-08-05 17:20:25,496][17545] Updated weights for policy 0, policy_version 170 (0.0024)
[2023-08-05 17:20:28,160][00231] Fps is (10 sec: 2049.2, 60 sec: 2321.1, 300 sec: 2457.6). Total num frames: 700416. Throughput: 0: 578.8. Samples: 174520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-05 17:20:28,162][00231] Avg episode reward: [(0, '0.807')]
[2023-08-05 17:20:28,170][17532] Saving new best policy, reward=0.807!
[2023-08-05 17:20:33,160][00231] Fps is (10 sec: 2869.2, 60 sec: 2321.1, 300 sec: 2471.7). Total num frames: 716800. Throughput: 0: 596.2. Samples: 178366. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:20:33,168][00231] Avg episode reward: [(0, '0.816')]
[2023-08-05 17:20:33,178][17532] Saving new best policy, reward=0.816!
[2023-08-05 17:20:38,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2471.5). Total num frames: 729088. Throughput: 0: 594.0. Samples: 182524. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-05 17:20:38,167][00231] Avg episode reward: [(0, '0.788')]
[2023-08-05 17:20:41,713][17545] Updated weights for policy 0, policy_version 180 (0.0013)
[2023-08-05 17:20:43,165][00231] Fps is (10 sec: 2047.0, 60 sec: 2389.2, 300 sec: 2499.2). Total num frames: 737280. Throughput: 0: 576.9. Samples: 183908. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
|
[2023-08-05 17:20:43,174][00231] Avg episode reward: [(0, '0.802')] |
|
[2023-08-05 17:20:48,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2321.1, 300 sec: 2527.0). Total num frames: 745472. Throughput: 0: 579.0. Samples: 186812. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:20:48,163][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:20:53,160][00231] Fps is (10 sec: 2458.8, 60 sec: 2321.1, 300 sec: 2568.7). Total num frames: 761856. Throughput: 0: 595.0. Samples: 190610. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:20:53,163][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:20:58,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2554.8). Total num frames: 774144. Throughput: 0: 595.2. Samples: 192768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:20:58,163][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:20:58,711][17545] Updated weights for policy 0, policy_version 190 (0.0035) |
|
[2023-08-05 17:21:03,162][00231] Fps is (10 sec: 2047.6, 60 sec: 2321.0, 300 sec: 2527.0). Total num frames: 782336. Throughput: 0: 576.3. Samples: 196172. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:21:03,167][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:21:08,162][00231] Fps is (10 sec: 2047.7, 60 sec: 2321.0, 300 sec: 2527.0). Total num frames: 794624. Throughput: 0: 578.6. Samples: 199082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:21:08,166][00231] Avg episode reward: [(0, '0.788')] |
|
[2023-08-05 17:21:13,160][00231] Fps is (10 sec: 2458.1, 60 sec: 2321.1, 300 sec: 2527.0). Total num frames: 806912. Throughput: 0: 587.7. Samples: 200966. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:21:13,166][00231] Avg episode reward: [(0, '0.792')] |
|
[2023-08-05 17:21:13,179][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000197_806912.pth... |
|
[2023-08-05 17:21:13,319][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000054_221184.pth |
|
[2023-08-05 17:21:16,627][17545] Updated weights for policy 0, policy_version 200 (0.0017) |
|
[2023-08-05 17:21:18,160][00231] Fps is (10 sec: 2867.7, 60 sec: 2389.6, 300 sec: 2527.0). Total num frames: 823296. Throughput: 0: 596.0. Samples: 205186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-05 17:21:18,165][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:21:23,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.6, 300 sec: 2485.4). Total num frames: 831488. Throughput: 0: 577.7. Samples: 208522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:21:23,163][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:21:28,162][00231] Fps is (10 sec: 1638.1, 60 sec: 2321.0, 300 sec: 2485.4). Total num frames: 839680. Throughput: 0: 579.7. Samples: 209994. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:21:28,165][00231] Avg episode reward: [(0, '0.798')] |
|
[2023-08-05 17:21:33,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2499.3). Total num frames: 856064. Throughput: 0: 590.9. Samples: 213402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:21:33,167][00231] Avg episode reward: [(0, '0.802')] |
|
[2023-08-05 17:21:34,249][17545] Updated weights for policy 0, policy_version 210 (0.0023) |
|
[2023-08-05 17:21:38,161][00231] Fps is (10 sec: 2867.6, 60 sec: 2321.1, 300 sec: 2485.4). Total num frames: 868352. Throughput: 0: 600.2. Samples: 217620. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:21:38,167][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:21:43,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.5, 300 sec: 2471.5). Total num frames: 880640. Throughput: 0: 594.3. Samples: 219512. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:21:43,163][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:21:48,160][00231] Fps is (10 sec: 2048.1, 60 sec: 2389.3, 300 sec: 2457.6). Total num frames: 888832. Throughput: 0: 579.5. Samples: 222250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:21:48,163][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:21:52,621][17545] Updated weights for policy 0, policy_version 220 (0.0018) |
|
[2023-08-05 17:21:53,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2457.6). Total num frames: 901120. Throughput: 0: 590.6. Samples: 225658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:21:53,163][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:21:58,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2443.7). Total num frames: 913408. Throughput: 0: 595.0. Samples: 227742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:21:58,163][00231] Avg episode reward: [(0, '0.791')] |
|
[2023-08-05 17:22:03,161][00231] Fps is (10 sec: 2457.5, 60 sec: 2389.4, 300 sec: 2429.8). Total num frames: 925696. Throughput: 0: 586.9. Samples: 231596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:22:03,167][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:22:08,163][00231] Fps is (10 sec: 2047.4, 60 sec: 2321.0, 300 sec: 2416.0). Total num frames: 933888. Throughput: 0: 574.8. Samples: 234388. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:22:08,166][00231] Avg episode reward: [(0, '0.830')] |
|
[2023-08-05 17:22:08,171][17532] Saving new best policy, reward=0.830! |
|
[2023-08-05 17:22:10,867][17545] Updated weights for policy 0, policy_version 230 (0.0014) |
|
[2023-08-05 17:22:13,160][00231] Fps is (10 sec: 2048.1, 60 sec: 2321.1, 300 sec: 2415.9). Total num frames: 946176. Throughput: 0: 573.8. Samples: 235814. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:22:13,162][00231] Avg episode reward: [(0, '0.782')] |
|
[2023-08-05 17:22:18,160][00231] Fps is (10 sec: 2458.3, 60 sec: 2252.8, 300 sec: 2415.9). Total num frames: 958464. Throughput: 0: 588.1. Samples: 239866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:22:18,163][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:22:23,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2402.1). Total num frames: 970752. Throughput: 0: 577.6. Samples: 243612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:22:23,169][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:22:28,088][17545] Updated weights for policy 0, policy_version 240 (0.0024) |
|
[2023-08-05 17:22:28,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.4, 300 sec: 2402.1). Total num frames: 983040. Throughput: 0: 565.5. Samples: 244960. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:22:28,163][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:22:33,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2388.2). Total num frames: 991232. Throughput: 0: 568.9. Samples: 247850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:22:33,166][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:22:38,173][00231] Fps is (10 sec: 2045.5, 60 sec: 2252.4, 300 sec: 2374.2). Total num frames: 1003520. Throughput: 0: 579.2. Samples: 251728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:22:38,179][00231] Avg episode reward: [(0, '0.788')] |
|
[2023-08-05 17:22:43,161][00231] Fps is (10 sec: 2047.8, 60 sec: 2184.5, 300 sec: 2360.4). Total num frames: 1011712. Throughput: 0: 556.8. Samples: 252798. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:22:43,167][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:22:48,160][00231] Fps is (10 sec: 1640.4, 60 sec: 2184.5, 300 sec: 2346.5). Total num frames: 1019904. Throughput: 0: 521.2. Samples: 255050. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:22:48,164][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:22:49,632][17545] Updated weights for policy 0, policy_version 250 (0.0031) |
|
[2023-08-05 17:22:53,160][00231] Fps is (10 sec: 1638.6, 60 sec: 2116.3, 300 sec: 2332.6). Total num frames: 1028096. Throughput: 0: 523.4. Samples: 257940. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:22:53,162][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:22:58,163][00231] Fps is (10 sec: 2456.8, 60 sec: 2184.4, 300 sec: 2346.5). Total num frames: 1044480. Throughput: 0: 537.5. Samples: 260004. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:22:58,170][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:23:03,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2184.6, 300 sec: 2332.6). Total num frames: 1056768. Throughput: 0: 542.0. Samples: 264258. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:23:03,166][00231] Avg episode reward: [(0, '0.817')] |
|
[2023-08-05 17:23:06,405][17545] Updated weights for policy 0, policy_version 260 (0.0028) |
|
[2023-08-05 17:23:08,160][00231] Fps is (10 sec: 2048.7, 60 sec: 2184.6, 300 sec: 2318.8). Total num frames: 1064960. Throughput: 0: 525.2. Samples: 267244. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:23:08,168][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:23:13,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2184.5, 300 sec: 2318.8). Total num frames: 1077248. Throughput: 0: 527.2. Samples: 268686. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:23:13,162][00231] Avg episode reward: [(0, '0.823')] |
|
[2023-08-05 17:23:13,182][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000263_1077248.pth... |
|
[2023-08-05 17:23:13,312][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000129_528384.pth |
|
[2023-08-05 17:23:18,163][00231] Fps is (10 sec: 2456.9, 60 sec: 2184.4, 300 sec: 2332.6). Total num frames: 1089536. Throughput: 0: 545.0. Samples: 272378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:23:18,168][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:23:22,693][17545] Updated weights for policy 0, policy_version 270 (0.0022) |
|
[2023-08-05 17:23:23,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2252.8, 300 sec: 2332.6). Total num frames: 1105920. Throughput: 0: 555.5. Samples: 276720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:23:23,166][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:23:28,164][00231] Fps is (10 sec: 2457.3, 60 sec: 2184.4, 300 sec: 2318.7). Total num frames: 1114112. Throughput: 0: 565.6. Samples: 278252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:23:28,170][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:23:33,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 1126400. Throughput: 0: 579.7. Samples: 281136. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:23:33,166][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:23:38,160][00231] Fps is (10 sec: 2458.5, 60 sec: 2253.3, 300 sec: 2332.6). Total num frames: 1138688. Throughput: 0: 599.2. Samples: 284902. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:23:38,163][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:23:40,504][17545] Updated weights for policy 0, policy_version 280 (0.0016) |
|
[2023-08-05 17:23:43,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.4, 300 sec: 2332.6). Total num frames: 1155072. Throughput: 0: 602.1. Samples: 287096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:23:43,163][00231] Avg episode reward: [(0, '0.795')] |
|
[2023-08-05 17:23:48,166][00231] Fps is (10 sec: 2456.3, 60 sec: 2389.1, 300 sec: 2318.7). Total num frames: 1163264. Throughput: 0: 588.8. Samples: 290756. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:23:48,176][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:23:53,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2389.3, 300 sec: 2304.9). Total num frames: 1171456. Throughput: 0: 587.5. Samples: 293682. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2023-08-05 17:23:53,166][00231] Avg episode reward: [(0, '0.792')] |
|
[2023-08-05 17:23:58,164][00231] Fps is (10 sec: 2048.3, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 1183744. Throughput: 0: 587.5. Samples: 295124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:23:58,167][00231] Avg episode reward: [(0, '0.802')] |
|
[2023-08-05 17:23:58,527][17545] Updated weights for policy 0, policy_version 290 (0.0016) |
|
[2023-08-05 17:24:03,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 1200128. Throughput: 0: 602.3. Samples: 299480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:24:03,163][00231] Avg episode reward: [(0, '0.788')] |
|
[2023-08-05 17:24:08,163][00231] Fps is (10 sec: 2867.5, 60 sec: 2457.5, 300 sec: 2318.7). Total num frames: 1212416. Throughput: 0: 587.2. Samples: 303146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:24:08,167][00231] Avg episode reward: [(0, '0.827')] |
|
[2023-08-05 17:24:13,163][00231] Fps is (10 sec: 2047.3, 60 sec: 2389.2, 300 sec: 2304.8). Total num frames: 1220608. Throughput: 0: 585.3. Samples: 304592. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:24:13,167][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:24:16,091][17545] Updated weights for policy 0, policy_version 300 (0.0014) |
|
[2023-08-05 17:24:18,162][00231] Fps is (10 sec: 2048.2, 60 sec: 2389.4, 300 sec: 2318.7). Total num frames: 1232896. Throughput: 0: 595.4. Samples: 307928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:24:18,165][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:24:23,161][00231] Fps is (10 sec: 2867.8, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 1249280. Throughput: 0: 611.5. Samples: 312418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:24:23,164][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:24:28,160][00231] Fps is (10 sec: 2867.7, 60 sec: 2457.8, 300 sec: 2318.8). Total num frames: 1261568. Throughput: 0: 608.2. Samples: 314466. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:24:28,167][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:24:32,359][17545] Updated weights for policy 0, policy_version 310 (0.0042) |
|
[2023-08-05 17:24:33,160][00231] Fps is (10 sec: 2048.2, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 1269760. Throughput: 0: 590.5. Samples: 317324. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:24:33,163][00231] Avg episode reward: [(0, '0.819')] |
|
[2023-08-05 17:24:38,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 1282048. Throughput: 0: 601.2. Samples: 320736. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:24:38,163][00231] Avg episode reward: [(0, '0.791')] |
|
[2023-08-05 17:24:43,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 1294336. Throughput: 0: 614.1. Samples: 322756. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:24:43,163][00231] Avg episode reward: [(0, '0.788')] |
|
[2023-08-05 17:24:48,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.5, 300 sec: 2318.8). Total num frames: 1306624. Throughput: 0: 605.8. Samples: 326742. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-05 17:24:48,163][00231] Avg episode reward: [(0, '0.792')] |
|
[2023-08-05 17:24:49,259][17545] Updated weights for policy 0, policy_version 320 (0.0027) |
|
[2023-08-05 17:24:53,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2304.9). Total num frames: 1314816. Throughput: 0: 586.8. Samples: 329550. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:24:53,165][00231] Avg episode reward: [(0, '0.795')] |
|
[2023-08-05 17:24:58,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.5, 300 sec: 2318.8). Total num frames: 1327104. Throughput: 0: 586.9. Samples: 331000. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:24:58,164][00231] Avg episode reward: [(0, '0.822')] |
|
[2023-08-05 17:25:03,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 1343488. Throughput: 0: 602.5. Samples: 335040. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:25:03,167][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:25:06,164][17545] Updated weights for policy 0, policy_version 330 (0.0020) |
|
[2023-08-05 17:25:08,166][00231] Fps is (10 sec: 2456.2, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 1351680. Throughput: 0: 589.0. Samples: 338928. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:25:08,175][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:25:13,163][00231] Fps is (10 sec: 2047.5, 60 sec: 2389.4, 300 sec: 2318.8). Total num frames: 1363968. Throughput: 0: 575.5. Samples: 340366. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:25:13,167][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:25:13,185][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000333_1363968.pth... |
|
[2023-08-05 17:25:13,349][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000197_806912.pth |
|
[2023-08-05 17:25:18,160][00231] Fps is (10 sec: 2049.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1372160. Throughput: 0: 574.8. Samples: 343188. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:25:18,166][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:25:23,160][00231] Fps is (10 sec: 2458.2, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 1388544. Throughput: 0: 590.7. Samples: 347316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:25:23,169][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:25:24,050][17545] Updated weights for policy 0, policy_version 340 (0.0025) |
|
[2023-08-05 17:25:28,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1400832. Throughput: 0: 591.6. Samples: 349378. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:25:28,167][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:25:33,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 1409024. Throughput: 0: 573.8. Samples: 352562. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:25:33,168][00231] Avg episode reward: [(0, '0.801')] |
|
[2023-08-05 17:25:38,161][00231] Fps is (10 sec: 2047.9, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1421312. Throughput: 0: 575.0. Samples: 355424. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:25:38,163][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:25:42,201][17545] Updated weights for policy 0, policy_version 350 (0.0020) |
|
[2023-08-05 17:25:43,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 1433600. Throughput: 0: 588.2. Samples: 357468. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:25:43,163][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:25:48,166][00231] Fps is (10 sec: 2456.2, 60 sec: 2320.8, 300 sec: 2318.7). Total num frames: 1445888. Throughput: 0: 591.1. Samples: 361644. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:25:48,169][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:25:53,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 1458176. Throughput: 0: 573.7. Samples: 364742. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:25:53,170][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:25:58,160][00231] Fps is (10 sec: 2049.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1466368. Throughput: 0: 572.4. Samples: 366122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:25:58,168][00231] Avg episode reward: [(0, '0.798')] |
|
[2023-08-05 17:26:00,410][17545] Updated weights for policy 0, policy_version 360 (0.0054) |
|
[2023-08-05 17:26:03,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 1478656. Throughput: 0: 591.2. Samples: 369792. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:26:03,162][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:26:08,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.6, 300 sec: 2332.6). Total num frames: 1495040. Throughput: 0: 594.5. Samples: 374070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:26:08,163][00231] Avg episode reward: [(0, '0.802')] |
|
[2023-08-05 17:26:13,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.2, 300 sec: 2304.9). Total num frames: 1503232. Throughput: 0: 584.7. Samples: 375690. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:26:13,164][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:26:17,786][17545] Updated weights for policy 0, policy_version 370 (0.0012) |
|
[2023-08-05 17:26:18,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 1515520. Throughput: 0: 577.2. Samples: 378538. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:26:18,163][00231] Avg episode reward: [(0, '0.794')] |
|
[2023-08-05 17:26:23,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2332.7). Total num frames: 1527808. Throughput: 0: 596.2. Samples: 382252. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:26:23,164][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:26:28,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1540096. Throughput: 0: 597.5. Samples: 384354. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:26:28,162][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:26:33,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 1552384. Throughput: 0: 588.5. Samples: 388122. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:26:33,166][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:26:34,428][17545] Updated weights for policy 0, policy_version 380 (0.0020) |
|
[2023-08-05 17:26:38,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 1560576. Throughput: 0: 582.3. Samples: 390944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:26:38,169][00231] Avg episode reward: [(0, '0.790')] |
|
[2023-08-05 17:26:43,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1572864. Throughput: 0: 586.6. Samples: 392520. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:26:43,169][00231] Avg episode reward: [(0, '0.815')] |
|
[2023-08-05 17:26:48,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.6, 300 sec: 2332.6). Total num frames: 1589248. Throughput: 0: 599.0. Samples: 396748. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:26:48,163][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:26:50,715][17545] Updated weights for policy 0, policy_version 390 (0.0026) |
|
[2023-08-05 17:26:53,162][00231] Fps is (10 sec: 2457.2, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 1597440. Throughput: 0: 586.5. Samples: 400462. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:26:53,165][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:26:58,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 1609728. Throughput: 0: 582.0. Samples: 401882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:26:58,169][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:27:03,160][00231] Fps is (10 sec: 2458.0, 60 sec: 2389.3, 300 sec: 2332.7). Total num frames: 1622016. Throughput: 0: 585.2. Samples: 404870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:27:03,167][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:27:08,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 1634304. Throughput: 0: 598.3. Samples: 409176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:27:08,166][00231] Avg episode reward: [(0, '0.794')] |
|
[2023-08-05 17:27:08,627][17545] Updated weights for policy 0, policy_version 400 (0.0025) |
|
[2023-08-05 17:27:13,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 1646592. Throughput: 0: 599.0. Samples: 411308. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:27:13,163][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:27:13,170][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000402_1646592.pth... |
|
[2023-08-05 17:27:13,352][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000263_1077248.pth |
|
[2023-08-05 17:27:18,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1654784. Throughput: 0: 579.6. Samples: 414206. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:27:18,163][00231] Avg episode reward: [(0, '0.802')] |
|
[2023-08-05 17:27:23,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 1667072. Throughput: 0: 585.4. Samples: 417288. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:27:23,168][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:27:26,530][17545] Updated weights for policy 0, policy_version 410 (0.0032) |
|
[2023-08-05 17:27:28,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 1683456. Throughput: 0: 597.6. Samples: 419414. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:27:28,163][00231] Avg episode reward: [(0, '0.821')] |
|
[2023-08-05 17:27:33,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2346.6). Total num frames: 1695744. Throughput: 0: 596.3. Samples: 423582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:27:33,163][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:27:38,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 1703936. Throughput: 0: 577.2. Samples: 426436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:27:38,169][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:27:43,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 1712128. Throughput: 0: 576.6. Samples: 427828. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:27:43,167][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:27:44,978][17545] Updated weights for policy 0, policy_version 420 (0.0023) |
|
[2023-08-05 17:27:48,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2374.3). Total num frames: 1728512. Throughput: 0: 591.3. Samples: 431480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:27:48,163][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:27:53,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.4, 300 sec: 2360.4). Total num frames: 1740800. Throughput: 0: 591.2. Samples: 435780. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:27:53,168][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:27:58,161][00231] Fps is (10 sec: 2047.9, 60 sec: 2321.0, 300 sec: 2346.5). Total num frames: 1748992. Throughput: 0: 575.0. Samples: 437182. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:27:58,166][00231] Avg episode reward: [(0, '0.802')] |
|
[2023-08-05 17:28:02,855][17545] Updated weights for policy 0, policy_version 430 (0.0040) |
|
[2023-08-05 17:28:03,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2360.4). Total num frames: 1761280. Throughput: 0: 572.4. Samples: 439966. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:28:03,163][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:28:08,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2321.1, 300 sec: 2360.4). Total num frames: 1773568. Throughput: 0: 588.0. Samples: 443746. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:28:08,163][00231] Avg episode reward: [(0, '0.789')] |
|
[2023-08-05 17:28:13,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2374.3). Total num frames: 1789952. Throughput: 0: 588.2. Samples: 445882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:28:13,162][00231] Avg episode reward: [(0, '0.815')] |
|
[2023-08-05 17:28:18,163][00231] Fps is (10 sec: 2456.9, 60 sec: 2389.2, 300 sec: 2346.5). Total num frames: 1798144. Throughput: 0: 572.8. Samples: 449358. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:28:18,167][00231] Avg episode reward: [(0, '0.822')] |
|
[2023-08-05 17:28:20,201][17545] Updated weights for policy 0, policy_version 440 (0.0024) |
|
[2023-08-05 17:28:23,162][00231] Fps is (10 sec: 1638.1, 60 sec: 2321.0, 300 sec: 2346.5). Total num frames: 1806336. Throughput: 0: 570.0. Samples: 452086. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:28:23,168][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:28:28,161][00231] Fps is (10 sec: 2048.5, 60 sec: 2252.8, 300 sec: 2346.5). Total num frames: 1818624. Throughput: 0: 578.1. Samples: 453844. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:28:28,166][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:28:33,160][00231] Fps is (10 sec: 2458.1, 60 sec: 2252.8, 300 sec: 2346.5). Total num frames: 1830912. Throughput: 0: 588.5. Samples: 457964. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:28:33,166][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:28:36,878][17545] Updated weights for policy 0, policy_version 450 (0.0021) |
|
[2023-08-05 17:28:38,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 1843200. Throughput: 0: 569.6. Samples: 461410. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:28:38,165][00231] Avg episode reward: [(0, '0.793')] |
|
[2023-08-05 17:28:43,167][00231] Fps is (10 sec: 2046.7, 60 sec: 2320.8, 300 sec: 2332.6). Total num frames: 1851392. Throughput: 0: 570.5. Samples: 462856. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:28:43,173][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:28:48,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2346.5). Total num frames: 1863680. Throughput: 0: 576.6. Samples: 465912. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:28:48,163][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:28:53,160][00231] Fps is (10 sec: 2869.1, 60 sec: 2321.1, 300 sec: 2360.4). Total num frames: 1880064. Throughput: 0: 587.5. Samples: 470184. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:28:53,163][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:28:54,200][17545] Updated weights for policy 0, policy_version 460 (0.0025) |
|
[2023-08-05 17:28:58,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.4, 300 sec: 2346.5). Total num frames: 1892352. Throughput: 0: 586.2. Samples: 472260. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:28:58,164][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:29:03,162][00231] Fps is (10 sec: 2047.6, 60 sec: 2321.0, 300 sec: 2332.6). Total num frames: 1900544. Throughput: 0: 571.3. Samples: 475064. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:29:03,166][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:29:08,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 1912832. Throughput: 0: 581.1. Samples: 478236. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:29:08,163][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:29:12,098][17545] Updated weights for policy 0, policy_version 470 (0.0016) |
|
[2023-08-05 17:29:13,160][00231] Fps is (10 sec: 2458.1, 60 sec: 2252.8, 300 sec: 2346.5). Total num frames: 1925120. Throughput: 0: 589.5. Samples: 480372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:29:13,163][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:29:13,177][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000470_1925120.pth... |
|
[2023-08-05 17:29:13,320][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000333_1363968.pth |
|
[2023-08-05 17:29:18,164][00231] Fps is (10 sec: 2456.7, 60 sec: 2321.0, 300 sec: 2332.6). Total num frames: 1937408. Throughput: 0: 591.8. Samples: 484598. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:29:18,167][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:29:23,163][00231] Fps is (10 sec: 2047.4, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 1945600. Throughput: 0: 578.2. Samples: 487432. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:29:23,165][00231] Avg episode reward: [(0, '0.822')] |
|
[2023-08-05 17:29:28,162][00231] Fps is (10 sec: 2048.4, 60 sec: 2321.0, 300 sec: 2332.6). Total num frames: 1957888. Throughput: 0: 578.3. Samples: 488878. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-05 17:29:28,164][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:29:30,440][17545] Updated weights for policy 0, policy_version 480 (0.0017) |
|
[2023-08-05 17:29:33,160][00231] Fps is (10 sec: 2458.3, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 1970176. Throughput: 0: 597.6. Samples: 492802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:29:33,163][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:29:38,160][00231] Fps is (10 sec: 2867.7, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 1986560. Throughput: 0: 595.5. Samples: 496982. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:29:38,166][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:29:43,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.6, 300 sec: 2332.6). Total num frames: 1994752. Throughput: 0: 580.9. Samples: 498400. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:29:43,167][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:29:48,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2002944. Throughput: 0: 581.4. Samples: 501226. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:29:48,167][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:29:48,404][17545] Updated weights for policy 0, policy_version 490 (0.0017) |
|
[2023-08-05 17:29:53,161][00231] Fps is (10 sec: 2457.5, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 2019328. Throughput: 0: 599.3. Samples: 505204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:29:53,168][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:29:58,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2031616. Throughput: 0: 598.9. Samples: 507324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:29:58,165][00231] Avg episode reward: [(0, '0.798')] |
|
[2023-08-05 17:30:03,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2389.4, 300 sec: 2346.6). Total num frames: 2043904. Throughput: 0: 579.9. Samples: 510692. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:30:03,164][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:30:04,750][17545] Updated weights for policy 0, policy_version 500 (0.0019) |
|
[2023-08-05 17:30:08,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2332.7). Total num frames: 2052096. Throughput: 0: 578.8. Samples: 513478. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:30:08,168][00231] Avg episode reward: [(0, '0.801')] |
|
[2023-08-05 17:30:13,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 2064384. Throughput: 0: 588.4. Samples: 515356. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:30:13,163][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:30:18,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.5, 300 sec: 2346.5). Total num frames: 2080768. Throughput: 0: 597.3. Samples: 519682. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:30:18,168][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:30:21,519][17545] Updated weights for policy 0, policy_version 510 (0.0018) |
|
[2023-08-05 17:30:23,163][00231] Fps is (10 sec: 2456.9, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 2088960. Throughput: 0: 575.6. Samples: 522884. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:30:23,167][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:30:28,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.4, 300 sec: 2346.5). Total num frames: 2101248. Throughput: 0: 575.3. Samples: 524288. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:30:28,164][00231] Avg episode reward: [(0, '0.825')] |
|
[2023-08-05 17:30:33,160][00231] Fps is (10 sec: 2048.6, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2109440. Throughput: 0: 584.4. Samples: 527526. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:30:33,162][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:30:38,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 2125824. Throughput: 0: 590.0. Samples: 531752. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:30:38,163][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:30:39,251][17545] Updated weights for policy 0, policy_version 520 (0.0025) |
|
[2023-08-05 17:30:43,165][00231] Fps is (10 sec: 2456.4, 60 sec: 2320.9, 300 sec: 2332.6). Total num frames: 2134016. Throughput: 0: 583.8. Samples: 533600. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:30:43,173][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:30:48,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 2146304. Throughput: 0: 570.8. Samples: 536380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:30:48,167][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:30:53,160][00231] Fps is (10 sec: 2458.8, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 2158592. Throughput: 0: 582.2. Samples: 539676. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:30:53,162][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:30:57,344][17545] Updated weights for policy 0, policy_version 530 (0.0016) |
|
[2023-08-05 17:30:58,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 2170880. Throughput: 0: 587.4. Samples: 541788. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:30:58,163][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:31:03,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2183168. Throughput: 0: 579.7. Samples: 545770. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:31:03,164][00231] Avg episode reward: [(0, '0.820')] |
|
[2023-08-05 17:31:08,168][00231] Fps is (10 sec: 2046.4, 60 sec: 2320.8, 300 sec: 2332.6). Total num frames: 2191360. Throughput: 0: 562.6. Samples: 548202. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:31:08,179][00231] Avg episode reward: [(0, '0.801')] |
|
[2023-08-05 17:31:13,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 2199552. Throughput: 0: 551.6. Samples: 549112. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:31:13,165][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:31:13,176][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000537_2199552.pth... |
|
[2023-08-05 17:31:13,338][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000402_1646592.pth |
|
[2023-08-05 17:31:16,958][17545] Updated weights for policy 0, policy_version 540 (0.0029) |
|
[2023-08-05 17:31:18,162][00231] Fps is (10 sec: 2049.2, 60 sec: 2184.5, 300 sec: 2318.7). Total num frames: 2211840. Throughput: 0: 567.4. Samples: 553058. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:31:18,165][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:31:23,162][00231] Fps is (10 sec: 2457.1, 60 sec: 2252.8, 300 sec: 2318.7). Total num frames: 2224128. Throughput: 0: 560.9. Samples: 556994. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:31:23,172][00231] Avg episode reward: [(0, '0.787')] |
|
[2023-08-05 17:31:28,160][00231] Fps is (10 sec: 2458.1, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 2236416. Throughput: 0: 550.4. Samples: 558366. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:31:28,172][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:31:33,160][00231] Fps is (10 sec: 2048.4, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 2244608. Throughput: 0: 551.2. Samples: 561182. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:31:33,165][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:31:35,171][17545] Updated weights for policy 0, policy_version 550 (0.0014) |
|
[2023-08-05 17:31:38,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2252.8, 300 sec: 2332.6). Total num frames: 2260992. Throughput: 0: 568.2. Samples: 565246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:31:38,162][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:31:43,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.3, 300 sec: 2318.8). Total num frames: 2273280. Throughput: 0: 568.4. Samples: 567364. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:31:43,167][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:31:48,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 2281472. Throughput: 0: 550.8. Samples: 570556. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:31:48,170][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:31:53,166][00231] Fps is (10 sec: 1637.5, 60 sec: 2184.3, 300 sec: 2304.8). Total num frames: 2289664. Throughput: 0: 561.0. Samples: 573446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:31:53,172][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:31:53,314][17545] Updated weights for policy 0, policy_version 560 (0.0030) |
|
[2023-08-05 17:31:58,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 2306048. Throughput: 0: 584.1. Samples: 575396. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:31:58,168][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:32:03,161][00231] Fps is (10 sec: 2868.5, 60 sec: 2252.8, 300 sec: 2318.7). Total num frames: 2318336. Throughput: 0: 591.8. Samples: 579688. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:32:03,164][00231] Avg episode reward: [(0, '0.798')] |
|
[2023-08-05 17:32:08,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.4, 300 sec: 2318.8). Total num frames: 2330624. Throughput: 0: 574.2. Samples: 582830. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:32:08,167][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:32:10,168][17545] Updated weights for policy 0, policy_version 570 (0.0025) |
|
[2023-08-05 17:32:13,160][00231] Fps is (10 sec: 2048.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2338816. Throughput: 0: 574.8. Samples: 584230. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:32:13,165][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:32:18,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2351104. Throughput: 0: 591.3. Samples: 587792. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:32:18,163][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:32:23,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.4, 300 sec: 2318.8). Total num frames: 2367488. Throughput: 0: 598.6. Samples: 592182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:32:23,162][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:32:26,437][17545] Updated weights for policy 0, policy_version 580 (0.0016) |
|
[2023-08-05 17:32:28,164][00231] Fps is (10 sec: 2456.7, 60 sec: 2320.9, 300 sec: 2304.8). Total num frames: 2375680. Throughput: 0: 589.2. Samples: 593882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:32:28,167][00231] Avg episode reward: [(0, '0.824')] |
|
[2023-08-05 17:32:33,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 2387968. Throughput: 0: 580.3. Samples: 596668. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:32:33,163][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:32:38,160][00231] Fps is (10 sec: 2458.5, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2400256. Throughput: 0: 594.5. Samples: 600196. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:32:38,162][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:32:43,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2412544. Throughput: 0: 599.2. Samples: 602358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:32:43,162][00231] Avg episode reward: [(0, '0.819')] |
|
[2023-08-05 17:32:43,541][17545] Updated weights for policy 0, policy_version 590 (0.0020) |
|
[2023-08-05 17:32:48,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 2424832. Throughput: 0: 588.4. Samples: 606166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:32:48,174][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:32:53,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.6, 300 sec: 2318.8). Total num frames: 2433024. Throughput: 0: 580.9. Samples: 608970. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:32:53,165][00231] Avg episode reward: [(0, '0.819')] |
|
[2023-08-05 17:32:58,161][00231] Fps is (10 sec: 2047.9, 60 sec: 2321.1, 300 sec: 2318.7). Total num frames: 2445312. Throughput: 0: 581.8. Samples: 610412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:32:58,166][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:33:01,733][17545] Updated weights for policy 0, policy_version 600 (0.0015) |
|
[2023-08-05 17:33:03,170][00231] Fps is (10 sec: 2864.5, 60 sec: 2389.0, 300 sec: 2332.6). Total num frames: 2461696. Throughput: 0: 595.3. Samples: 614588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:33:03,173][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:33:08,163][00231] Fps is (10 sec: 2456.9, 60 sec: 2320.9, 300 sec: 2304.8). Total num frames: 2469888. Throughput: 0: 583.4. Samples: 618436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:33:08,168][00231] Avg episode reward: [(0, '0.802')] |
|
[2023-08-05 17:33:13,162][00231] Fps is (10 sec: 2049.5, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 2482176. Throughput: 0: 577.9. Samples: 619886. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:33:13,167][00231] Avg episode reward: [(0, '0.794')] |
|
[2023-08-05 17:33:13,183][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000606_2482176.pth... |
|
[2023-08-05 17:33:13,329][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000470_1925120.pth |
|
[2023-08-05 17:33:18,160][00231] Fps is (10 sec: 2048.7, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2490368. Throughput: 0: 578.7. Samples: 622710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:33:18,163][00231] Avg episode reward: [(0, '0.817')] |
|
[2023-08-05 17:33:19,972][17545] Updated weights for policy 0, policy_version 610 (0.0035) |
|
[2023-08-05 17:33:23,160][00231] Fps is (10 sec: 2458.1, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2506752. Throughput: 0: 592.7. Samples: 626866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:33:23,163][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:33:28,166][00231] Fps is (10 sec: 2865.5, 60 sec: 2389.2, 300 sec: 2332.6). Total num frames: 2519040. Throughput: 0: 591.9. Samples: 628996. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:33:28,169][00231] Avg episode reward: [(0, '0.821')] |
|
[2023-08-05 17:33:33,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2527232. Throughput: 0: 575.8. Samples: 632076. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:33:33,164][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:33:37,993][17545] Updated weights for policy 0, policy_version 620 (0.0016) |
|
[2023-08-05 17:33:38,160][00231] Fps is (10 sec: 2049.2, 60 sec: 2321.1, 300 sec: 2332.7). Total num frames: 2539520. Throughput: 0: 577.6. Samples: 634964. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:33:38,162][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:33:43,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2551808. Throughput: 0: 591.6. Samples: 637036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:33:43,163][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:33:48,161][00231] Fps is (10 sec: 2457.4, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 2564096. Throughput: 0: 591.6. Samples: 641206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:33:48,168][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:33:53,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 2576384. Throughput: 0: 573.2. Samples: 644228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:33:53,170][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:33:54,916][17545] Updated weights for policy 0, policy_version 630 (0.0032) |
|
[2023-08-05 17:33:58,160][00231] Fps is (10 sec: 2048.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2584576. Throughput: 0: 571.4. Samples: 645596. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:33:58,168][00231] Avg episode reward: [(0, '0.832')] |
|
[2023-08-05 17:33:58,171][17532] Saving new best policy, reward=0.832! |
|
[2023-08-05 17:34:03,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2253.2, 300 sec: 2318.8). Total num frames: 2596864. Throughput: 0: 587.9. Samples: 649164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:34:03,162][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:34:08,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.2, 300 sec: 2318.8). Total num frames: 2609152. Throughput: 0: 589.1. Samples: 653376. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:34:08,165][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:34:12,412][17545] Updated weights for policy 0, policy_version 640 (0.0029) |
|
[2023-08-05 17:34:13,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2621440. Throughput: 0: 575.7. Samples: 654898. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:34:13,169][00231] Avg episode reward: [(0, '0.827')] |
|
[2023-08-05 17:34:18,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2629632. Throughput: 0: 571.4. Samples: 657788. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:34:18,169][00231] Avg episode reward: [(0, '0.829')] |
|
[2023-08-05 17:34:23,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 2641920. Throughput: 0: 586.7. Samples: 661366. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:34:23,168][00231] Avg episode reward: [(0, '0.787')] |
|
[2023-08-05 17:34:28,163][00231] Fps is (10 sec: 2866.4, 60 sec: 2321.2, 300 sec: 2332.6). Total num frames: 2658304. Throughput: 0: 587.2. Samples: 663460. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:34:28,166][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:34:28,804][17545] Updated weights for policy 0, policy_version 650 (0.0021) |
|
[2023-08-05 17:34:33,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 2666496. Throughput: 0: 573.6. Samples: 667018. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:34:33,163][00231] Avg episode reward: [(0, '0.790')] |
|
[2023-08-05 17:34:38,160][00231] Fps is (10 sec: 2048.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2678784. Throughput: 0: 569.6. Samples: 669860. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:34:38,168][00231] Avg episode reward: [(0, '0.818')] |
|
[2023-08-05 17:34:43,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2691072. Throughput: 0: 571.9. Samples: 671332. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-05 17:34:43,163][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:34:47,349][17545] Updated weights for policy 0, policy_version 660 (0.0033) |
|
[2023-08-05 17:34:48,162][00231] Fps is (10 sec: 2457.1, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 2703360. Throughput: 0: 586.2. Samples: 675544. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:34:48,165][00231] Avg episode reward: [(0, '0.793')] |
|
[2023-08-05 17:34:53,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2715648. Throughput: 0: 574.0. Samples: 679206. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:34:53,163][00231] Avg episode reward: [(0, '0.831')] |
|
[2023-08-05 17:34:58,160][00231] Fps is (10 sec: 2048.5, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 2723840. Throughput: 0: 571.4. Samples: 680610. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:34:58,163][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:35:03,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2736128. Throughput: 0: 573.4. Samples: 683592. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:35:03,163][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:35:05,738][17545] Updated weights for policy 0, policy_version 670 (0.0013) |
|
[2023-08-05 17:35:08,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2748416. Throughput: 0: 587.7. Samples: 687814. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:35:08,168][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:35:13,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 2760704. Throughput: 0: 589.1. Samples: 689966. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:35:13,165][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:35:13,177][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000674_2760704.pth... |
|
[2023-08-05 17:35:13,182][00231] No heartbeat for components: RolloutWorker_w1 (1177 seconds) |
|
[2023-08-05 17:35:13,344][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000537_2199552.pth |
|
[2023-08-05 17:35:18,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 2768896. Throughput: 0: 573.1. Samples: 692808. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:35:18,165][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:35:23,161][00231] Fps is (10 sec: 2047.8, 60 sec: 2321.0, 300 sec: 2304.9). Total num frames: 2781184. Throughput: 0: 578.6. Samples: 695896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:35:23,166][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:35:23,476][17545] Updated weights for policy 0, policy_version 680 (0.0019) |
|
[2023-08-05 17:35:28,161][00231] Fps is (10 sec: 2867.1, 60 sec: 2321.2, 300 sec: 2332.6). Total num frames: 2797568. Throughput: 0: 593.7. Samples: 698048. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:35:28,163][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:35:33,165][00231] Fps is (10 sec: 2866.1, 60 sec: 2389.1, 300 sec: 2318.7). Total num frames: 2809856. Throughput: 0: 594.0. Samples: 702276. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:35:33,171][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:35:38,160][00231] Fps is (10 sec: 2048.1, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2818048. Throughput: 0: 576.4. Samples: 705144. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:35:38,164][00231] Avg episode reward: [(0, '0.792')] |
|
[2023-08-05 17:35:41,393][17545] Updated weights for policy 0, policy_version 690 (0.0029) |
|
[2023-08-05 17:35:43,175][00231] Fps is (10 sec: 1636.8, 60 sec: 2252.2, 300 sec: 2304.8). Total num frames: 2826240. Throughput: 0: 576.1. Samples: 706542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:35:43,177][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:35:48,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.2, 300 sec: 2318.8). Total num frames: 2842624. Throughput: 0: 592.5. Samples: 710256. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:35:48,164][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:35:53,162][00231] Fps is (10 sec: 2871.0, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 2854912. Throughput: 0: 591.8. Samples: 714448. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:35:53,164][00231] Avg episode reward: [(0, '0.791')] |
|
[2023-08-05 17:35:58,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 2863104. Throughput: 0: 577.5. Samples: 715954. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:35:58,163][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:35:58,321][17545] Updated weights for policy 0, policy_version 700 (0.0029) |
|
[2023-08-05 17:36:03,161][00231] Fps is (10 sec: 2048.2, 60 sec: 2321.0, 300 sec: 2318.8). Total num frames: 2875392. Throughput: 0: 576.3. Samples: 718742. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:36:03,164][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:36:08,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2887680. Throughput: 0: 590.2. Samples: 722454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:36:08,163][00231] Avg episode reward: [(0, '0.801')] |
|
[2023-08-05 17:36:13,160][00231] Fps is (10 sec: 2867.5, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 2904064. Throughput: 0: 589.1. Samples: 724556. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:36:13,163][00231] Avg episode reward: [(0, '0.817')] |
|
[2023-08-05 17:36:15,167][17545] Updated weights for policy 0, policy_version 710 (0.0021) |
|
[2023-08-05 17:36:18,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2332.7). Total num frames: 2912256. Throughput: 0: 573.9. Samples: 728098. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:36:18,163][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:36:23,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2920448. Throughput: 0: 571.6. Samples: 730868. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:36:23,163][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:36:28,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2332.6). Total num frames: 2932736. Throughput: 0: 574.3. Samples: 732376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:36:28,167][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:36:33,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2253.0, 300 sec: 2318.8). Total num frames: 2945024. Throughput: 0: 584.5. Samples: 736560. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:36:33,162][00231] Avg episode reward: [(0, '0.815')] |
|
[2023-08-05 17:36:33,218][17545] Updated weights for policy 0, policy_version 720 (0.0042) |
|
[2023-08-05 17:36:38,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 2957312. Throughput: 0: 570.6. Samples: 740122. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:36:38,163][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:36:43,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.6, 300 sec: 2318.8). Total num frames: 2965504. Throughput: 0: 568.3. Samples: 741528. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:36:43,169][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:36:48,168][00231] Fps is (10 sec: 2046.4, 60 sec: 2252.5, 300 sec: 2332.6). Total num frames: 2977792. Throughput: 0: 572.4. Samples: 744502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:36:48,170][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:36:51,432][17545] Updated weights for policy 0, policy_version 730 (0.0019) |
|
[2023-08-05 17:36:53,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 2994176. Throughput: 0: 582.9. Samples: 748686. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:36:53,168][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:36:58,160][00231] Fps is (10 sec: 2869.4, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 3006464. Throughput: 0: 583.2. Samples: 750802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:36:58,169][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:37:03,160][00231] Fps is (10 sec: 2047.9, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3014656. Throughput: 0: 567.7. Samples: 753644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:37:03,163][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:37:08,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2252.8, 300 sec: 2318.8). Total num frames: 3022848. Throughput: 0: 569.7. Samples: 756504. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2023-08-05 17:37:08,168][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:37:09,790][17545] Updated weights for policy 0, policy_version 740 (0.0022) |
|
[2023-08-05 17:37:13,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2252.8, 300 sec: 2332.6). Total num frames: 3039232. Throughput: 0: 582.3. Samples: 758580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:37:13,164][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:37:13,180][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000742_3039232.pth... |
|
[2023-08-05 17:37:13,300][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000606_2482176.pth |
|
[2023-08-05 17:37:18,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3051520. Throughput: 0: 582.0. Samples: 762750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:37:18,162][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:37:23,167][00231] Fps is (10 sec: 2046.6, 60 sec: 2320.8, 300 sec: 2318.7). Total num frames: 3059712. Throughput: 0: 569.0. Samples: 765730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:37:23,169][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:37:28,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2252.8, 300 sec: 2304.9). Total num frames: 3067904. Throughput: 0: 566.4. Samples: 767018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:37:28,162][00231] Avg episode reward: [(0, '0.818')] |
|
[2023-08-05 17:37:28,365][17545] Updated weights for policy 0, policy_version 750 (0.0020) |
|
[2023-08-05 17:37:33,160][00231] Fps is (10 sec: 2459.3, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3084288. Throughput: 0: 582.1. Samples: 770690. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:37:33,162][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:37:38,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3096576. Throughput: 0: 580.8. Samples: 774822. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:37:38,166][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:37:43,162][00231] Fps is (10 sec: 2457.1, 60 sec: 2389.3, 300 sec: 2318.7). Total num frames: 3108864. Throughput: 0: 567.3. Samples: 776330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:37:43,168][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:37:45,337][17545] Updated weights for policy 0, policy_version 760 (0.0029) |
|
[2023-08-05 17:37:48,161][00231] Fps is (10 sec: 2047.8, 60 sec: 2321.3, 300 sec: 2318.7). Total num frames: 3117056. Throughput: 0: 567.2. Samples: 779168. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:37:48,169][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:37:53,163][00231] Fps is (10 sec: 2047.8, 60 sec: 2252.7, 300 sec: 2318.7). Total num frames: 3129344. Throughput: 0: 585.3. Samples: 782842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:37:53,172][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:37:58,160][00231] Fps is (10 sec: 2867.5, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3145728. Throughput: 0: 586.2. Samples: 784958. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:37:58,164][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:38:01,996][17545] Updated weights for policy 0, policy_version 770 (0.0021) |
|
[2023-08-05 17:38:03,160][00231] Fps is (10 sec: 2458.3, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3153920. Throughput: 0: 574.4. Samples: 788598. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:38:03,165][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:38:08,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 3162112. Throughput: 0: 571.2. Samples: 791428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:38:08,164][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:38:13,161][00231] Fps is (10 sec: 2047.8, 60 sec: 2252.8, 300 sec: 2318.7). Total num frames: 3174400. Throughput: 0: 578.3. Samples: 793044. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:38:13,167][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:38:18,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3190784. Throughput: 0: 590.6. Samples: 797266. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:38:18,170][00231] Avg episode reward: [(0, '0.790')] |
|
[2023-08-05 17:38:19,015][17545] Updated weights for policy 0, policy_version 780 (0.0014) |
|
[2023-08-05 17:38:23,163][00231] Fps is (10 sec: 2866.7, 60 sec: 2389.5, 300 sec: 2318.8). Total num frames: 3203072. Throughput: 0: 578.8. Samples: 800870. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:38:23,166][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:38:28,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 3211264. Throughput: 0: 576.7. Samples: 802282. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:38:28,165][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:38:33,160][00231] Fps is (10 sec: 2048.6, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3223552. Throughput: 0: 582.5. Samples: 805382. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:38:33,165][00231] Avg episode reward: [(0, '0.790')] |
|
[2023-08-05 17:38:37,029][17545] Updated weights for policy 0, policy_version 790 (0.0025) |
|
[2023-08-05 17:38:38,161][00231] Fps is (10 sec: 2457.4, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 3235840. Throughput: 0: 594.4. Samples: 809590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:38:38,168][00231] Avg episode reward: [(0, '0.794')] |
|
[2023-08-05 17:38:43,164][00231] Fps is (10 sec: 2456.6, 60 sec: 2321.0, 300 sec: 2318.7). Total num frames: 3248128. Throughput: 0: 594.7. Samples: 811722. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:38:43,172][00231] Avg episode reward: [(0, '0.816')] |
|
[2023-08-05 17:38:48,160][00231] Fps is (10 sec: 2048.1, 60 sec: 2321.1, 300 sec: 2304.9). Total num frames: 3256320. Throughput: 0: 577.9. Samples: 814604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:38:48,163][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:38:53,160][00231] Fps is (10 sec: 2048.8, 60 sec: 2321.2, 300 sec: 2318.8). Total num frames: 3268608. Throughput: 0: 580.5. Samples: 817550. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:38:53,163][00231] Avg episode reward: [(0, '0.831')] |
|
[2023-08-05 17:38:55,204][17545] Updated weights for policy 0, policy_version 800 (0.0030) |
|
[2023-08-05 17:38:58,163][00231] Fps is (10 sec: 2866.5, 60 sec: 2321.0, 300 sec: 2332.6). Total num frames: 3284992. Throughput: 0: 591.7. Samples: 819672. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:38:58,165][00231] Avg episode reward: [(0, '0.819')] |
|
[2023-08-05 17:39:03,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 3297280. Throughput: 0: 594.5. Samples: 824020. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:39:03,170][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:39:08,160][00231] Fps is (10 sec: 2048.5, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 3305472. Throughput: 0: 578.3. Samples: 826892. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:39:08,168][00231] Avg episode reward: [(0, '0.818')] |
|
[2023-08-05 17:39:13,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3313664. Throughput: 0: 567.4. Samples: 827816. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:39:13,162][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:39:13,175][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000809_3313664.pth... |
|
[2023-08-05 17:39:13,301][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000674_2760704.pth |
|
[2023-08-05 17:39:14,062][17545] Updated weights for policy 0, policy_version 810 (0.0017) |
|
[2023-08-05 17:39:18,161][00231] Fps is (10 sec: 2457.5, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 3330048. Throughput: 0: 584.0. Samples: 831664. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:39:18,162][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:39:23,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.2, 300 sec: 2318.8). Total num frames: 3342336. Throughput: 0: 585.6. Samples: 835942. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:39:23,168][00231] Avg episode reward: [(0, '0.817')] |
|
[2023-08-05 17:39:28,163][00231] Fps is (10 sec: 2047.4, 60 sec: 2320.9, 300 sec: 2318.7). Total num frames: 3350528. Throughput: 0: 569.0. Samples: 837326. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:39:28,166][00231] Avg episode reward: [(0, '0.808')] |
|
[2023-08-05 17:39:31,231][17545] Updated weights for policy 0, policy_version 820 (0.0015) |
|
[2023-08-05 17:39:33,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2252.8, 300 sec: 2304.9). Total num frames: 3358720. Throughput: 0: 566.7. Samples: 840104. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:39:33,164][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:39:38,160][00231] Fps is (10 sec: 2458.4, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3375104. Throughput: 0: 588.6. Samples: 844036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:39:38,162][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:39:43,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.2, 300 sec: 2318.8). Total num frames: 3387392. Throughput: 0: 589.5. Samples: 846198. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:39:43,167][00231] Avg episode reward: [(0, '0.797')] |
|
[2023-08-05 17:39:47,920][17545] Updated weights for policy 0, policy_version 830 (0.0013) |
|
[2023-08-05 17:39:48,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2318.8). Total num frames: 3399680. Throughput: 0: 569.0. Samples: 849626. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:39:48,165][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:39:53,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3407872. Throughput: 0: 570.1. Samples: 852548. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:39:53,169][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:39:58,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2252.9, 300 sec: 2318.8). Total num frames: 3420160. Throughput: 0: 590.8. Samples: 854402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:39:58,162][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:40:03,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 3436544. Throughput: 0: 600.7. Samples: 858694. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:40:03,163][00231] Avg episode reward: [(0, '0.836')] |
|
[2023-08-05 17:40:03,178][17532] Saving new best policy, reward=0.836! |
|
[2023-08-05 17:40:03,966][17545] Updated weights for policy 0, policy_version 840 (0.0014) |
|
[2023-08-05 17:40:08,166][00231] Fps is (10 sec: 2456.3, 60 sec: 2320.9, 300 sec: 2318.7). Total num frames: 3444736. Throughput: 0: 580.6. Samples: 862072. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:40:08,175][00231] Avg episode reward: [(0, '0.815')] |
|
[2023-08-05 17:40:13,163][00231] Fps is (10 sec: 2047.4, 60 sec: 2389.2, 300 sec: 2332.6). Total num frames: 3457024. Throughput: 0: 581.3. Samples: 863484. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:40:13,165][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:40:18,160][00231] Fps is (10 sec: 2459.0, 60 sec: 2321.1, 300 sec: 2332.6). Total num frames: 3469312. Throughput: 0: 592.8. Samples: 866780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:40:18,163][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:40:22,188][17545] Updated weights for policy 0, policy_version 850 (0.0019) |
|
[2023-08-05 17:40:23,160][00231] Fps is (10 sec: 2458.3, 60 sec: 2321.1, 300 sec: 2318.8). Total num frames: 3481600. Throughput: 0: 602.8. Samples: 871162. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:40:23,163][00231] Avg episode reward: [(0, '0.794')] |
|
[2023-08-05 17:40:28,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.5, 300 sec: 2318.8). Total num frames: 3493888. Throughput: 0: 601.2. Samples: 873252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:40:28,168][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:40:33,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2457.6, 300 sec: 2332.6). Total num frames: 3506176. Throughput: 0: 589.6. Samples: 876156. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:40:33,166][00231] Avg episode reward: [(0, '0.788')] |
|
[2023-08-05 17:40:38,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2332.8). Total num frames: 3514368. Throughput: 0: 600.6. Samples: 879574. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:40:38,163][00231] Avg episode reward: [(0, '0.801')] |
|
[2023-08-05 17:40:39,434][17545] Updated weights for policy 0, policy_version 860 (0.0022) |
|
[2023-08-05 17:40:43,162][00231] Fps is (10 sec: 2457.2, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 3530752. Throughput: 0: 609.1. Samples: 881812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:40:43,165][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:40:48,168][00231] Fps is (10 sec: 2865.1, 60 sec: 2389.0, 300 sec: 2332.6). Total num frames: 3543040. Throughput: 0: 606.2. Samples: 885978. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:40:48,171][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:40:53,160][00231] Fps is (10 sec: 2048.3, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 3551232. Throughput: 0: 595.6. Samples: 888872. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:40:53,163][00231] Avg episode reward: [(0, '0.786')] |
|
[2023-08-05 17:40:56,725][17545] Updated weights for policy 0, policy_version 870 (0.0024) |
|
[2023-08-05 17:40:58,160][00231] Fps is (10 sec: 2049.5, 60 sec: 2389.3, 300 sec: 2332.6). Total num frames: 3563520. Throughput: 0: 597.5. Samples: 890368. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:40:58,163][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:41:03,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 3579904. Throughput: 0: 617.1. Samples: 894550. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:41:03,163][00231] Avg episode reward: [(0, '0.824')] |
|
[2023-08-05 17:41:08,164][00231] Fps is (10 sec: 2866.1, 60 sec: 2457.7, 300 sec: 2332.6). Total num frames: 3592192. Throughput: 0: 610.6. Samples: 898640. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:41:08,170][00231] Avg episode reward: [(0, '0.801')] |
|
[2023-08-05 17:41:13,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.4, 300 sec: 2332.6). Total num frames: 3600384. Throughput: 0: 596.4. Samples: 900092. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:41:13,163][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:41:13,180][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000879_3600384.pth... |
|
[2023-08-05 17:41:13,354][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000742_3039232.pth |
|
[2023-08-05 17:41:13,966][17545] Updated weights for policy 0, policy_version 880 (0.0017) |
|
[2023-08-05 17:41:18,165][00231] Fps is (10 sec: 2047.8, 60 sec: 2389.1, 300 sec: 2346.5). Total num frames: 3612672. Throughput: 0: 594.0. Samples: 902890. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-05 17:41:18,167][00231] Avg episode reward: [(0, '0.813')] |
|
[2023-08-05 17:41:23,163][00231] Fps is (10 sec: 2457.0, 60 sec: 2389.2, 300 sec: 2346.5). Total num frames: 3624960. Throughput: 0: 612.4. Samples: 907132. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:41:23,165][00231] Avg episode reward: [(0, '0.798')] |
|
[2023-08-05 17:41:28,161][00231] Fps is (10 sec: 2868.4, 60 sec: 2457.6, 300 sec: 2360.4). Total num frames: 3641344. Throughput: 0: 611.2. Samples: 909314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:41:28,169][00231] Avg episode reward: [(0, '0.824')] |
|
[2023-08-05 17:41:29,387][17545] Updated weights for policy 0, policy_version 890 (0.0026) |
|
[2023-08-05 17:41:33,161][00231] Fps is (10 sec: 2458.2, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 3649536. Throughput: 0: 593.9. Samples: 912698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:41:33,165][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:41:38,160][00231] Fps is (10 sec: 2048.2, 60 sec: 2457.6, 300 sec: 2360.4). Total num frames: 3661824. Throughput: 0: 595.4. Samples: 915666. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:41:38,163][00231] Avg episode reward: [(0, '0.804')] |
|
[2023-08-05 17:41:43,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2389.4, 300 sec: 2360.5). Total num frames: 3674112. Throughput: 0: 608.3. Samples: 917742. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:41:43,163][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:41:46,572][17545] Updated weights for policy 0, policy_version 900 (0.0013) |
|
[2023-08-05 17:41:48,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2457.9, 300 sec: 2360.4). Total num frames: 3690496. Throughput: 0: 609.1. Samples: 921958. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:41:48,163][00231] Avg episode reward: [(0, '0.810')] |
|
[2023-08-05 17:41:53,161][00231] Fps is (10 sec: 2457.5, 60 sec: 2457.6, 300 sec: 2346.5). Total num frames: 3698688. Throughput: 0: 589.8. Samples: 925178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:41:53,163][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:41:58,160][00231] Fps is (10 sec: 1638.4, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 3706880. Throughput: 0: 588.9. Samples: 926594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:41:58,163][00231] Avg episode reward: [(0, '0.805')] |
|
[2023-08-05 17:42:03,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2389.3, 300 sec: 2374.3). Total num frames: 3723264. Throughput: 0: 606.6. Samples: 930186. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:42:03,163][00231] Avg episode reward: [(0, '0.819')] |
|
[2023-08-05 17:42:04,352][17545] Updated weights for policy 0, policy_version 910 (0.0018) |
|
[2023-08-05 17:42:08,161][00231] Fps is (10 sec: 2867.0, 60 sec: 2389.5, 300 sec: 2360.4). Total num frames: 3735552. Throughput: 0: 607.9. Samples: 934488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:42:08,164][00231] Avg episode reward: [(0, '0.796')] |
|
[2023-08-05 17:42:13,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2346.5). Total num frames: 3743744. Throughput: 0: 597.0. Samples: 936178. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:42:13,167][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:42:18,162][00231] Fps is (10 sec: 2047.8, 60 sec: 2389.5, 300 sec: 2360.4). Total num frames: 3756032. Throughput: 0: 585.7. Samples: 939054. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:42:18,165][00231] Avg episode reward: [(0, '0.807')] |
|
[2023-08-05 17:42:21,920][17545] Updated weights for policy 0, policy_version 920 (0.0014) |
|
[2023-08-05 17:42:23,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.4, 300 sec: 2374.3). Total num frames: 3768320. Throughput: 0: 602.0. Samples: 942754. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:42:23,165][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:42:28,160][00231] Fps is (10 sec: 2458.1, 60 sec: 2321.1, 300 sec: 2360.4). Total num frames: 3780608. Throughput: 0: 603.5. Samples: 944900. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-05 17:42:28,163][00231] Avg episode reward: [(0, '0.814')] |
|
[2023-08-05 17:42:33,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2360.4). Total num frames: 3792896. Throughput: 0: 575.2. Samples: 947840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:42:33,166][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:42:38,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 3801088. Throughput: 0: 568.0. Samples: 950736. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:42:38,163][00231] Avg episode reward: [(0, '0.824')] |
|
[2023-08-05 17:42:41,169][17545] Updated weights for policy 0, policy_version 930 (0.0016) |
|
[2023-08-05 17:42:43,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2360.4). Total num frames: 3813376. Throughput: 0: 572.8. Samples: 952368. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:42:43,170][00231] Avg episode reward: [(0, '0.806')] |
|
[2023-08-05 17:42:48,160][00231] Fps is (10 sec: 2457.6, 60 sec: 2252.8, 300 sec: 2360.4). Total num frames: 3825664. Throughput: 0: 587.5. Samples: 956624. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:42:48,163][00231] Avg episode reward: [(0, '0.836')] |
|
[2023-08-05 17:42:53,161][00231] Fps is (10 sec: 2457.3, 60 sec: 2321.0, 300 sec: 2346.5). Total num frames: 3837952. Throughput: 0: 570.8. Samples: 960174. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:42:53,164][00231] Avg episode reward: [(0, '0.799')] |
|
[2023-08-05 17:42:58,170][00231] Fps is (10 sec: 2046.0, 60 sec: 2320.7, 300 sec: 2346.4). Total num frames: 3846144. Throughput: 0: 564.1. Samples: 961566. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:42:58,177][00231] Avg episode reward: [(0, '0.811')] |
|
[2023-08-05 17:42:58,213][17545] Updated weights for policy 0, policy_version 940 (0.0020) |
|
[2023-08-05 17:43:03,161][00231] Fps is (10 sec: 2457.8, 60 sec: 2321.0, 300 sec: 2374.3). Total num frames: 3862528. Throughput: 0: 572.6. Samples: 964818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:43:03,168][00231] Avg episode reward: [(0, '0.809')] |
|
[2023-08-05 17:43:08,160][00231] Fps is (10 sec: 2870.0, 60 sec: 2321.1, 300 sec: 2374.3). Total num frames: 3874816. Throughput: 0: 585.9. Samples: 969118. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-05 17:43:08,167][00231] Avg episode reward: [(0, '0.795')] |
|
[2023-08-05 17:43:13,160][00231] Fps is (10 sec: 2457.7, 60 sec: 2389.3, 300 sec: 2360.4). Total num frames: 3887104. Throughput: 0: 585.6. Samples: 971252. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:43:13,162][00231] Avg episode reward: [(0, '0.784')] |
|
[2023-08-05 17:43:13,186][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000949_3887104.pth... |
|
[2023-08-05 17:43:13,367][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000809_3313664.pth |
|
[2023-08-05 17:43:14,076][17545] Updated weights for policy 0, policy_version 950 (0.0014) |
|
[2023-08-05 17:43:18,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2346.5). Total num frames: 3895296. Throughput: 0: 585.5. Samples: 974188. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:43:18,165][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:43:23,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2360.4). Total num frames: 3907584. Throughput: 0: 595.8. Samples: 977548. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-05 17:43:23,163][00231] Avg episode reward: [(0, '0.831')] |
|
[2023-08-05 17:43:28,162][00231] Fps is (10 sec: 2866.6, 60 sec: 2389.2, 300 sec: 2374.3). Total num frames: 3923968. Throughput: 0: 607.8. Samples: 979722. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-05 17:43:28,166][00231] Avg episode reward: [(0, '0.803')] |
|
[2023-08-05 17:43:30,418][17545] Updated weights for policy 0, policy_version 960 (0.0020) |
|
[2023-08-05 17:43:33,160][00231] Fps is (10 sec: 2867.2, 60 sec: 2389.3, 300 sec: 2374.3). Total num frames: 3936256. Throughput: 0: 608.4. Samples: 984004. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:43:33,168][00231] Avg episode reward: [(0, '0.787')] |
|
[2023-08-05 17:43:38,160][00231] Fps is (10 sec: 2048.5, 60 sec: 2389.3, 300 sec: 2360.4). Total num frames: 3944448. Throughput: 0: 594.1. Samples: 986910. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-08-05 17:43:38,163][00231] Avg episode reward: [(0, '0.800')] |
|
[2023-08-05 17:43:43,160][00231] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2374.3). Total num frames: 3956736. Throughput: 0: 596.3. Samples: 988392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-05 17:43:43,162][00231] Avg episode reward: [(0, '0.836')] |
|
[2023-08-05 17:43:48,064][17545] Updated weights for policy 0, policy_version 970 (0.0031) |
|
[2023-08-05 17:43:48,164][00231] Fps is (10 sec: 2866.2, 60 sec: 2457.5, 300 sec: 2388.1). Total num frames: 3973120. Throughput: 0: 614.6. Samples: 992478. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:43:48,166][00231] Avg episode reward: [(0, '0.812')] |
|
[2023-08-05 17:43:53,161][00231] Fps is (10 sec: 2866.9, 60 sec: 2457.6, 300 sec: 2374.3). Total num frames: 3985408. Throughput: 0: 610.3. Samples: 996582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:43:53,166][00231] Avg episode reward: [(0, '0.788')] |
|
[2023-08-05 17:43:58,162][00231] Fps is (10 sec: 2048.3, 60 sec: 2457.9, 300 sec: 2360.4). Total num frames: 3993600. Throughput: 0: 593.7. Samples: 997968. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-05 17:43:58,166][00231] Avg episode reward: [(0, '0.820')] |
|
[2023-08-05 17:44:02,990][17532] Stopping Batcher_0... |
|
[2023-08-05 17:44:02,992][17532] Loop batcher_evt_loop terminating... |
|
[2023-08-05 17:44:02,993][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-08-05 17:44:02,999][00231] Component Batcher_0 stopped! |
|
[2023-08-05 17:44:03,007][00231] Component RolloutWorker_w1 process died already! Don't wait for it. |
|
[2023-08-05 17:44:03,067][17553] Stopping RolloutWorker_w7... |
|
[2023-08-05 17:44:03,068][00231] Component RolloutWorker_w7 stopped! |
|
[2023-08-05 17:44:03,068][17553] Loop rollout_proc7_evt_loop terminating... |
|
[2023-08-05 17:44:03,088][17545] Weights refcount: 2 0 |
|
[2023-08-05 17:44:03,092][17545] Stopping InferenceWorker_p0-w0... |
|
[2023-08-05 17:44:03,096][17545] Loop inference_proc0-0_evt_loop terminating... |
|
[2023-08-05 17:44:03,092][00231] Component InferenceWorker_p0-w0 stopped! |
|
[2023-08-05 17:44:03,103][00231] Component RolloutWorker_w5 stopped! |
|
[2023-08-05 17:44:03,109][17551] Stopping RolloutWorker_w5... |
|
[2023-08-05 17:44:03,112][17551] Loop rollout_proc5_evt_loop terminating... |
|
[2023-08-05 17:44:03,117][17548] Stopping RolloutWorker_w2... |
|
[2023-08-05 17:44:03,120][17550] Stopping RolloutWorker_w4... |
|
[2023-08-05 17:44:03,116][00231] Component RolloutWorker_w3 stopped! |
|
[2023-08-05 17:44:03,123][00231] Component RolloutWorker_w2 stopped! |
|
[2023-08-05 17:44:03,126][17549] Stopping RolloutWorker_w3... |
|
[2023-08-05 17:44:03,125][00231] Component RolloutWorker_w4 stopped! |
|
[2023-08-05 17:44:03,131][17549] Loop rollout_proc3_evt_loop terminating... |
|
[2023-08-05 17:44:03,118][17548] Loop rollout_proc2_evt_loop terminating... |
|
[2023-08-05 17:44:03,121][17550] Loop rollout_proc4_evt_loop terminating... |
|
[2023-08-05 17:44:03,147][17552] Stopping RolloutWorker_w6... |
|
[2023-08-05 17:44:03,147][00231] Component RolloutWorker_w6 stopped! |
|
[2023-08-05 17:44:03,156][00231] Component RolloutWorker_w0 stopped! |
|
[2023-08-05 17:44:03,156][17546] Stopping RolloutWorker_w0... |
|
[2023-08-05 17:44:03,148][17552] Loop rollout_proc6_evt_loop terminating... |
|
[2023-08-05 17:44:03,160][17546] Loop rollout_proc0_evt_loop terminating... |
|
[2023-08-05 17:44:03,200][17532] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000879_3600384.pth |
|
[2023-08-05 17:44:03,214][17532] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-08-05 17:44:03,404][17532] Stopping LearnerWorker_p0... |
|
[2023-08-05 17:44:03,405][17532] Loop learner_proc0_evt_loop terminating... |
|
[2023-08-05 17:44:03,404][00231] Component LearnerWorker_p0 stopped! |
|
[2023-08-05 17:44:03,406][00231] Waiting for process learner_proc0 to stop... |
|
[2023-08-05 17:44:05,380][00231] Waiting for process inference_proc0-0 to join... |
|
[2023-08-05 17:44:05,389][00231] Waiting for process rollout_proc0 to join... |
|
[2023-08-05 17:44:06,623][00231] Waiting for process rollout_proc1 to join... |
|
[2023-08-05 17:44:06,624][00231] Waiting for process rollout_proc2 to join... |
|
[2023-08-05 17:44:06,627][00231] Waiting for process rollout_proc3 to join...
[2023-08-05 17:44:06,628][00231] Waiting for process rollout_proc4 to join...
[2023-08-05 17:44:06,630][00231] Waiting for process rollout_proc5 to join...
[2023-08-05 17:44:06,631][00231] Waiting for process rollout_proc6 to join...
[2023-08-05 17:44:06,633][00231] Waiting for process rollout_proc7 to join...
[2023-08-05 17:44:06,634][00231] Batcher 0 profile tree view:
batching: 29.1155, releasing_batches: 0.0239
[2023-08-05 17:44:06,635][00231] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0015
  wait_policy_total: 862.3400
update_model: 10.5847
  weight_update: 0.0016
one_step: 0.0028
  handle_policy_step: 768.3258
    deserialize: 20.1242, stack: 4.1285, obs_to_device_normalize: 149.0621, forward: 432.4213, send_messages: 30.9742
    prepare_outputs: 94.9461
      to_cpu: 54.3375
[2023-08-05 17:44:06,637][00231] Learner 0 profile tree view:
misc: 0.0052, prepare_batch: 20.4205
train: 75.3661
  epoch_init: 0.0065, minibatch_init: 0.0130, losses_postprocess: 0.5354, kl_divergence: 0.6982, after_optimizer: 5.0648
  calculate_losses: 24.7673
    losses_init: 0.0038, forward_head: 1.3606, bptt_initial: 17.4871, tail: 1.1171, advantages_returns: 0.2926, losses: 2.5813
    bptt: 1.6112
      bptt_forward_core: 1.5174
  update: 43.5468
    clip: 32.5899
[2023-08-05 17:44:06,638][00231] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.4777, enqueue_policy_requests: 184.0038, env_step: 1304.7572, overhead: 36.1374, complete_rollouts: 8.1690
save_policy_outputs: 32.6697
  split_output_tensors: 15.9178
[2023-08-05 17:44:06,639][00231] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.5387, enqueue_policy_requests: 255.2385, env_step: 1231.3279, overhead: 35.8112, complete_rollouts: 7.6473
save_policy_outputs: 30.6840
  split_output_tensors: 14.5865
[2023-08-05 17:44:06,640][00231] Loop Runner_EvtLoop terminating...
[2023-08-05 17:44:06,642][00231] Runner profile tree view:
main_loop: 1731.2658
[2023-08-05 17:44:06,643][00231] Collected {0: 4005888}, FPS: 2313.8
[2023-08-05 17:44:21,211][00231] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-08-05 17:44:21,212][00231] Overriding arg 'num_workers' with value 1 passed from command line
[2023-08-05 17:44:21,218][00231] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-08-05 17:44:21,220][00231] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-08-05 17:44:21,223][00231] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-08-05 17:44:21,224][00231] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-08-05 17:44:21,228][00231] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-08-05 17:44:21,229][00231] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-08-05 17:44:21,233][00231] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-08-05 17:44:21,234][00231] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-08-05 17:44:21,235][00231] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-08-05 17:44:21,238][00231] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-08-05 17:44:21,239][00231] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-08-05 17:44:21,246][00231] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-08-05 17:44:21,246][00231] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-08-05 17:44:21,280][00231] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-05 17:44:21,285][00231] RunningMeanStd input shape: (3, 72, 128)
[2023-08-05 17:44:21,287][00231] RunningMeanStd input shape: (1,)
[2023-08-05 17:44:21,304][00231] ConvEncoder: input_channels=3
[2023-08-05 17:44:21,426][00231] Conv encoder output size: 512
[2023-08-05 17:44:21,428][00231] Policy head output size: 512
[2023-08-05 17:44:23,779][00231] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-08-05 17:44:25,066][00231] Avg episode rewards: #0: 0.830, true rewards: #0: 0.830
[2023-08-05 17:44:25,069][00231] Avg episode reward: 0.830, avg true_objective: 0.830
[2023-08-05 17:44:25,192][00231] Avg episode rewards: #0: 0.735, true rewards: #0: 0.735
[2023-08-05 17:44:25,194][00231] Avg episode reward: 0.735, avg true_objective: 0.735
[2023-08-05 17:44:25,289][00231] Avg episode rewards: #0: 0.743, true rewards: #0: 0.743
[2023-08-05 17:44:25,294][00231] Avg episode reward: 0.743, avg true_objective: 0.743
[2023-08-05 17:44:25,362][00231] Num frames 100...
[2023-08-05 17:44:25,427][00231] Avg episode rewards: #0: 0.718, true rewards: #0: 0.718
[2023-08-05 17:44:25,432][00231] Avg episode reward: 0.718, avg true_objective: 0.718
[2023-08-05 17:44:25,506][00231] Avg episode rewards: #0: 0.764, true rewards: #0: 0.764
[2023-08-05 17:44:25,510][00231] Avg episode reward: 0.764, avg true_objective: 0.764
[2023-08-05 17:44:25,582][00231] Avg episode rewards: #0: 0.795, true rewards: #0: 0.795
[2023-08-05 17:44:25,586][00231] Avg episode reward: 0.795, avg true_objective: 0.795
[2023-08-05 17:44:25,718][00231] Avg episode rewards: #0: 0.773, true rewards: #0: 0.773
[2023-08-05 17:44:25,720][00231] Avg episode reward: 0.773, avg true_objective: 0.773
[2023-08-05 17:44:25,838][00231] Avg episode rewards: #0: 0.761, true rewards: #0: 0.761
[2023-08-05 17:44:25,842][00231] Avg episode reward: 0.761, avg true_objective: 0.761
[2023-08-05 17:44:25,954][00231] Avg episode rewards: #0: 0.756, true rewards: #0: 0.756
[2023-08-05 17:44:25,958][00231] Avg episode reward: 0.756, avg true_objective: 0.756
[2023-08-05 17:44:25,964][00231] Num frames 200...
[2023-08-05 17:44:26,093][00231] Avg episode rewards: #0: 0.745, true rewards: #0: 0.745
[2023-08-05 17:44:26,095][00231] Avg episode reward: 0.745, avg true_objective: 0.745
[2023-08-05 17:44:28,732][00231] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-08-05 17:45:33,117][00231] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-08-05 17:45:33,118][00231] Overriding arg 'num_workers' with value 1 passed from command line
[2023-08-05 17:45:33,121][00231] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-08-05 17:45:33,123][00231] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-08-05 17:45:33,125][00231] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-08-05 17:45:33,127][00231] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-08-05 17:45:33,129][00231] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-08-05 17:45:33,130][00231] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-08-05 17:45:33,131][00231] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-08-05 17:45:33,133][00231] Adding new argument 'hf_repository'='rzambrano/rl_course_vizdoom_basic' that is not in the saved config file!
[2023-08-05 17:45:33,134][00231] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-08-05 17:45:33,135][00231] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-08-05 17:45:33,136][00231] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-08-05 17:45:33,137][00231] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-08-05 17:45:33,138][00231] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-08-05 17:45:33,183][00231] RunningMeanStd input shape: (3, 72, 128)
[2023-08-05 17:45:33,187][00231] RunningMeanStd input shape: (1,)
[2023-08-05 17:45:33,202][00231] ConvEncoder: input_channels=3
[2023-08-05 17:45:33,243][00231] Conv encoder output size: 512
[2023-08-05 17:45:33,245][00231] Policy head output size: 512
[2023-08-05 17:45:33,264][00231] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-08-05 17:45:33,642][00231] Avg episode rewards: #0: 0.910, true rewards: #0: 0.910
[2023-08-05 17:45:33,644][00231] Avg episode reward: 0.910, avg true_objective: 0.910
[2023-08-05 17:45:33,735][00231] Avg episode rewards: #0: 0.810, true rewards: #0: 0.810
[2023-08-05 17:45:33,736][00231] Avg episode reward: 0.810, avg true_objective: 0.810
[2023-08-05 17:45:33,830][00231] Avg episode rewards: #0: 0.770, true rewards: #0: 0.770
[2023-08-05 17:45:33,832][00231] Avg episode reward: 0.770, avg true_objective: 0.770
[2023-08-05 17:45:33,928][00231] Avg episode rewards: #0: 0.748, true rewards: #0: 0.748
[2023-08-05 17:45:33,930][00231] Avg episode reward: 0.748, avg true_objective: 0.748
[2023-08-05 17:45:33,948][00231] Num frames 100...
[2023-08-05 17:45:34,017][00231] Avg episode rewards: #0: 0.750, true rewards: #0: 0.750
[2023-08-05 17:45:34,019][00231] Avg episode reward: 0.750, avg true_objective: 0.750
[2023-08-05 17:45:34,120][00231] Avg episode rewards: #0: 0.732, true rewards: #0: 0.732
[2023-08-05 17:45:34,121][00231] Avg episode reward: 0.732, avg true_objective: 0.732
[2023-08-05 17:45:34,215][00231] Avg episode rewards: #0: 0.724, true rewards: #0: 0.724
[2023-08-05 17:45:34,218][00231] Avg episode reward: 0.724, avg true_objective: 0.724
[2023-08-05 17:45:34,305][00231] Avg episode rewards: #0: 0.729, true rewards: #0: 0.729
[2023-08-05 17:45:34,307][00231] Avg episode reward: 0.729, avg true_objective: 0.729
[2023-08-05 17:45:34,373][00231] Avg episode rewards: #0: 0.753, true rewards: #0: 0.753
[2023-08-05 17:45:34,374][00231] Avg episode reward: 0.753, avg true_objective: 0.753
[2023-08-05 17:45:34,384][00231] Num frames 200...
[2023-08-05 17:45:34,468][00231] Avg episode rewards: #0: 0.753, true rewards: #0: 0.753
[2023-08-05 17:45:34,469][00231] Avg episode reward: 0.753, avg true_objective: 0.753
[2023-08-05 17:45:35,833][00231] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
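The training summary above reports `Collected {0: 4005888}, FPS: 2313.8` alongside `main_loop: 1731.2658` in the runner profile. As a quick sanity check (not part of the run itself), the FPS figure is simply the total environment frames collected divided by the runner's main-loop wall-clock time:

```python
# Reproduce the throughput figure from the log's own numbers.
frames_collected = 4_005_888   # from "Collected {0: 4005888}"
main_loop_seconds = 1731.2658  # from "main_loop: 1731.2658"

fps = frames_collected / main_loop_seconds
print(f"{fps:.1f}")  # matches the logged "FPS: 2313.8"
```

The two independently logged values are consistent, which confirms the FPS metric is measured over the whole run rather than a recent window.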