ppo-LunarLander-v2 / results.json
Upload PPO LunarLander-v2 Trained Model
b7ab946
{"mean_reward": 261.50280280000004, "std_reward": 18.53201592257406, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2023-09-04T09:14:15.832513"}