micdestefano committed
Commit bf0bcc1 · 1 Parent(s): e87beef

Update README.md

Files changed (1): README.md (+43 −42)
README.md CHANGED
  verified: false
  ---
25
# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.
The agent was trained with a custom PPO implementation inspired by
[a tutorial by Costa Huang](https://www.youtube.com/watch?v=MEt6rrxH8W4).

This work relates to Unit 8, part 1 of the Hugging Face Deep RL course. I had to slightly modify
some pieces of the provided notebook because I used Gymnasium rather than the older Gym package.
The PPO implementation itself is available on GitHub:
[https://github.com/micdestefano/micppo](https://github.com/micdestefano/micppo).

# Hyperparameters
```python
{
    'exp_name': 'micppo',
    'gym_id': 'LunarLander-v2',
    'learning_rate': 0.00025,
    'min_learning_rate_ratio': 0.01,
    'seed': 1,
    'total_timesteps': 10000000,
    'torch_not_deterministic': False,
    'no_cuda': False,
    'capture_video': True,
    'hidden_size': 256,
    'num_hidden_layers': 3,
    'activation': 'leaky-relu',
    'num_checkpoints': 4,
    'num_envs': 8,
    'num_steps': 2048,
    'no_lr_annealing': False,
    'no_gae': False,
    'gamma': 0.99,
    'gae_lambda': 0.95,
    'num_minibatches': 16,
    'num_update_epochs': 32,
    'no_advantage_normalization': False,
    'clip_coef': 0.2,
    'no_value_loss_clip': False,
    'ent_coef': 0.01,
    'vf_coef': 0.5,
    'max_grad_norm': 0.5,
    'target_kl': None,
    'batch_size': 16384,
    'minibatch_size': 1024,
}
```
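The derived quantities at the end of the dictionary follow from the primary settings via the usual vectorized-PPO bookkeeping (the convention used in Costa Huang's tutorial, and assumed here to match micppo):

```python
# Primary settings copied from the hyperparameter dictionary above.
num_envs = 8
num_steps = 2048
num_minibatches = 16
total_timesteps = 10_000_000

# One rollout collects num_steps transitions from each parallel env.
batch_size = num_envs * num_steps               # 8 * 2048 = 16384
# Each update epoch shuffles the batch into equally sized minibatches.
minibatch_size = batch_size // num_minibatches  # 16384 // 16 = 1024
# Total number of policy updates over the whole run (integer division).
num_updates = total_timesteps // batch_size     # 10_000_000 // 16384 = 610

print(batch_size, minibatch_size, num_updates)  # → 16384 1024 610
```

The first two values match the reported `'batch_size': 16384` and `'minibatch_size': 1024` exactly.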