Initial commit
- README.md +86 -0
- dqn-SpaceInvadersNoFrameskip-v4.zip +1 -1
- dqn-SpaceInvadersNoFrameskip-v4/data +8 -8
- replay.mp4 +3 -0
- results.json +1 -1
README.md
ADDED
@@ -0,0 +1,86 @@
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 551.50 +/- 216.45
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```bash
# Download the model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jaredoong -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jaredoong -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

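If you prefer to work with the checkpoint directly in Python rather than through the RL Zoo CLI, it can be loaded with SB3 itself. This is a minimal sketch under stated assumptions: the repo id `jaredoong/dqn-SpaceInvadersNoFrameskip-v4` is inferred from the commands above, the `huggingface_sb3` helper is an extra dependency not mentioned by this card, and the preprocessing mirrors the `env_wrapper`/`frame_stack` hyperparameters listed below.

```python
# Minimal sketch: load this checkpoint directly with SB3 (no RL Zoo CLI).
# Assumes: pip install stable-baselines3 huggingface_sb3 "gymnasium[atari]"
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Repo id inferred from the -orga/--env flags above, not stated verbatim in this card.
checkpoint = load_from_hub(
    repo_id="jaredoong/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Recreate the training-time preprocessing: AtariWrapper plus 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

model = DQN.load(checkpoint)

obs = env.reset()
for _ in range(1000):
    # results.json reports a non-deterministic evaluation, hence deterministic=False.
    action, _states = model.predict(obs, deterministic=False)
    obs, rewards, dones, infos = env.step(action)
```
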
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate a video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jaredoong
```

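The `push_to_hub` step is also what produces the evaluation artifacts in this commit (`results.json` and `replay.mp4`). The reported score of 551.50 +/- 216.45 comes from a 10-episode, non-deterministic evaluation; the sketch below shows the equivalent call with SB3's `evaluate_policy`, using a hypothetical local checkpoint path.

```python
# Sketch of the kind of evaluation behind results.json:
# 10 episodes, deterministic=False (matching "is_deterministic": false).
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# Hypothetical path: point this at wherever load_from_hub saved the zip.
model = DQN.load("logs/dqn-SpaceInvadersNoFrameskip-v4.zip")

mean_reward, std_reward = evaluate_policy(
    model, env, n_eval_episodes=10, deterministic=False
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
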
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

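For orientation, here is roughly how these zoo settings map onto a plain SB3 call. This is a sketch, not the zoo's actual invocation: `env_wrapper`, `frame_stack`, and `normalize` are consumed by the zoo's environment-building code rather than passed to `DQN`.

```python
# Rough mapping of the zoo hyperparameters above onto plain SB3.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)  # applies AtariWrapper (env_wrapper)
env = VecFrameStack(env, n_stack=4)                            # frame_stack: 4

model = DQN(
    "CnnPolicy",
    env,
    batch_size=32,
    buffer_size=100_000,
    exploration_final_eps=0.01,
    exploration_fraction=0.1,
    gradient_steps=1,
    learning_rate=1e-4,
    learning_starts=100_000,
    target_update_interval=1000,
    train_freq=4,
    optimize_memory_usage=False,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)  # n_timesteps
```
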
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
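These keyword arguments are forwarded to the environment constructor when the evaluation env is built; `render_mode='rgb_array'` makes rendering return frames as arrays instead of opening a window, which is how `replay.mp4` can be recorded headlessly. A minimal illustration, assuming a Gymnasium install with the Atari extras:

```python
import gymnasium as gym

# render_mode="rgb_array" makes env.render() return an RGB numpy array
# instead of opening a display window, so frames can be written to video.
env = gym.make("SpaceInvadersNoFrameskip-v4", render_mode="rgb_array")
obs, info = env.reset()
frame = env.render()  # raw screen as an array, e.g. shape (210, 160, 3)
```
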
dqn-SpaceInvadersNoFrameskip-v4.zip
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:859428dd7bd252d70ff985be72b2ae9dacb9eb9a07259cfeef51373fa9d568d2
 size 27217886
dqn-SpaceInvadersNoFrameskip-v4/data
CHANGED
@@ -4,9 +4,9 @@
 ":serialized:": "gAWVMAAAAAAAAACMHnN0YWJsZV9iYXNlbGluZXMzLmRxbi5wb2xpY2llc5SMCUNublBvbGljeZSTlC4=",
 "__module__": "stable_baselines3.dqn.policies",
 "__doc__": "\n Policy class for DQN when using images as input.\n\n :param observation_space: Observation space\n :param action_space: Action space\n :param lr_schedule: Learning rate schedule (could be constant)\n :param net_arch: The specification of the policy and value networks.\n :param activation_fn: Activation function\n :param features_extractor_class: Features extractor to use.\n :param normalize_images: Whether to normalize images or not,\n dividing by 255.0 (True by default)\n :param optimizer_class: The optimizer to use,\n ``th.optim.Adam`` by default\n :param optimizer_kwargs: Additional keyword arguments,\n excluding the learning rate, to pass to the optimizer\n ",
-"__init__": "<function CnnPolicy.__init__ at
+"__init__": "<function CnnPolicy.__init__ at 0x0000018F265655A0>",
 "__abstractmethods__": "frozenset()",
-"_abc_impl": "<_abc._abc_data object at
+"_abc_impl": "<_abc._abc_data object at 0x0000018F26568980>"
 },
 "verbose": 1,
 "policy_kwargs": {},
@@ -83,13 +83,13 @@
 ":serialized:": "gAWVNQAAAAAAAACMIHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5idWZmZXJzlIwMUmVwbGF5QnVmZmVylJOULg==",
 "__module__": "stable_baselines3.common.buffers",
 "__doc__": "\n Replay buffer used in off-policy algorithms like SAC/TD3.\n\n :param buffer_size: Max number of element in the buffer\n :param observation_space: Observation space\n :param action_space: Action space\n :param device: PyTorch device\n :param n_envs: Number of parallel environments\n :param optimize_memory_usage: Enable a memory efficient variant\n of the replay buffer which reduces by almost a factor two the memory used,\n at a cost of more complexity.\n See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195\n and https://github.com/DLR-RM/stable-baselines3/pull/28#issuecomment-637559274\n Cannot be used in combination with handle_timeout_termination.\n :param handle_timeout_termination: Handle timeout termination (due to timelimit)\n separately and treat the task as infinite horizon task.\n https://github.com/DLR-RM/stable-baselines3/issues/284\n ",
-"__init__": "<function ReplayBuffer.__init__ at
-"add": "<function ReplayBuffer.add at
-"sample": "<function ReplayBuffer.sample at
-"_get_samples": "<function ReplayBuffer._get_samples at
-"_maybe_cast_dtype": "<staticmethod(<function ReplayBuffer._maybe_cast_dtype at
+"__init__": "<function ReplayBuffer.__init__ at 0x0000018F26545900>",
+"add": "<function ReplayBuffer.add at 0x0000018F26545990>",
+"sample": "<function ReplayBuffer.sample at 0x0000018F26545A20>",
+"_get_samples": "<function ReplayBuffer._get_samples at 0x0000018F26545AB0>",
+"_maybe_cast_dtype": "<staticmethod(<function ReplayBuffer._maybe_cast_dtype at 0x0000018F26545B40>)>",
 "__abstractmethods__": "frozenset()",
-"_abc_impl": "<_abc._abc_data object at
+"_abc_impl": "<_abc._abc_data object at 0x0000018F264CFF40>"
 },
 "replay_buffer_kwargs": {},
 "train_freq": {
replay.mp4
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:22c1e7a2795f9e7fc1421f5e420b94f9a819962076a415029eec2c9998d9b07d
size 219742
results.json
CHANGED
@@ -1 +1 @@
-{"mean_reward": 551.5, "std_reward": 216.44918572265408, "is_deterministic": false, "n_eval_episodes": 10, "eval_datetime": "2023-08-27T11:
+{"mean_reward": 551.5, "std_reward": 216.44918572265408, "is_deterministic": false, "n_eval_episodes": 10, "eval_datetime": "2023-08-27T11:55:49.744281"}