anggaarash committed · Commit 5034da1 · Parent(s): 7c3af56

Update README.md
---
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

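The resume command above expects the YAML configuration file used for the original run. As a sketch of what that file typically looks like for this environment (the hyperparameter values below are illustrative assumptions, not the settings actually used to train this model):

```yaml
# Illustrative SoccerTwos.yaml — values are assumptions, not this model's actual settings
behaviors:
  SoccerTwos:
    trainer_type: poca          # the POCA trainer used by this model
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 0.0003
      learning_rate_schedule: constant
    network_settings:
      normalize: false
      hidden_units: 512
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 5000000
    time_horizon: 1000
    self_play:                  # SoccerTwos is adversarial, so self-play is configured
      save_steps: 50000
      team_change: 200000
      swap_steps: 2000
      window: 10
      play_against_latest_model_ratio: 0.5
      initial_elo: 1200.0
```

When resuming, pass the same `--run-id` as the original run so ML-Agents finds the existing checkpoints.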
### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: anggaarash/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👍
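To use the model locally instead, you first need the model files from the Hub. A minimal sketch using the Hugging Face CLI (the repo id is the model_id from step 2 above; the local directory name is an arbitrary choice):

```shell
# Install the Hugging Face Hub CLI, then fetch this repo's files
pip install -U "huggingface_hub[cli]"
huggingface-cli download anggaarash/poca-SoccerTwos --local-dir ./poca-SoccerTwos
```

The downloaded `.onnx` file can then be assigned to the agent's Behavior Parameters in the Unity Editor.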