Ean Yang commited on
Commit
69a5bd9
1 Parent(s): c35942a

初始化部署

Browse files
This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50) hide show
  1. .gitignore +166 -0
  2. .pre-commit-config.yaml +86 -0
  3. CONTRIBUTING.md +96 -0
  4. LICENSE +661 -0
  5. app.py +161 -0
  6. docker/Dockerfile +85 -0
  7. docker/Dockerfile-arm64 +51 -0
  8. docker/Dockerfile-conda +40 -0
  9. docker/Dockerfile-cpu +57 -0
  10. docker/Dockerfile-jetson +50 -0
  11. docker/Dockerfile-python +54 -0
  12. docker/Dockerfile-runner +38 -0
  13. docs/README.md +140 -0
  14. docs/build_docs.py +141 -0
  15. docs/build_reference.py +130 -0
  16. docs/coming_soon_template.md +34 -0
  17. docs/en/CNAME +1 -0
  18. docs/en/guides/azureml-quickstart.md +152 -0
  19. docs/en/guides/conda-quickstart.md +132 -0
  20. docs/en/guides/coral-edge-tpu-on-raspberry-pi.md +140 -0
  21. docs/en/guides/distance-calculation.md +107 -0
  22. docs/en/guides/docker-quickstart.md +119 -0
  23. docs/en/guides/heatmaps.md +301 -0
  24. docs/en/guides/hyperparameter-tuning.md +206 -0
  25. docs/en/guides/index.md +65 -0
  26. docs/en/guides/instance-segmentation-and-tracking.md +140 -0
  27. docs/en/guides/isolating-segmentation-objects.md +325 -0
  28. docs/en/guides/kfold-cross-validation.md +278 -0
  29. docs/en/guides/model-deployment-options.md +305 -0
  30. docs/en/guides/object-blurring.md +91 -0
  31. docs/en/guides/object-counting.md +246 -0
  32. docs/en/guides/object-cropping.md +102 -0
  33. docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md +69 -0
  34. docs/en/guides/raspberry-pi.md +196 -0
  35. docs/en/guides/region-counting.md +86 -0
  36. docs/en/guides/sahi-tiled-inference.md +185 -0
  37. docs/en/guides/security-alarm-system.md +166 -0
  38. docs/en/guides/speed-estimation.md +110 -0
  39. docs/en/guides/triton-inference-server.md +137 -0
  40. docs/en/guides/view-results-in-terminal.md +146 -0
  41. docs/en/guides/vision-eye.md +177 -0
  42. docs/en/guides/workouts-monitoring.md +148 -0
  43. docs/en/guides/yolo-common-issues.md +276 -0
  44. docs/en/guides/yolo-performance-metrics.md +176 -0
  45. docs/en/guides/yolo-thread-safe-inference.md +108 -0
  46. docs/en/help/CI.md +61 -0
  47. docs/en/help/CLA.md +28 -0
  48. docs/en/help/FAQ.md +39 -0
  49. docs/en/help/code_of_conduct.md +85 -0
  50. docs/en/help/contributing.md +131 -0
.gitignore ADDED
@@ -0,0 +1,166 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Byte-compiled / optimized / DLL files
2
+ __pycache__/
3
+ *.py[cod]
4
+ *$py.class
5
+
6
+ # C extensions
7
+ *.so
8
+
9
+ # Distribution / packaging
10
+ .Python
11
+ build/
12
+ develop-eggs/
13
+ dist/
14
+ downloads/
15
+ eggs/
16
+ .eggs/
17
+ lib/
18
+ lib64/
19
+ parts/
20
+ sdist/
21
+ var/
22
+ wheels/
23
+ pip-wheel-metadata/
24
+ share/python-wheels/
25
+ *.egg-info/
26
+ .installed.cfg
27
+ *.egg
28
+ MANIFEST
29
+
30
+ # PyInstaller
31
+ # Usually these files are written by a python script from a template
32
+ # before PyInstaller builds the exe, so as to inject date/other info into it.
33
+ *.manifest
34
+ *.spec
35
+
36
+ # Installer logs
37
+ pip-log.txt
38
+ pip-delete-this-directory.txt
39
+
40
+ # Unit test / coverage reports
41
+ htmlcov/
42
+ .tox/
43
+ .nox/
44
+ .coverage
45
+ .coverage.*
46
+ .cache
47
+ nosetests.xml
48
+ coverage.xml
49
+ *.cover
50
+ *.py,cover
51
+ .hypothesis/
52
+ .pytest_cache/
53
+ mlruns/
54
+
55
+ # Translations
56
+ *.mo
57
+ *.pot
58
+
59
+ # Django stuff:
60
+ *.log
61
+ local_settings.py
62
+ db.sqlite3
63
+ db.sqlite3-journal
64
+
65
+ # Flask stuff:
66
+ instance/
67
+ .webassets-cache
68
+
69
+ # Scrapy stuff:
70
+ .scrapy
71
+
72
+ # Sphinx documentation
73
+ docs/_build/
74
+
75
+ # PyBuilder
76
+ target/
77
+
78
+ # Jupyter Notebook
79
+ .ipynb_checkpoints
80
+
81
+ # IPython
82
+ profile_default/
83
+ ipython_config.py
84
+
85
+ # Profiling
86
+ *.pclprof
87
+
88
+ # pyenv
89
+ .python-version
90
+
91
+ # pipenv
92
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
93
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
94
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
95
+ # install all needed dependencies.
96
+ #Pipfile.lock
97
+
98
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
99
+ __pypackages__/
100
+
101
+ # Celery stuff
102
+ celerybeat-schedule
103
+ celerybeat.pid
104
+
105
+ # SageMath parsed files
106
+ *.sage.py
107
+
108
+ # Environments
109
+ .env
110
+ .venv
111
+ .idea
112
+ env/
113
+ venv/
114
+ ENV/
115
+ env.bak/
116
+ venv.bak/
117
+
118
+ # Spyder project settings
119
+ .spyderproject
120
+ .spyproject
121
+
122
+ # VSCode project settings
123
+ .vscode/
124
+
125
+ # Rope project settings
126
+ .ropeproject
127
+
128
+ # mkdocs documentation
129
+ /site
130
+ mkdocs_github_authors.yaml
131
+
132
+ # mypy
133
+ .mypy_cache/
134
+ .dmypy.json
135
+ dmypy.json
136
+
137
+ # Pyre type checker
138
+ .pyre/
139
+
140
+ # datasets and projects
141
+ datasets/
142
+ runs/
143
+ wandb/
144
+ tests/
145
+ .DS_Store
146
+
147
+ # Neural Network weights -----------------------------------------------------------------------------------------------
148
+ weights/
149
+ *.weights
150
+ *.pt
151
+ *.pb
152
+ *.onnx
153
+ *.engine
154
+ *.mlmodel
155
+ *.mlpackage
156
+ *.torchscript
157
+ *.tflite
158
+ *.h5
159
+ *_saved_model/
160
+ *_web_model/
161
+ *_openvino_model/
162
+ *_paddle_model/
163
+ pnnx*
164
+
165
+ # Autogenerated files for tests
166
+ /ultralytics/assets/
.pre-commit-config.yaml ADDED
@@ -0,0 +1,86 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Pre-commit hooks. For more information see https://github.com/pre-commit/pre-commit-hooks/blob/main/README.md
3
+ # Optionally remove from local hooks with 'rm .git/hooks/pre-commit'
4
+
5
+ # Define bot property if installed via https://github.com/marketplace/pre-commit-ci
6
+ ci:
7
+ autofix_prs: true
8
+ autoupdate_commit_msg: "[pre-commit.ci] pre-commit suggestions"
9
+ autoupdate_schedule: monthly
10
+ submodules: true
11
+
12
+ # Exclude directories (optional)
13
+ # exclude: 'docs/'
14
+
15
+ # Define repos to run
16
+ repos:
17
+ - repo: https://github.com/pre-commit/pre-commit-hooks
18
+ rev: v4.5.0
19
+ hooks:
20
+ - id: end-of-file-fixer
21
+ - id: trailing-whitespace
22
+ - id: check-case-conflict
23
+ # - id: check-yaml
24
+ - id: check-docstring-first
25
+ - id: detect-private-key
26
+
27
+ - repo: https://github.com/asottile/pyupgrade
28
+ rev: v3.15.0
29
+ hooks:
30
+ - id: pyupgrade
31
+ name: Upgrade code
32
+
33
+ - repo: https://github.com/astral-sh/ruff-pre-commit
34
+ rev: v0.1.11
35
+ hooks:
36
+ - id: ruff
37
+ args: [--fix]
38
+
39
+ - repo: https://github.com/executablebooks/mdformat
40
+ rev: 0.7.17
41
+ hooks:
42
+ - id: mdformat
43
+ name: MD formatting
44
+ additional_dependencies:
45
+ - mdformat-gfm
46
+ - mdformat-frontmatter
47
+ - mdformat-mkdocs
48
+ args:
49
+ - --wrap=no
50
+ - --number
51
+ exclude: 'docs/.*\.md'
52
+ # exclude: "README.md|README.zh-CN.md|CONTRIBUTING.md"
53
+
54
+ - repo: https://github.com/codespell-project/codespell
55
+ rev: v2.2.6
56
+ hooks:
57
+ - id: codespell
58
+ exclude: "docs/de|docs/fr|docs/pt|docs/es|docs/mkdocs_de.yml"
59
+ args:
60
+ - --ignore-words-list=crate,nd,ned,strack,dota,ane,segway,fo,gool,winn,commend,bloc,nam,afterall
61
+
62
+ - repo: https://github.com/hadialqattan/pycln
63
+ rev: v2.4.0
64
+ hooks:
65
+ - id: pycln
66
+ args: [--all]
67
+ #
68
+ # - repo: https://github.com/PyCQA/docformatter
69
+ # rev: v1.7.5
70
+ # hooks:
71
+ # - id: docformatter
72
+
73
+ # - repo: https://github.com/asottile/yesqa
74
+ # rev: v1.4.0
75
+ # hooks:
76
+ # - id: yesqa
77
+
78
+ # - repo: https://github.com/asottile/dead
79
+ # rev: v1.5.0
80
+ # hooks:
81
+ # - id: dead
82
+
83
+ # - repo: https://github.com/ultralytics/pre-commit
84
+ # rev: bd60a414f80a53fb8f593d3bfed4701fc47e4b23
85
+ # hooks:
86
+ # - id: capitalize-comments
CONTRIBUTING.md ADDED
@@ -0,0 +1,96 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Contributing to YOLOv8 🚀
2
+
3
+ We love your input! We want to make contributing to YOLOv8 as easy and transparent as possible, whether it's:
4
+
5
+ - Reporting a bug
6
+ - Discussing the current state of the code
7
+ - Submitting a fix
8
+ - Proposing a new feature
9
+ - Becoming a maintainer
10
+
11
+ YOLOv8 works so well due to our combined community effort, and for every small improvement you contribute you will be helping push the frontiers of what's possible in AI 😃!
12
+
13
+ ## Submitting a Pull Request (PR) 🛠️
14
+
15
+ Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:
16
+
17
+ ### 1. Select File to Update
18
+
19
+ Select `requirements.txt` to update by clicking on it in GitHub.
20
+
21
+ <p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>
22
+
23
+ ### 2. Click 'Edit this file'
24
+
25
+ Button is in top-right corner.
26
+
27
+ <p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>
28
+
29
+ ### 3. Make Changes
30
+
31
+ Change `matplotlib` version from `3.2.2` to `3.3`.
32
+
33
+ <p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>
34
+
35
+ ### 4. Preview Changes and Submit PR
36
+
37
+ Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose changes** button. All done, your PR is now submitted to YOLOv8 for review and approval 😃!
38
+
39
+ <p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>
40
+
41
+ ### PR recommendations
42
+
43
+ To allow your work to be integrated as seamlessly as possible, we advise you to:
44
+
45
+ - ✅ Verify your PR is **up-to-date** with `ultralytics/ultralytics` `main` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge main` locally.
46
+
47
+ <p align="center"><img width="751" alt="PR recommendation 1" src="https://user-images.githubusercontent.com/26833433/187295893-50ed9f44-b2c9-4138-a614-de69bd1753d7.png"></p>
48
+
49
+ - ✅ Verify all YOLOv8 Continuous Integration (CI) **checks are passing**.
50
+
51
+ <p align="center"><img width="751" alt="PR recommendation 2" src="https://user-images.githubusercontent.com/26833433/187296922-545c5498-f64a-4d8c-8300-5fa764360da6.png"></p>
52
+
53
+ - ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
54
+
55
+ ### Docstrings
56
+
57
+ Not all functions or classes require docstrings but when they do, we follow [google-style docstrings format](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings). Here is an example:
58
+
59
+ ```python
60
+ """
61
+ What the function does. Performs NMS on given detection predictions.
62
+
63
+ Args:
64
+ arg1: The description of the 1st argument
65
+ arg2: The description of the 2nd argument
66
+
67
+ Returns:
68
+ What the function returns. Empty if nothing is returned.
69
+
70
+ Raises:
71
+ Exception Class: When and why this exception can be raised by the function.
72
+ """
73
+ ```
74
+
75
+ ## Submitting a Bug Report 🐛
76
+
77
+ If you spot a problem with YOLOv8 please submit a Bug Report!
78
+
79
+ For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few short guidelines below to help users provide what we need in order to get started.
80
+
81
+ When asking a question, people will be better able to provide help if you provide **code** that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/). Your code that reproduces the problem should be:
82
+
83
+ - ✅ **Minimal** – Use as little code as possible that still produces the same problem
84
+ - ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
85
+ - ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem
86
+
87
+ In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code should be:
88
+
89
+ - ✅ **Current** – Verify that your code is up-to-date with current GitHub [main](https://github.com/ultralytics/ultralytics/tree/main) branch, and if necessary `git pull` or `git clone` a new copy to ensure your problem has not already been resolved by previous commits.
90
+ - ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
91
+
92
+ If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **Bug Report** [template](https://github.com/ultralytics/ultralytics/issues/new/choose) and providing a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) to help us better understand and diagnose your problem.
93
+
94
+ ## License
95
+
96
+ By contributing, you agree that your contributions will be licensed under the [AGPL-3.0 license](https://choosealicense.com/licenses/agpl-3.0/)
LICENSE ADDED
@@ -0,0 +1,661 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ GNU AFFERO GENERAL PUBLIC LICENSE
2
+ Version 3, 19 November 2007
3
+
4
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5
+ Everyone is permitted to copy and distribute verbatim copies
6
+ of this license document, but changing it is not allowed.
7
+
8
+ Preamble
9
+
10
+ The GNU Affero General Public License is a free, copyleft license for
11
+ software and other kinds of works, specifically designed to ensure
12
+ cooperation with the community in the case of network server software.
13
+
14
+ The licenses for most software and other practical works are designed
15
+ to take away your freedom to share and change the works. By contrast,
16
+ our General Public Licenses are intended to guarantee your freedom to
17
+ share and change all versions of a program--to make sure it remains free
18
+ software for all its users.
19
+
20
+ When we speak of free software, we are referring to freedom, not
21
+ price. Our General Public Licenses are designed to make sure that you
22
+ have the freedom to distribute copies of free software (and charge for
23
+ them if you wish), that you receive source code or can get it if you
24
+ want it, that you can change the software or use pieces of it in new
25
+ free programs, and that you know you can do these things.
26
+
27
+ Developers that use our General Public Licenses protect your rights
28
+ with two steps: (1) assert copyright on the software, and (2) offer
29
+ you this License which gives you legal permission to copy, distribute
30
+ and/or modify the software.
31
+
32
+ A secondary benefit of defending all users' freedom is that
33
+ improvements made in alternate versions of the program, if they
34
+ receive widespread use, become available for other developers to
35
+ incorporate. Many developers of free software are heartened and
36
+ encouraged by the resulting cooperation. However, in the case of
37
+ software used on network servers, this result may fail to come about.
38
+ The GNU General Public License permits making a modified version and
39
+ letting the public access it on a server without ever releasing its
40
+ source code to the public.
41
+
42
+ The GNU Affero General Public License is designed specifically to
43
+ ensure that, in such cases, the modified source code becomes available
44
+ to the community. It requires the operator of a network server to
45
+ provide the source code of the modified version running there to the
46
+ users of that server. Therefore, public use of a modified version, on
47
+ a publicly accessible server, gives the public access to the source
48
+ code of the modified version.
49
+
50
+ An older license, called the Affero General Public License and
51
+ published by Affero, was designed to accomplish similar goals. This is
52
+ a different license, not a version of the Affero GPL, but Affero has
53
+ released a new version of the Affero GPL which permits relicensing under
54
+ this license.
55
+
56
+ The precise terms and conditions for copying, distribution and
57
+ modification follow.
58
+
59
+ TERMS AND CONDITIONS
60
+
61
+ 0. Definitions.
62
+
63
+ "This License" refers to version 3 of the GNU Affero General Public License.
64
+
65
+ "Copyright" also means copyright-like laws that apply to other kinds of
66
+ works, such as semiconductor masks.
67
+
68
+ "The Program" refers to any copyrightable work licensed under this
69
+ License. Each licensee is addressed as "you". "Licensees" and
70
+ "recipients" may be individuals or organizations.
71
+
72
+ To "modify" a work means to copy from or adapt all or part of the work
73
+ in a fashion requiring copyright permission, other than the making of an
74
+ exact copy. The resulting work is called a "modified version" of the
75
+ earlier work or a work "based on" the earlier work.
76
+
77
+ A "covered work" means either the unmodified Program or a work based
78
+ on the Program.
79
+
80
+ To "propagate" a work means to do anything with it that, without
81
+ permission, would make you directly or secondarily liable for
82
+ infringement under applicable copyright law, except executing it on a
83
+ computer or modifying a private copy. Propagation includes copying,
84
+ distribution (with or without modification), making available to the
85
+ public, and in some countries other activities as well.
86
+
87
+ To "convey" a work means any kind of propagation that enables other
88
+ parties to make or receive copies. Mere interaction with a user through
89
+ a computer network, with no transfer of a copy, is not conveying.
90
+
91
+ An interactive user interface displays "Appropriate Legal Notices"
92
+ to the extent that it includes a convenient and prominently visible
93
+ feature that (1) displays an appropriate copyright notice, and (2)
94
+ tells the user that there is no warranty for the work (except to the
95
+ extent that warranties are provided), that licensees may convey the
96
+ work under this License, and how to view a copy of this License. If
97
+ the interface presents a list of user commands or options, such as a
98
+ menu, a prominent item in the list meets this criterion.
99
+
100
+ 1. Source Code.
101
+
102
+ The "source code" for a work means the preferred form of the work
103
+ for making modifications to it. "Object code" means any non-source
104
+ form of a work.
105
+
106
+ A "Standard Interface" means an interface that either is an official
107
+ standard defined by a recognized standards body, or, in the case of
108
+ interfaces specified for a particular programming language, one that
109
+ is widely used among developers working in that language.
110
+
111
+ The "System Libraries" of an executable work include anything, other
112
+ than the work as a whole, that (a) is included in the normal form of
113
+ packaging a Major Component, but which is not part of that Major
114
+ Component, and (b) serves only to enable use of the work with that
115
+ Major Component, or to implement a Standard Interface for which an
116
+ implementation is available to the public in source code form. A
117
+ "Major Component", in this context, means a major essential component
118
+ (kernel, window system, and so on) of the specific operating system
119
+ (if any) on which the executable work runs, or a compiler used to
120
+ produce the work, or an object code interpreter used to run it.
121
+
122
+ The "Corresponding Source" for a work in object code form means all
123
+ the source code needed to generate, install, and (for an executable
124
+ work) run the object code and to modify the work, including scripts to
125
+ control those activities. However, it does not include the work's
126
+ System Libraries, or general-purpose tools or generally available free
127
+ programs which are used unmodified in performing those activities but
128
+ which are not part of the work. For example, Corresponding Source
129
+ includes interface definition files associated with source files for
130
+ the work, and the source code for shared libraries and dynamically
131
+ linked subprograms that the work is specifically designed to require,
132
+ such as by intimate data communication or control flow between those
133
+ subprograms and other parts of the work.
134
+
135
+ The Corresponding Source need not include anything that users
136
+ can regenerate automatically from other parts of the Corresponding
137
+ Source.
138
+
139
+ The Corresponding Source for a work in source code form is that
140
+ same work.
141
+
142
+ 2. Basic Permissions.
143
+
144
+ All rights granted under this License are granted for the term of
145
+ copyright on the Program, and are irrevocable provided the stated
146
+ conditions are met. This License explicitly affirms your unlimited
147
+ permission to run the unmodified Program. The output from running a
148
+ covered work is covered by this License only if the output, given its
149
+ content, constitutes a covered work. This License acknowledges your
150
+ rights of fair use or other equivalent, as provided by copyright law.
151
+
152
+ You may make, run and propagate covered works that you do not
153
+ convey, without conditions so long as your license otherwise remains
154
+ in force. You may convey covered works to others for the sole purpose
155
+ of having them make modifications exclusively for you, or provide you
156
+ with facilities for running those works, provided that you comply with
157
+ the terms of this License in conveying all material for which you do
158
+ not control copyright. Those thus making or running the covered works
159
+ for you must do so exclusively on your behalf, under your direction
160
+ and control, on terms that prohibit them from making any copies of
161
+ your copyrighted material outside their relationship with you.
162
+
163
+ Conveying under any other circumstances is permitted solely under
164
+ the conditions stated below. Sublicensing is not allowed; section 10
165
+ makes it unnecessary.
166
+
167
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
168
+
169
+ No covered work shall be deemed part of an effective technological
170
+ measure under any applicable law fulfilling obligations under article
171
+ 11 of the WIPO copyright treaty adopted on 20 December 1996, or
172
+ similar laws prohibiting or restricting circumvention of such
173
+ measures.
174
+
175
+ When you convey a covered work, you waive any legal power to forbid
176
+ circumvention of technological measures to the extent such circumvention
177
+ is effected by exercising rights under this License with respect to
178
+ the covered work, and you disclaim any intention to limit operation or
179
+ modification of the work as a means of enforcing, against the work's
180
+ users, your or third parties' legal rights to forbid circumvention of
181
+ technological measures.
182
+
183
+ 4. Conveying Verbatim Copies.
184
+
185
+ You may convey verbatim copies of the Program's source code as you
186
+ receive it, in any medium, provided that you conspicuously and
187
+ appropriately publish on each copy an appropriate copyright notice;
188
+ keep intact all notices stating that this License and any
189
+ non-permissive terms added in accord with section 7 apply to the code;
190
+ keep intact all notices of the absence of any warranty; and give all
191
+ recipients a copy of this License along with the Program.
192
+
193
+ You may charge any price or no price for each copy that you convey,
194
+ and you may offer support or warranty protection for a fee.
195
+
196
+ 5. Conveying Modified Source Versions.
197
+
198
+ You may convey a work based on the Program, or the modifications to
199
+ produce it from the Program, in the form of source code under the
200
+ terms of section 4, provided that you also meet all of these conditions:
201
+
202
+ a) The work must carry prominent notices stating that you modified
203
+ it, and giving a relevant date.
204
+
205
+ b) The work must carry prominent notices stating that it is
206
+ released under this License and any conditions added under section
207
+ 7. This requirement modifies the requirement in section 4 to
208
+ "keep intact all notices".
209
+
210
+ c) You must license the entire work, as a whole, under this
211
+ License to anyone who comes into possession of a copy. This
212
+ License will therefore apply, along with any applicable section 7
213
+ additional terms, to the whole of the work, and all its parts,
214
+ regardless of how they are packaged. This License gives no
215
+ permission to license the work in any other way, but it does not
216
+ invalidate such permission if you have separately received it.
217
+
218
+ d) If the work has interactive user interfaces, each must display
219
+ Appropriate Legal Notices; however, if the Program has interactive
220
+ interfaces that do not display Appropriate Legal Notices, your
221
+ work need not make them do so.
222
+
223
+ A compilation of a covered work with other separate and independent
224
+ works, which are not by their nature extensions of the covered work,
225
+ and which are not combined with it such as to form a larger program,
226
+ in or on a volume of a storage or distribution medium, is called an
227
+ "aggregate" if the compilation and its resulting copyright are not
228
+ used to limit the access or legal rights of the compilation's users
229
+ beyond what the individual works permit. Inclusion of a covered work
230
+ in an aggregate does not cause this License to apply to the other
231
+ parts of the aggregate.
232
+
233
+ 6. Conveying Non-Source Forms.
234
+
235
+ You may convey a covered work in object code form under the terms
236
+ of sections 4 and 5, provided that you also convey the
237
+ machine-readable Corresponding Source under the terms of this License,
238
+ in one of these ways:
239
+
240
+ a) Convey the object code in, or embodied in, a physical product
241
+ (including a physical distribution medium), accompanied by the
242
+ Corresponding Source fixed on a durable physical medium
243
+ customarily used for software interchange.
244
+
245
+ b) Convey the object code in, or embodied in, a physical product
246
+ (including a physical distribution medium), accompanied by a
247
+ written offer, valid for at least three years and valid for as
248
+ long as you offer spare parts or customer support for that product
249
+ model, to give anyone who possesses the object code either (1) a
250
+ copy of the Corresponding Source for all the software in the
251
+ product that is covered by this License, on a durable physical
252
+ medium customarily used for software interchange, for a price no
253
+ more than your reasonable cost of physically performing this
254
+ conveying of source, or (2) access to copy the
255
+ Corresponding Source from a network server at no charge.
256
+
257
+ c) Convey individual copies of the object code with a copy of the
258
+ written offer to provide the Corresponding Source. This
259
+ alternative is allowed only occasionally and noncommercially, and
260
+ only if you received the object code with such an offer, in accord
261
+ with subsection 6b.
262
+
263
+ d) Convey the object code by offering access from a designated
264
+ place (gratis or for a charge), and offer equivalent access to the
265
+ Corresponding Source in the same way through the same place at no
266
+ further charge. You need not require recipients to copy the
267
+ Corresponding Source along with the object code. If the place to
268
+ copy the object code is a network server, the Corresponding Source
269
+ may be on a different server (operated by you or a third party)
270
+ that supports equivalent copying facilities, provided you maintain
271
+ clear directions next to the object code saying where to find the
272
+ Corresponding Source. Regardless of what server hosts the
273
+ Corresponding Source, you remain obligated to ensure that it is
274
+ available for as long as needed to satisfy these requirements.
275
+
276
+ e) Convey the object code using peer-to-peer transmission, provided
277
+ you inform other peers where the object code and Corresponding
278
+ Source of the work are being offered to the general public at no
279
+ charge under subsection 6d.
280
+
281
+ A separable portion of the object code, whose source code is excluded
282
+ from the Corresponding Source as a System Library, need not be
283
+ included in conveying the object code work.
284
+
285
+ A "User Product" is either (1) a "consumer product", which means any
286
+ tangible personal property which is normally used for personal, family,
287
+ or household purposes, or (2) anything designed or sold for incorporation
288
+ into a dwelling. In determining whether a product is a consumer product,
289
+ doubtful cases shall be resolved in favor of coverage. For a particular
290
+ product received by a particular user, "normally used" refers to a
291
+ typical or common use of that class of product, regardless of the status
292
+ of the particular user or of the way in which the particular user
293
+ actually uses, or expects or is expected to use, the product. A product
294
+ is a consumer product regardless of whether the product has substantial
295
+ commercial, industrial or non-consumer uses, unless such uses represent
296
+ the only significant mode of use of the product.
297
+
298
+ "Installation Information" for a User Product means any methods,
299
+ procedures, authorization keys, or other information required to install
300
+ and execute modified versions of a covered work in that User Product from
301
+ a modified version of its Corresponding Source. The information must
302
+ suffice to ensure that the continued functioning of the modified object
303
+ code is in no case prevented or interfered with solely because
304
+ modification has been made.
305
+
306
+ If you convey an object code work under this section in, or with, or
307
+ specifically for use in, a User Product, and the conveying occurs as
308
+ part of a transaction in which the right of possession and use of the
309
+ User Product is transferred to the recipient in perpetuity or for a
310
+ fixed term (regardless of how the transaction is characterized), the
311
+ Corresponding Source conveyed under this section must be accompanied
312
+ by the Installation Information. But this requirement does not apply
313
+ if neither you nor any third party retains the ability to install
314
+ modified object code on the User Product (for example, the work has
315
+ been installed in ROM).
316
+
317
+ The requirement to provide Installation Information does not include a
318
+ requirement to continue to provide support service, warranty, or updates
319
+ for a work that has been modified or installed by the recipient, or for
320
+ the User Product in which it has been modified or installed. Access to a
321
+ network may be denied when the modification itself materially and
322
+ adversely affects the operation of the network or violates the rules and
323
+ protocols for communication across the network.
324
+
325
+ Corresponding Source conveyed, and Installation Information provided,
326
+ in accord with this section must be in a format that is publicly
327
+ documented (and with an implementation available to the public in
328
+ source code form), and must require no special password or key for
329
+ unpacking, reading or copying.
330
+
331
+ 7. Additional Terms.
332
+
333
+ "Additional permissions" are terms that supplement the terms of this
334
+ License by making exceptions from one or more of its conditions.
335
+ Additional permissions that are applicable to the entire Program shall
336
+ be treated as though they were included in this License, to the extent
337
+ that they are valid under applicable law. If additional permissions
338
+ apply only to part of the Program, that part may be used separately
339
+ under those permissions, but the entire Program remains governed by
340
+ this License without regard to the additional permissions.
341
+
342
+ When you convey a copy of a covered work, you may at your option
343
+ remove any additional permissions from that copy, or from any part of
344
+ it. (Additional permissions may be written to require their own
345
+ removal in certain cases when you modify the work.) You may place
346
+ additional permissions on material, added by you to a covered work,
347
+ for which you have or can give appropriate copyright permission.
348
+
349
+ Notwithstanding any other provision of this License, for material you
350
+ add to a covered work, you may (if authorized by the copyright holders of
351
+ that material) supplement the terms of this License with terms:
352
+
353
+ a) Disclaiming warranty or limiting liability differently from the
354
+ terms of sections 15 and 16 of this License; or
355
+
356
+ b) Requiring preservation of specified reasonable legal notices or
357
+ author attributions in that material or in the Appropriate Legal
358
+ Notices displayed by works containing it; or
359
+
360
+ c) Prohibiting misrepresentation of the origin of that material, or
361
+ requiring that modified versions of such material be marked in
362
+ reasonable ways as different from the original version; or
363
+
364
+ d) Limiting the use for publicity purposes of names of licensors or
365
+ authors of the material; or
366
+
367
+ e) Declining to grant rights under trademark law for use of some
368
+ trade names, trademarks, or service marks; or
369
+
370
+ f) Requiring indemnification of licensors and authors of that
371
+ material by anyone who conveys the material (or modified versions of
372
+ it) with contractual assumptions of liability to the recipient, for
373
+ any liability that these contractual assumptions directly impose on
374
+ those licensors and authors.
375
+
376
+ All other non-permissive additional terms are considered "further
377
+ restrictions" within the meaning of section 10. If the Program as you
378
+ received it, or any part of it, contains a notice stating that it is
379
+ governed by this License along with a term that is a further
380
+ restriction, you may remove that term. If a license document contains
381
+ a further restriction but permits relicensing or conveying under this
382
+ License, you may add to a covered work material governed by the terms
383
+ of that license document, provided that the further restriction does
384
+ not survive such relicensing or conveying.
385
+
386
+ If you add terms to a covered work in accord with this section, you
387
+ must place, in the relevant source files, a statement of the
388
+ additional terms that apply to those files, or a notice indicating
389
+ where to find the applicable terms.
390
+
391
+ Additional terms, permissive or non-permissive, may be stated in the
392
+ form of a separately written license, or stated as exceptions;
393
+ the above requirements apply either way.
394
+
395
+ 8. Termination.
396
+
397
+ You may not propagate or modify a covered work except as expressly
398
+ provided under this License. Any attempt otherwise to propagate or
399
+ modify it is void, and will automatically terminate your rights under
400
+ this License (including any patent licenses granted under the third
401
+ paragraph of section 11).
402
+
403
+ However, if you cease all violation of this License, then your
404
+ license from a particular copyright holder is reinstated (a)
405
+ provisionally, unless and until the copyright holder explicitly and
406
+ finally terminates your license, and (b) permanently, if the copyright
407
+ holder fails to notify you of the violation by some reasonable means
408
+ prior to 60 days after the cessation.
409
+
410
+ Moreover, your license from a particular copyright holder is
411
+ reinstated permanently if the copyright holder notifies you of the
412
+ violation by some reasonable means, this is the first time you have
413
+ received notice of violation of this License (for any work) from that
414
+ copyright holder, and you cure the violation prior to 30 days after
415
+ your receipt of the notice.
416
+
417
+ Termination of your rights under this section does not terminate the
418
+ licenses of parties who have received copies or rights from you under
419
+ this License. If your rights have been terminated and not permanently
420
+ reinstated, you do not qualify to receive new licenses for the same
421
+ material under section 10.
422
+
423
+ 9. Acceptance Not Required for Having Copies.
424
+
425
+ You are not required to accept this License in order to receive or
426
+ run a copy of the Program. Ancillary propagation of a covered work
427
+ occurring solely as a consequence of using peer-to-peer transmission
428
+ to receive a copy likewise does not require acceptance. However,
429
+ nothing other than this License grants you permission to propagate or
430
+ modify any covered work. These actions infringe copyright if you do
431
+ not accept this License. Therefore, by modifying or propagating a
432
+ covered work, you indicate your acceptance of this License to do so.
433
+
434
+ 10. Automatic Licensing of Downstream Recipients.
435
+
436
+ Each time you convey a covered work, the recipient automatically
437
+ receives a license from the original licensors, to run, modify and
438
+ propagate that work, subject to this License. You are not responsible
439
+ for enforcing compliance by third parties with this License.
440
+
441
+ An "entity transaction" is a transaction transferring control of an
442
+ organization, or substantially all assets of one, or subdividing an
443
+ organization, or merging organizations. If propagation of a covered
444
+ work results from an entity transaction, each party to that
445
+ transaction who receives a copy of the work also receives whatever
446
+ licenses to the work the party's predecessor in interest had or could
447
+ give under the previous paragraph, plus a right to possession of the
448
+ Corresponding Source of the work from the predecessor in interest, if
449
+ the predecessor has it or can get it with reasonable efforts.
450
+
451
+ You may not impose any further restrictions on the exercise of the
452
+ rights granted or affirmed under this License. For example, you may
453
+ not impose a license fee, royalty, or other charge for exercise of
454
+ rights granted under this License, and you may not initiate litigation
455
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
456
+ any patent claim is infringed by making, using, selling, offering for
457
+ sale, or importing the Program or any portion of it.
458
+
459
+ 11. Patents.
460
+
461
+ A "contributor" is a copyright holder who authorizes use under this
462
+ License of the Program or a work on which the Program is based. The
463
+ work thus licensed is called the contributor's "contributor version".
464
+
465
+ A contributor's "essential patent claims" are all patent claims
466
+ owned or controlled by the contributor, whether already acquired or
467
+ hereafter acquired, that would be infringed by some manner, permitted
468
+ by this License, of making, using, or selling its contributor version,
469
+ but do not include claims that would be infringed only as a
470
+ consequence of further modification of the contributor version. For
471
+ purposes of this definition, "control" includes the right to grant
472
+ patent sublicenses in a manner consistent with the requirements of
473
+ this License.
474
+
475
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
476
+ patent license under the contributor's essential patent claims, to
477
+ make, use, sell, offer for sale, import and otherwise run, modify and
478
+ propagate the contents of its contributor version.
479
+
480
+ In the following three paragraphs, a "patent license" is any express
481
+ agreement or commitment, however denominated, not to enforce a patent
482
+ (such as an express permission to practice a patent or covenant not to
483
+ sue for patent infringement). To "grant" such a patent license to a
484
+ party means to make such an agreement or commitment not to enforce a
485
+ patent against the party.
486
+
487
+ If you convey a covered work, knowingly relying on a patent license,
488
+ and the Corresponding Source of the work is not available for anyone
489
+ to copy, free of charge and under the terms of this License, through a
490
+ publicly available network server or other readily accessible means,
491
+ then you must either (1) cause the Corresponding Source to be so
492
+ available, or (2) arrange to deprive yourself of the benefit of the
493
+ patent license for this particular work, or (3) arrange, in a manner
494
+ consistent with the requirements of this License, to extend the patent
495
+ license to downstream recipients. "Knowingly relying" means you have
496
+ actual knowledge that, but for the patent license, your conveying the
497
+ covered work in a country, or your recipient's use of the covered work
498
+ in a country, would infringe one or more identifiable patents in that
499
+ country that you have reason to believe are valid.
500
+
501
+ If, pursuant to or in connection with a single transaction or
502
+ arrangement, you convey, or propagate by procuring conveyance of, a
503
+ covered work, and grant a patent license to some of the parties
504
+ receiving the covered work authorizing them to use, propagate, modify
505
+ or convey a specific copy of the covered work, then the patent license
506
+ you grant is automatically extended to all recipients of the covered
507
+ work and works based on it.
508
+
509
+ A patent license is "discriminatory" if it does not include within
510
+ the scope of its coverage, prohibits the exercise of, or is
511
+ conditioned on the non-exercise of one or more of the rights that are
512
+ specifically granted under this License. You may not convey a covered
513
+ work if you are a party to an arrangement with a third party that is
514
+ in the business of distributing software, under which you make payment
515
+ to the third party based on the extent of your activity of conveying
516
+ the work, and under which the third party grants, to any of the
517
+ parties who would receive the covered work from you, a discriminatory
518
+ patent license (a) in connection with copies of the covered work
519
+ conveyed by you (or copies made from those copies), or (b) primarily
520
+ for and in connection with specific products or compilations that
521
+ contain the covered work, unless you entered into that arrangement,
522
+ or that patent license was granted, prior to 28 March 2007.
523
+
524
+ Nothing in this License shall be construed as excluding or limiting
525
+ any implied license or other defenses to infringement that may
526
+ otherwise be available to you under applicable patent law.
527
+
528
+ 12. No Surrender of Others' Freedom.
529
+
530
+ If conditions are imposed on you (whether by court order, agreement or
531
+ otherwise) that contradict the conditions of this License, they do not
532
+ excuse you from the conditions of this License. If you cannot convey a
533
+ covered work so as to satisfy simultaneously your obligations under this
534
+ License and any other pertinent obligations, then as a consequence you may
535
+ not convey it at all. For example, if you agree to terms that obligate you
536
+ to collect a royalty for further conveying from those to whom you convey
537
+ the Program, the only way you could satisfy both those terms and this
538
+ License would be to refrain entirely from conveying the Program.
539
+
540
+ 13. Remote Network Interaction; Use with the GNU General Public License.
541
+
542
+ Notwithstanding any other provision of this License, if you modify the
543
+ Program, your modified version must prominently offer all users
544
+ interacting with it remotely through a computer network (if your version
545
+ supports such interaction) an opportunity to receive the Corresponding
546
+ Source of your version by providing access to the Corresponding Source
547
+ from a network server at no charge, through some standard or customary
548
+ means of facilitating copying of software. This Corresponding Source
549
+ shall include the Corresponding Source for any work covered by version 3
550
+ of the GNU General Public License that is incorporated pursuant to the
551
+ following paragraph.
552
+
553
+ Notwithstanding any other provision of this License, you have
554
+ permission to link or combine any covered work with a work licensed
555
+ under version 3 of the GNU General Public License into a single
556
+ combined work, and to convey the resulting work. The terms of this
557
+ License will continue to apply to the part which is the covered work,
558
+ but the work with which it is combined will remain governed by version
559
+ 3 of the GNU General Public License.
560
+
561
+ 14. Revised Versions of this License.
562
+
563
+ The Free Software Foundation may publish revised and/or new versions of
564
+ the GNU Affero General Public License from time to time. Such new versions
565
+ will be similar in spirit to the present version, but may differ in detail to
566
+ address new problems or concerns.
567
+
568
+ Each version is given a distinguishing version number. If the
569
+ Program specifies that a certain numbered version of the GNU Affero General
570
+ Public License "or any later version" applies to it, you have the
571
+ option of following the terms and conditions either of that numbered
572
+ version or of any later version published by the Free Software
573
+ Foundation. If the Program does not specify a version number of the
574
+ GNU Affero General Public License, you may choose any version ever published
575
+ by the Free Software Foundation.
576
+
577
+ If the Program specifies that a proxy can decide which future
578
+ versions of the GNU Affero General Public License can be used, that proxy's
579
+ public statement of acceptance of a version permanently authorizes you
580
+ to choose that version for the Program.
581
+
582
+ Later license versions may give you additional or different
583
+ permissions. However, no additional obligations are imposed on any
584
+ author or copyright holder as a result of your choosing to follow a
585
+ later version.
586
+
587
+ 15. Disclaimer of Warranty.
588
+
589
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
590
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
591
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
592
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
593
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
594
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
595
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
596
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
597
+
598
+ 16. Limitation of Liability.
599
+
600
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
601
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
602
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
603
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
604
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
605
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
606
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
607
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
608
+ SUCH DAMAGES.
609
+
610
+ 17. Interpretation of Sections 15 and 16.
611
+
612
+ If the disclaimer of warranty and limitation of liability provided
613
+ above cannot be given local legal effect according to their terms,
614
+ reviewing courts shall apply local law that most closely approximates
615
+ an absolute waiver of all civil liability in connection with the
616
+ Program, unless a warranty or assumption of liability accompanies a
617
+ copy of the Program in return for a fee.
618
+
619
+ END OF TERMS AND CONDITIONS
620
+
621
+ How to Apply These Terms to Your New Programs
622
+
623
+ If you develop a new program, and you want it to be of the greatest
624
+ possible use to the public, the best way to achieve this is to make it
625
+ free software which everyone can redistribute and change under these terms.
626
+
627
+ To do so, attach the following notices to the program. It is safest
628
+ to attach them to the start of each source file to most effectively
629
+ state the exclusion of warranty; and each file should have at least
630
+ the "copyright" line and a pointer to where the full notice is found.
631
+
632
+ <one line to give the program's name and a brief idea of what it does.>
633
+ Copyright (C) <year> <name of author>
634
+
635
+ This program is free software: you can redistribute it and/or modify
636
+ it under the terms of the GNU Affero General Public License as published by
637
+ the Free Software Foundation, either version 3 of the License, or
638
+ (at your option) any later version.
639
+
640
+ This program is distributed in the hope that it will be useful,
641
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
642
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
643
+ GNU Affero General Public License for more details.
644
+
645
+ You should have received a copy of the GNU Affero General Public License
646
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
647
+
648
+ Also add information on how to contact you by electronic and paper mail.
649
+
650
+ If your software can interact with users remotely through a computer
651
+ network, you should also make sure that it provides a way for users to
652
+ get its source. For example, if your program is a web application, its
653
+ interface could display a "Source" link that leads users to an archive
654
+ of the code. There are many ways you could offer source, and different
655
+ solutions will be better for different programs; see section 13 for the
656
+ specific requirements.
657
+
658
+ You should also get your employer (if you work as a programmer) or school,
659
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
660
+ For more information on this, and how to apply and follow the GNU AGPL, see
661
+ <https://www.gnu.org/licenses/>.
app.py ADDED
@@ -0,0 +1,161 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import gradio as gr
2
+ import cv2
3
+ import tempfile
4
+ from ultralytics import YOLOv10
5
+
6
+
7
+ def yolov10_inference(image, video, model_id, image_size, conf_threshold):
8
+ model = YOLOv10.from_pretrained(f'jameslahm/{model_id}')
9
+ if image:
10
+ results = model.predict(source=image, imgsz=image_size, conf=conf_threshold)
11
+ annotated_image = results[0].plot()
12
+ return annotated_image[:, :, ::-1], None
13
+ else:
14
+ video_path = tempfile.mktemp(suffix=".webm")
15
+ with open(video_path, "wb") as f:
16
+ with open(video, "rb") as g:
17
+ f.write(g.read())
18
+
19
+ cap = cv2.VideoCapture(video_path)
20
+ fps = cap.get(cv2.CAP_PROP_FPS)
21
+ frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
22
+ frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
23
+
24
+ output_video_path = tempfile.mktemp(suffix=".webm")
25
+ out = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*'vp80'), fps, (frame_width, frame_height))
26
+
27
+ while cap.isOpened():
28
+ ret, frame = cap.read()
29
+ if not ret:
30
+ break
31
+
32
+ results = model.predict(source=frame, imgsz=image_size, conf=conf_threshold)
33
+ annotated_frame = results[0].plot()
34
+ out.write(annotated_frame)
35
+
36
+ cap.release()
37
+ out.release()
38
+
39
+ return None, output_video_path
40
+
41
+
42
+ def yolov10_inference_for_examples(image, model_path, image_size, conf_threshold):
43
+ annotated_image, _ = yolov10_inference(image, None, model_path, image_size, conf_threshold)
44
+ return annotated_image
45
+
46
+
47
+ def app():
48
+ with gr.Blocks():
49
+ with gr.Row():
50
+ with gr.Column():
51
+ image = gr.Image(type="pil", label="Image", visible=True)
52
+ video = gr.Video(label="Video", visible=False)
53
+ input_type = gr.Radio(
54
+ choices=["Image", "Video"],
55
+ value="Image",
56
+ label="Input Type",
57
+ )
58
+ model_id = gr.Dropdown(
59
+ label="Model",
60
+ choices=[
61
+ "yolov10n",
62
+ "yolov10s",
63
+ "yolov10m",
64
+ "yolov10b",
65
+ "yolov10l",
66
+ "yolov10x",
67
+ ],
68
+ value="yolov10m",
69
+ )
70
+ image_size = gr.Slider(
71
+ label="Image Size",
72
+ minimum=320,
73
+ maximum=1280,
74
+ step=32,
75
+ value=640,
76
+ )
77
+ conf_threshold = gr.Slider(
78
+ label="Confidence Threshold",
79
+ minimum=0.0,
80
+ maximum=1.0,
81
+ step=0.05,
82
+ value=0.25,
83
+ )
84
+ yolov10_infer = gr.Button(value="Detect Objects")
85
+
86
+ with gr.Column():
87
+ output_image = gr.Image(type="numpy", label="Annotated Image", visible=True)
88
+ output_video = gr.Video(label="Annotated Video", visible=False)
89
+
90
+ def update_visibility(input_type):
91
+ image = gr.update(visible=True) if input_type == "Image" else gr.update(visible=False)
92
+ video = gr.update(visible=False) if input_type == "Image" else gr.update(visible=True)
93
+ output_image = gr.update(visible=True) if input_type == "Image" else gr.update(visible=False)
94
+ output_video = gr.update(visible=False) if input_type == "Image" else gr.update(visible=True)
95
+
96
+ return image, video, output_image, output_video
97
+
98
+ input_type.change(
99
+ fn=update_visibility,
100
+ inputs=[input_type],
101
+ outputs=[image, video, output_image, output_video],
102
+ )
103
+
104
+ def run_inference(image, video, model_id, image_size, conf_threshold, input_type):
105
+ if input_type == "Image":
106
+ return yolov10_inference(image, None, model_id, image_size, conf_threshold)
107
+ else:
108
+ return yolov10_inference(None, video, model_id, image_size, conf_threshold)
109
+
110
+
111
+ yolov10_infer.click(
112
+ fn=run_inference,
113
+ inputs=[image, video, model_id, image_size, conf_threshold, input_type],
114
+ outputs=[output_image, output_video],
115
+ )
116
+
117
+ gr.Examples(
118
+ examples=[
119
+ [
120
+ "ultralytics/assets/bus.jpg",
121
+ "yolov10s",
122
+ 640,
123
+ 0.25,
124
+ ],
125
+ [
126
+ "ultralytics/assets/zidane.jpg",
127
+ "yolov10s",
128
+ 640,
129
+ 0.25,
130
+ ],
131
+ ],
132
+ fn=yolov10_inference_for_examples,
133
+ inputs=[
134
+ image,
135
+ model_id,
136
+ image_size,
137
+ conf_threshold,
138
+ ],
139
+ outputs=[output_image],
140
+ cache_examples='lazy',
141
+ )
142
+
143
+ gradio_app = gr.Blocks()
144
+ with gradio_app:
145
+ gr.HTML(
146
+ """
147
+ <h1 style='text-align: center'>
148
+ YOLOv10: Real-Time End-to-End Object Detection
149
+ </h1>
150
+ """)
151
+ gr.HTML(
152
+ """
153
+ <h3 style='text-align: center'>
154
+ <a href='https://arxiv.org/abs/2405.14458' target='_blank'>arXiv</a> | <a href='https://github.com/THU-MIG/yolov10' target='_blank'>github</a>
155
+ </h3>
156
+ """)
157
+ with gr.Row():
158
+ with gr.Column():
159
+ app()
160
+ if __name__ == '__main__':
161
+ gradio_app.launch()
docker/Dockerfile ADDED
@@ -0,0 +1,85 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Builds ultralytics/ultralytics:latest image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
3
+ # Image is CUDA-optimized for YOLOv8 single/multi-GPU training and inference
4
+
5
+ # Start FROM PyTorch image https://hub.docker.com/r/pytorch/pytorch or nvcr.io/nvidia/pytorch:23.03-py3
6
+ FROM pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime
7
+ RUN pip install --no-cache nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com
8
+
9
+ # Downloads to user config dir
10
+ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
11
+ https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
12
+ /root/.config/Ultralytics/
13
+
14
+ # Install linux packages
15
+ # g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
16
+ RUN apt update \
17
+ && apt install --no-install-recommends -y gcc git zip curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 build-essential
18
+
19
+ # Security updates
20
+ # https://security.snyk.io/vuln/SNYK-UBUNTU1804-OPENSSL-3314796
21
+ RUN apt upgrade --no-install-recommends -y openssl tar
22
+
23
+ # Create working directory
24
+ WORKDIR /usr/src/ultralytics
25
+
26
+ # Copy contents
27
+ # COPY . /usr/src/ultralytics # git permission issues inside container
28
+ RUN git clone https://github.com/ultralytics/ultralytics -b main /usr/src/ultralytics
29
+ ADD https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt /usr/src/ultralytics/
30
+
31
+ # Install pip packages
32
+ RUN python3 -m pip install --upgrade pip wheel
33
+ RUN pip install --no-cache -e ".[export]" albumentations comet pycocotools
34
+
35
+ # Run exports to AutoInstall packages
36
+ # Edge TPU export fails the first time so is run twice here
37
+ RUN yolo export model=tmp/yolov8n.pt format=edgetpu imgsz=32 || yolo export model=tmp/yolov8n.pt format=edgetpu imgsz=32
38
+ RUN yolo export model=tmp/yolov8n.pt format=ncnn imgsz=32
39
+ # Requires <= Python 3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
40
+ RUN pip install --no-cache "paddlepaddle>=2.6.0" x2paddle
41
+ # Fix error: `np.bool` was a deprecated alias for the builtin `bool` segmentation error in Tests
42
+ RUN pip install --no-cache numpy==1.23.5
43
+ # Remove exported models
44
+ RUN rm -rf tmp
45
+
46
+ # Set environment variables
47
+ ENV OMP_NUM_THREADS=1
48
+ # Avoid DDP error "MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library" https://github.com/pytorch/pytorch/issues/37377
49
+ ENV MKL_THREADING_LAYER=GNU
50
+
51
+
52
+ # Usage Examples -------------------------------------------------------------------------------------------------------
53
+
54
+ # Build and Push
55
+ # t=ultralytics/ultralytics:latest && sudo docker build -f docker/Dockerfile -t $t . && sudo docker push $t
56
+
57
+ # Pull and Run with access to all GPUs
58
+ # t=ultralytics/ultralytics:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all $t
59
+
60
+ # Pull and Run with access to GPUs 2 and 3 (inside container CUDA devices will appear as 0 and 1)
61
+ # t=ultralytics/ultralytics:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus '"device=2,3"' $t
62
+
63
+ # Pull and Run with local directory access
64
+ # t=ultralytics/ultralytics:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/datasets:/usr/src/datasets $t
65
+
66
+ # Kill all
67
+ # sudo docker kill $(sudo docker ps -q)
68
+
69
+ # Kill all image-based
70
+ # sudo docker kill $(sudo docker ps -qa --filter ancestor=ultralytics/ultralytics:latest)
71
+
72
+ # DockerHub tag update
73
+ # t=ultralytics/ultralytics:latest tnew=ultralytics/ultralytics:v6.2 && sudo docker pull $t && sudo docker tag $t $tnew && sudo docker push $tnew
74
+
75
+ # Clean up
76
+ # sudo docker system prune -a --volumes
77
+
78
+ # Update Ubuntu drivers
79
+ # https://www.maketecheasier.com/install-nvidia-drivers-ubuntu/
80
+
81
+ # DDP test
82
+ # python -m torch.distributed.run --nproc_per_node 2 --master_port 1 train.py --epochs 3
83
+
84
+ # GCP VM from Image
85
+ # docker.io/ultralytics/ultralytics:latest
docker/Dockerfile-arm64 ADDED
@@ -0,0 +1,51 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Builds ultralytics/ultralytics:latest-arm64 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
3
+ # Image is aarch64-compatible for Apple M1, M2, M3, Raspberry Pi and other ARM architectures
4
+
5
+ # Start FROM Ubuntu image https://hub.docker.com/_/ubuntu with "FROM arm64v8/ubuntu:22.04" (deprecated)
6
+ # Start FROM Debian image for arm64v8 https://hub.docker.com/r/arm64v8/debian (new)
7
+ FROM arm64v8/debian:bookworm-slim
8
+
9
+ # Downloads to user config dir
10
+ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
11
+ https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
12
+ /root/.config/Ultralytics/
13
+
14
+ # Install linux packages
15
+ # g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
16
+ # cmake and build-essential are needed to build onnxsim when exporting to tflite
17
+ RUN apt update \
18
+ && apt install --no-install-recommends -y python3-pip git zip curl htop gcc libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0 build-essential
19
+
20
+ # Create working directory
21
+ WORKDIR /usr/src/ultralytics
22
+
23
+ # Copy contents
24
+ # COPY . /usr/src/ultralytics # git permission issues inside container
25
+ RUN git clone https://github.com/ultralytics/ultralytics -b main /usr/src/ultralytics
26
+ ADD https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt /usr/src/ultralytics/
27
+
28
+ # Remove python3.11/EXTERNALLY-MANAGED to avoid 'externally-managed-environment' issue, Debian 12 Bookworm error
29
+ RUN rm -rf /usr/lib/python3.11/EXTERNALLY-MANAGED
30
+
31
+ # Install pip packages
32
+ RUN python3 -m pip install --upgrade pip wheel
33
+ RUN pip install --no-cache -e ".[export]"
34
+
35
+ # Creates a symbolic link to make 'python' point to 'python3'
36
+ RUN ln -sf /usr/bin/python3 /usr/bin/python
37
+
38
+
39
+ # Usage Examples -------------------------------------------------------------------------------------------------------
40
+
41
+ # Build and Push
42
+ # t=ultralytics/ultralytics:latest-arm64 && sudo docker build --platform linux/arm64 -f docker/Dockerfile-arm64 -t $t . && sudo docker push $t
43
+
44
+ # Run
45
+ # t=ultralytics/ultralytics:latest-arm64 && sudo docker run -it --ipc=host $t
46
+
47
+ # Pull and Run
48
+ # t=ultralytics/ultralytics:latest-arm64 && sudo docker pull $t && sudo docker run -it --ipc=host $t
49
+
50
+ # Pull and Run with local volume mounted
51
+ # t=ultralytics/ultralytics:latest-arm64 && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/datasets:/usr/src/datasets $t
docker/Dockerfile-conda ADDED
@@ -0,0 +1,40 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Builds ultralytics/ultralytics:latest-conda image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
3
+ # Image is optimized for Ultralytics Anaconda (https://anaconda.org/conda-forge/ultralytics) installation and usage
4
+
5
+ # Start FROM miniconda3 image https://hub.docker.com/r/continuumio/miniconda3
6
+ FROM continuumio/miniconda3:latest
7
+
8
+ # Downloads to user config dir
9
+ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
10
+ https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
11
+ /root/.config/Ultralytics/
12
+
13
+ # Install linux packages
14
+ RUN apt update \
15
+ && apt install --no-install-recommends -y libgl1
16
+
17
+ # Copy contents
18
+ ADD https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt .
19
+
20
+ # Install conda packages
21
+ # mkl required to fix 'OSError: libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory'
22
+ RUN conda config --set solver libmamba && \
23
+ conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia && \
24
+ conda install -c conda-forge ultralytics mkl
25
+ # conda install -c pytorch -c nvidia -c conda-forge pytorch torchvision pytorch-cuda=11.8 ultralytics mkl
26
+
27
+
28
+ # Usage Examples -------------------------------------------------------------------------------------------------------
29
+
30
+ # Build and Push
31
+ # t=ultralytics/ultralytics:latest-conda && sudo docker build -f docker/Dockerfile-conda -t $t . && sudo docker push $t
32
+
33
+ # Run
34
+ # t=ultralytics/ultralytics:latest-conda && sudo docker run -it --ipc=host $t
35
+
36
+ # Pull and Run
37
+ # t=ultralytics/ultralytics:latest-conda && sudo docker pull $t && sudo docker run -it --ipc=host $t
38
+
39
+ # Pull and Run with local volume mounted
40
+ # t=ultralytics/ultralytics:latest-conda && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/datasets:/usr/src/datasets $t
docker/Dockerfile-cpu ADDED
@@ -0,0 +1,57 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Builds ultralytics/ultralytics:latest-cpu image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
3
+ # Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLOv8 deployments
4
+
5
+ # Start FROM Ubuntu image https://hub.docker.com/_/ubuntu
6
+ FROM ubuntu:23.10
7
+
8
+ # Downloads to user config dir
9
+ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
10
+ https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
11
+ /root/.config/Ultralytics/
12
+
13
+ # Install linux packages
14
+ # g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
15
+ RUN apt update \
16
+ && apt install --no-install-recommends -y python3-pip git zip curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0
17
+
18
+ # Create working directory
19
+ WORKDIR /usr/src/ultralytics
20
+
21
+ # Copy contents
22
+ # COPY . /usr/src/ultralytics # git permission issues inside container
23
+ RUN git clone https://github.com/ultralytics/ultralytics -b main /usr/src/ultralytics
24
+ ADD https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt /usr/src/ultralytics/
25
+
26
+ # Remove python3.11/EXTERNALLY-MANAGED, or use 'pip install --break-system-packages', to avoid the 'externally-managed-environment' Ubuntu nightly error
27
+ RUN rm -rf /usr/lib/python3.11/EXTERNALLY-MANAGED
28
+
29
+ # Install pip packages
30
+ RUN python3 -m pip install --upgrade pip wheel
31
+ RUN pip install --no-cache -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu
32
+
33
+ # Run exports to AutoInstall packages
34
+ RUN yolo export model=tmp/yolov8n.pt format=edgetpu imgsz=32
35
+ RUN yolo export model=tmp/yolov8n.pt format=ncnn imgsz=32
36
+ # Requires <= Python 3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
37
+ # RUN pip install --no-cache paddlepaddle>=2.6.0 x2paddle
38
+ # Remove exported models
39
+ RUN rm -rf tmp
40
+
41
+ # Creates a symbolic link to make 'python' point to 'python3'
42
+ RUN ln -sf /usr/bin/python3 /usr/bin/python
43
+
44
+
45
+ # Usage Examples -------------------------------------------------------------------------------------------------------
46
+
47
+ # Build and Push
48
+ # t=ultralytics/ultralytics:latest-cpu && sudo docker build -f docker/Dockerfile-cpu -t $t . && sudo docker push $t
49
+
50
+ # Run
51
+ # t=ultralytics/ultralytics:latest-cpu && sudo docker run -it --ipc=host --name NAME $t
52
+
53
+ # Pull and Run
54
+ # t=ultralytics/ultralytics:latest-cpu && sudo docker pull $t && sudo docker run -it --ipc=host --name NAME $t
55
+
56
+ # Pull and Run with local volume mounted
57
+ # t=ultralytics/ultralytics:latest-cpu && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/datasets:/usr/src/datasets $t
docker/Dockerfile-jetson ADDED
@@ -0,0 +1,50 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Builds ultralytics/ultralytics:jetson image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
3
+ # Supports JetPack for YOLOv8 on Jetson Nano, TX1/TX2, Xavier NX, AGX Xavier, AGX Orin, and Orin NX
4
+
5
+ # Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch
6
+ FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3
7
+
8
+ # Downloads to user config dir
9
+ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
10
+ https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
11
+ /root/.config/Ultralytics/
12
+
13
+ # Install linux packages
14
+ # g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
15
+ RUN apt update \
16
+ && apt install --no-install-recommends -y gcc git zip curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0
17
+
18
+ # Create working directory
19
+ WORKDIR /usr/src/ultralytics
20
+
21
+ # Copy contents
22
+ # COPY . /usr/src/ultralytics # git permission issues inside container
23
+ RUN git clone https://github.com/ultralytics/ultralytics -b main /usr/src/ultralytics
24
+ ADD https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt /usr/src/ultralytics/
25
+
26
+ # Remove opencv-python from Ultralytics dependencies as it conflicts with opencv-python installed in base image
27
+ RUN grep -v "opencv-python" pyproject.toml > temp.toml && mv temp.toml pyproject.toml
28
+
29
+ # Install pip packages manually for TensorRT compatibility https://github.com/NVIDIA/TensorRT/issues/2567
30
+ RUN python3 -m pip install --upgrade pip wheel
31
+ RUN pip install --no-cache tqdm matplotlib pyyaml psutil pandas onnx "numpy==1.23"
32
+ RUN pip install --no-cache -e .
33
+
34
+ # Set environment variables
35
+ ENV OMP_NUM_THREADS=1
36
+
37
+
38
+ # Usage Examples -------------------------------------------------------------------------------------------------------
39
+
40
+ # Build and Push
41
+ # t=ultralytics/ultralytics:latest-jetson && sudo docker build --platform linux/arm64 -f docker/Dockerfile-jetson -t $t . && sudo docker push $t
42
+
43
+ # Run
44
+ # t=ultralytics/ultralytics:latest-jetson && sudo docker run -it --ipc=host $t
45
+
46
+ # Pull and Run
47
+ # t=ultralytics/ultralytics:latest-jetson && sudo docker pull $t && sudo docker run -it --ipc=host $t
48
+
49
+ # Pull and Run with NVIDIA runtime
50
+ # t=ultralytics/ultralytics:latest-jetson && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
docker/Dockerfile-python ADDED
@@ -0,0 +1,54 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Builds ultralytics/ultralytics:latest-cpu image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
3
+ # Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLOv8 deployments
4
+
5
+ # Use the official Python 3.10 slim-bookworm as base image
6
+ FROM python:3.10-slim-bookworm
7
+
8
+ # Downloads to user config dir
9
+ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
10
+ https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
11
+ /root/.config/Ultralytics/
12
+
13
+ # Install linux packages
14
+ # g++ required to build 'tflite_support' and 'lap' packages, libusb-1.0-0 required for 'tflite_support' package
15
+ RUN apt update \
16
+ && apt install --no-install-recommends -y python3-pip git zip curl htop libgl1 libglib2.0-0 libpython3-dev gnupg g++ libusb-1.0-0
17
+
18
+ # Create working directory
19
+ WORKDIR /usr/src/ultralytics
20
+
21
+ # Copy contents
22
+ # COPY . /usr/src/ultralytics # git permission issues inside container
23
+ RUN git clone https://github.com/ultralytics/ultralytics -b main /usr/src/ultralytics
24
+ ADD https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt /usr/src/ultralytics/
25
+
26
+ # Remove python3.11/EXTERNALLY-MANAGED, or use 'pip install --break-system-packages', to avoid the 'externally-managed-environment' Ubuntu nightly error
27
+ # RUN rm -rf /usr/lib/python3.11/EXTERNALLY-MANAGED
28
+
29
+ # Install pip packages
30
+ RUN python3 -m pip install --upgrade pip wheel
31
+ RUN pip install --no-cache -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu
32
+
33
+ # Run exports to AutoInstall packages
34
+ RUN yolo export model=tmp/yolov8n.pt format=edgetpu imgsz=32
35
+ RUN yolo export model=tmp/yolov8n.pt format=ncnn imgsz=32
36
+ # Requires <= Python 3.10, bug with paddlepaddle==2.5.0 https://github.com/PaddlePaddle/X2Paddle/issues/991
37
+ RUN pip install --no-cache "paddlepaddle>=2.6.0" x2paddle
38
+ # Remove exported models
39
+ RUN rm -rf tmp
40
+
41
+
42
+ # Usage Examples -------------------------------------------------------------------------------------------------------
43
+
44
+ # Build and Push
45
+ # t=ultralytics/ultralytics:latest-python && sudo docker build -f docker/Dockerfile-python -t $t . && sudo docker push $t
46
+
47
+ # Run
48
+ # t=ultralytics/ultralytics:latest-python && sudo docker run -it --ipc=host $t
49
+
50
+ # Pull and Run
51
+ # t=ultralytics/ultralytics:latest-python && sudo docker pull $t && sudo docker run -it --ipc=host $t
52
+
53
+ # Pull and Run with local volume mounted
54
+ # t=ultralytics/ultralytics:latest-python && sudo docker pull $t && sudo docker run -it --ipc=host -v "$(pwd)"/datasets:/usr/src/datasets $t
docker/Dockerfile-runner ADDED
@@ -0,0 +1,38 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ # Builds GitHub actions CI runner image for deployment to DockerHub https://hub.docker.com/r/ultralytics/ultralytics
3
+ # Image is CUDA-optimized for YOLOv8 single/multi-GPU training and inference tests
4
+
5
+ # Start FROM Ultralytics GPU image
6
+ FROM ultralytics/ultralytics:latest
7
+
8
+ # Set the working directory
9
+ WORKDIR /actions-runner
10
+
11
+ # Download and unpack the latest runner from https://github.com/actions/runner
12
+ RUN FILENAME=actions-runner-linux-x64-2.309.0.tar.gz && \
13
+ curl -o $FILENAME -L https://github.com/actions/runner/releases/download/v2.309.0/$FILENAME && \
14
+ tar xzf $FILENAME && \
15
+ rm $FILENAME
16
+
17
+ # Install runner dependencies
18
+ ENV RUNNER_ALLOW_RUNASROOT=1
19
+ ENV DEBIAN_FRONTEND=noninteractive
20
+ RUN ./bin/installdependencies.sh && \
21
+ apt-get -y install libicu-dev
22
+
23
+ # Inline ENTRYPOINT command to configure and start runner with default TOKEN and NAME
24
+ ENTRYPOINT sh -c './config.sh --url https://github.com/ultralytics/ultralytics \
25
+ --token ${GITHUB_RUNNER_TOKEN:-TOKEN} \
26
+ --name ${GITHUB_RUNNER_NAME:-NAME} \
27
+ --labels gpu-latest \
28
+ --replace && \
29
+ ./run.sh'
30
+
31
+
32
+ # Usage Examples -------------------------------------------------------------------------------------------------------
33
+
34
+ # Build and Push
35
+ # t=ultralytics/ultralytics:latest-runner && sudo docker build -f docker/Dockerfile-runner -t $t . && sudo docker push $t
36
+
37
+ # Pull and Run in detached mode with access to GPUs 0 and 1
38
+ # t=ultralytics/ultralytics:latest-runner && sudo docker run -d -e GITHUB_RUNNER_TOKEN=TOKEN -e GITHUB_RUNNER_NAME=NAME --ipc=host --gpus '"device=0,1"' $t
docs/README.md ADDED
@@ -0,0 +1,140 @@
1
+ <br>
2
+ <img src="https://raw.githubusercontent.com/ultralytics/assets/main/logo/Ultralytics_Logotype_Original.svg" width="320">
3
+
4
+ # 📚 Ultralytics Docs
5
+
6
+ Ultralytics Docs are the gateway to understanding and utilizing our cutting-edge machine learning tools. These documents are deployed to [https://docs.ultralytics.com](https://docs.ultralytics.com) for your convenience.
7
+
8
+ [![pages-build-deployment](https://github.com/ultralytics/docs/actions/workflows/pages/pages-build-deployment/badge.svg)](https://github.com/ultralytics/docs/actions/workflows/pages/pages-build-deployment) [![Check Broken links](https://github.com/ultralytics/docs/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/docs/actions/workflows/links.yml) [![Check Domains](https://github.com/ultralytics/docs/actions/workflows/check_domains.yml/badge.svg)](https://github.com/ultralytics/docs/actions/workflows/check_domains.yml) [![Ultralytics Actions](https://github.com/ultralytics/docs/actions/workflows/format.yml/badge.svg)](https://github.com/ultralytics/docs/actions/workflows/format.yml) <a href="https://ultralytics.com/discord"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
9
+
10
+ ## 🛠️ Installation
11
+
12
+ [![PyPI version](https://badge.fury.io/py/ultralytics.svg)](https://badge.fury.io/py/ultralytics) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics)
13
+
14
+ To install the ultralytics package in developer mode, ensure you have Git and Python 3 installed on your system. Then, follow these steps:
15
+
16
+ 1. Clone the ultralytics repository to your local machine using Git:
17
+
18
+ ```bash
19
+ git clone https://github.com/ultralytics/ultralytics.git
20
+ ```
21
+
22
+ 2. Navigate to the cloned repository's root directory:
23
+
24
+ ```bash
25
+ cd ultralytics
26
+ ```
27
+
28
+ 3. Install the package in developer mode using pip (or pip3 for Python 3):
29
+
30
+ ```bash
31
+ pip install -e '.[dev]'
32
+ ```
33
+
34
+ - This command installs the ultralytics package along with all development dependencies, allowing you to modify the package code and have the changes immediately reflected in your Python environment.
35
+
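+ As a quick sanity check, something like the following confirms that Python resolves the package from your clone rather than from a regular site-packages install (a minimal sketch; the exact path depends on where you cloned the repository):
+
+ ```python
+ import ultralytics
+
+ print(ultralytics.__version__)  # version of the editable install
+ print(ultralytics.__file__)  # should point inside your cloned ultralytics repository
+ ```
+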
36
+ ## 🚀 Building and Serving Locally
37
+
38
+ The `mkdocs serve` command builds and serves a local version of your MkDocs documentation, ideal for development and testing:
39
+
40
+ ```bash
41
+ mkdocs serve
42
+ ```
43
+
44
+ - #### Command Breakdown:
45
+
46
+ - `mkdocs` is the main MkDocs command-line interface.
47
+ - `serve` is the subcommand to build and locally serve your documentation.
48
+
49
+ - 🧐 Note:
50
+
51
+ - See changes to the docs in real time, as `mkdocs serve` supports live reloading.
52
+ - To stop the local server, press `CTRL+C`.
53
+
54
+ ## 🌍 Building and Serving Multi-Language
55
+
56
+ Supporting multi-language documentation? Follow these steps:
57
+
58
+ 1. Stage all new language \*.md files with Git:
59
+
60
+ ```bash
61
+ git add docs/**/*.md -f
62
+ ```
63
+
64
+ 2. Build all languages to the `/site` folder, ensuring relevant root-level files are present:
65
+
66
+ ```bash
67
+ # Clear existing /site directory
68
+ rm -rf site
69
+
70
+ # Loop through each language config file and build
71
+ mkdocs build -f docs/mkdocs.yml
72
+ for file in docs/mkdocs_*.yml; do
73
+ echo "Building MkDocs site with $file"
74
+ mkdocs build -f "$file"
75
+ done
76
+ ```
77
+
78
+ 3. To preview your site, initiate a simple HTTP server:
79
+
80
+ ```bash
81
+ cd site
82
+ python -m http.server
83
+ # Open in your preferred browser
84
+ ```
85
+
86
+ - 🖥️ Access the live site at `http://localhost:8000`.
87
+
88
+ ## 📤 Deploying Your Documentation Site
89
+
90
+ Choose a hosting provider and deployment method for your MkDocs documentation:
91
+
92
+ - Configure `mkdocs.yml` with deployment settings.
93
+ - Use `mkdocs gh-deploy` to build and deploy your site.
94
+
95
+ * ### GitHub Pages Deployment Example:
96
+ ```bash
97
+ mkdocs gh-deploy
98
+ ```
99
+
100
+ - Update the "Custom domain" in your repository's settings for a personalized URL.
101
+
102
+ ![GitHub Pages custom domain setting](https://user-images.githubusercontent.com/26833433/210150206-9e86dcd7-10af-43e4-9eb2-9518b3799eac.png)
103
+
104
+ - For detailed deployment guidance, consult the [MkDocs documentation](https://www.mkdocs.org/user-guide/deploying-your-docs/).
105
+
106
+ ## 💡 Contribute
107
+
108
+ We cherish the community's input as it drives Ultralytics open-source initiatives. Dive into the [Contributing Guide](https://docs.ultralytics.com/help/contributing) and share your thoughts via our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). A heartfelt thank you 🙏 to each contributor!
109
+
110
+ <!-- Pictorial representation of our dedicated contributor community -->
111
+
112
+ ![Ultralytics open-source contributors](https://github.com/ultralytics/assets/raw/main/im/image-contributors.png)
113
+
114
+ ## 📜 License
115
+
116
+ Ultralytics presents two licensing options:
117
+
118
+ - **AGPL-3.0 License**: Perfect for academia and open collaboration. Details are in the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file.
119
+ - **Enterprise License**: Tailored for commercial usage, offering a seamless blend of Ultralytics technology in your products. Learn more at [Ultralytics Licensing](https://ultralytics.com/license).
120
+
121
+ ## ✉️ Contact
122
+
123
+ For bug reports and feature requests, navigate to [GitHub Issues](https://github.com/ultralytics/docs/issues). Engage with peers and the Ultralytics team on [Discord](https://ultralytics.com/discord) for enriching conversations!
124
+
125
+ <br>
126
+ <div align="center">
127
+ <a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
128
+ <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
129
+ <a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
130
+ <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
131
+ <a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
132
+ <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
133
+ <a href="https://youtube.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
134
+ <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
135
+ <a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
136
+ <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
137
+ <a href="https://www.instagram.com/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="3%" alt="Ultralytics Instagram"></a>
138
+ <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
139
+ <a href="https://ultralytics.com/discord"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
140
+ </div>
docs/build_docs.py ADDED
@@ -0,0 +1,141 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ """
3
+ This Python script is designed to automate the building and post-processing of MkDocs documentation, particularly for
4
+ projects with multilingual content. It streamlines the workflow for generating localized versions of the documentation
5
+ and updating HTML links to ensure they are correctly formatted.
6
+
7
+ Key Features:
8
+ - Automated building of MkDocs documentation: The script compiles both the main documentation and
9
+ any localized versions specified in separate MkDocs configuration files.
10
+ - Post-processing of generated HTML files: After the documentation is built, the script updates all
11
+ HTML files to remove the '.md' extension from internal links. This ensures that links in the built
12
+ HTML documentation correctly point to other HTML pages rather than Markdown files, which is crucial
13
+ for proper navigation within the web-based documentation.
14
+
15
+ Usage:
16
+ - Run the script from the root directory of your MkDocs project.
17
+ - Ensure that MkDocs is installed and that all MkDocs configuration files (main and localized versions)
18
+ are present in the project directory.
19
+ - The script first builds the documentation using MkDocs, then scans the generated HTML files in the 'site'
20
+ directory to update the internal links.
21
+ - It's ideal for projects where the documentation is written in Markdown and needs to be served as a static website.
22
+
23
+ Note:
24
+ - This script is built to be run in an environment where Python and MkDocs are installed and properly configured.
25
+ """
26
+
27
+ import os
28
+ import re
29
+ import shutil
30
+ import subprocess
31
+ from pathlib import Path
32
+
33
+ from tqdm import tqdm
34
+
35
+ DOCS = Path(__file__).parent.resolve()
36
+ SITE = DOCS.parent / "site"
37
+
38
+
39
+ def build_docs(clone_repos=True):
40
+ """Build docs using mkdocs."""
41
+ if SITE.exists():
42
+ print(f"Removing existing {SITE}")
43
+ shutil.rmtree(SITE)
44
+
45
+ # Get hub-sdk repo
46
+ if clone_repos:
47
+ repo = "https://github.com/ultralytics/hub-sdk"
48
+ local_dir = DOCS.parent / Path(repo).name
49
+ if not local_dir.exists():
50
+ os.system(f"git clone {repo} {local_dir}")
51
+ os.system(f"git -C {local_dir} pull") # update repo
52
+ shutil.rmtree(DOCS / "en/hub/sdk", ignore_errors=True) # delete if exists
53
+ shutil.copytree(local_dir / "docs", DOCS / "en/hub/sdk") # for docs
54
+ shutil.rmtree(DOCS.parent / "hub_sdk", ignore_errors=True) # delete if exists
55
+ shutil.copytree(local_dir / "hub_sdk", DOCS.parent / "hub_sdk") # for mkdocstrings
56
+ print(f"Cloned/Updated {repo} in {local_dir}")
57
+
58
+ # Build the main documentation
59
+ print(f"Building docs from {DOCS}")
60
+ subprocess.run(f"mkdocs build -f {DOCS.parent}/mkdocs.yml", check=True, shell=True)
61
+ print(f"Site built at {SITE}")
62
+
63
+
64
+ def update_page_title(file_path: Path, new_title: str):
65
+ """Update the title of an HTML file."""
66
+
67
+ # Read the content of the file
68
+ with open(file_path, encoding="utf-8") as file:
69
+ content = file.read()
70
+
71
+ # Replace the existing title with the new title
72
+ updated_content = re.sub(r"<title>.*?</title>", f"<title>{new_title}</title>", content)
73
+
74
+ # Write the updated content back to the file
75
+ with open(file_path, "w", encoding="utf-8") as file:
76
+ file.write(updated_content)
77
+
78
+
79
+ def update_html_head(script=""):
80
+ """Update the HTML head section of each file."""
81
+ html_files = Path(SITE).rglob("*.html")
82
+ for html_file in tqdm(html_files, desc="Processing HTML files"):
83
+ with html_file.open("r", encoding="utf-8") as file:
84
+ html_content = file.read()
85
+
86
+ if script in html_content: # script already in HTML file
87
+ return
88
+
89
+ head_end_index = html_content.lower().rfind("</head>")
90
+ if head_end_index != -1:
91
+ # Add the specified JavaScript to the HTML file just before the end of the head tag.
92
+ new_html_content = html_content[:head_end_index] + script + html_content[head_end_index:]
93
+ with html_file.open("w", encoding="utf-8") as file:
94
+ file.write(new_html_content)
95
+
96
+
97
+ def update_subdir_edit_links(subdir="", docs_url=""):
98
+ """Update the HTML head section of each file."""
99
+ from bs4 import BeautifulSoup
100
+
101
+ if str(subdir[0]) == "/":
102
+ subdir = str(subdir[0])[1:]
103
+ html_files = (SITE / subdir).rglob("*.html")
104
+ for html_file in tqdm(html_files, desc="Processing subdir files"):
105
+ with html_file.open("r", encoding="utf-8") as file:
106
+ soup = BeautifulSoup(file, "html.parser")
107
+
108
+ # Find the anchor tag and update its href attribute
109
+ a_tag = soup.find("a", {"class": "md-content__button md-icon"})
110
+ if a_tag and a_tag["title"] == "Edit this page":
111
+ a_tag["href"] = f"{docs_url}{a_tag['href'].split(subdir)[-1]}"
112
+
113
+ # Write the updated HTML back to the file
114
+ with open(html_file, "w", encoding="utf-8") as file:
115
+ file.write(str(soup))
116
+
117
+
118
+ def main():
119
+ """Builds docs, updates titles and edit links, and prints local server command."""
120
+ build_docs()
121
+
122
+ # Update titles
123
+ update_page_title(SITE / "404.html", new_title="Ultralytics Docs - Not Found")
124
+
125
+ # Update edit links
126
+ update_subdir_edit_links(
127
+ subdir="hub/sdk/", # do not use leading slash
128
+ docs_url="https://github.com/ultralytics/hub-sdk/tree/develop/docs/",
129
+ )
130
+
131
+ # Update HTML file head section
132
+ script = ""
133
+ if any(script):
134
+ update_html_head(script)
135
+
136
+ # Show command to serve built website
137
+ print('Serve site at http://localhost:8000 with "python -m http.server --directory site"')
138
+
139
+
140
+ if __name__ == "__main__":
141
+ main()
docs/build_reference.py ADDED
@@ -0,0 +1,130 @@
1
+ # Ultralytics YOLO 🚀, AGPL-3.0 license
2
+ """
3
+ Helper file to build Ultralytics Docs reference section. Recursively walks through ultralytics dir and builds an MkDocs
4
+ reference section of *.md files composed of classes and functions, and also creates a nav menu for use in mkdocs.yaml.
5
+
6
+ Note: Must be run from repository root directory. Do not run from docs directory.
7
+ """
8
+
9
+ import re
10
+ from collections import defaultdict
11
+ from pathlib import Path
12
+
13
+ # Get package root i.e. /Users/glennjocher/PycharmProjects/ultralytics/ultralytics
14
+ from ultralytics.utils import ROOT as PACKAGE_DIR
15
+
16
+ # Constants
17
+ REFERENCE_DIR = PACKAGE_DIR.parent / "docs/en/reference"
18
+ GITHUB_REPO = "ultralytics/ultralytics"
19
+
20
+
21
+ def extract_classes_and_functions(filepath: Path) -> tuple:
22
+ """Extracts class and function names from a given Python file."""
23
+ content = filepath.read_text()
24
+ class_pattern = r"(?:^|\n)class\s(\w+)(?:\(|:)"
25
+ func_pattern = r"(?:^|\n)def\s(\w+)\("
26
+
27
+ classes = re.findall(class_pattern, content)
28
+ functions = re.findall(func_pattern, content)
29
+
30
+ return classes, functions
31
+
32
+
33
+ def create_markdown(py_filepath: Path, module_path: str, classes: list, functions: list):
34
+ """Creates a Markdown file containing the API reference for the given Python module."""
35
+ md_filepath = py_filepath.with_suffix(".md")
36
+
37
+ # Read existing content and keep header content between first two ---
38
+ header_content = ""
39
+ if md_filepath.exists():
40
+ existing_content = md_filepath.read_text()
41
+ header_parts = existing_content.split("---")
42
+ for part in header_parts:
43
+ if "description:" in part or "comments:" in part:
44
+ header_content += f"---{part}---\n\n"
45
+
46
+ module_name = module_path.replace(".__init__", "")
47
+ module_path = module_path.replace(".", "/")
48
+ url = f"https://github.com/{GITHUB_REPO}/blob/main/{module_path}.py"
49
+ edit = f"https://github.com/{GITHUB_REPO}/edit/main/{module_path}.py"
50
+ title_content = (
51
+ f"# Reference for `{module_path}.py`\n\n"
52
+ f"!!! Note\n\n"
53
+ f" This file is available at [{url}]({url}). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request]({edit}) 🛠️. Thank you 🙏!\n\n"
54
+ )
55
+ md_content = ["<br><br>\n"] + [f"## ::: {module_name}.{class_name}\n\n<br><br>\n" for class_name in classes]
56
+ md_content.extend(f"## ::: {module_name}.{func_name}\n\n<br><br>\n" for func_name in functions)
57
+ md_content = header_content + title_content + "\n".join(md_content)
58
+ if not md_content.endswith("\n"):
59
+ md_content += "\n"
60
+
61
+ md_filepath.parent.mkdir(parents=True, exist_ok=True)
62
+ md_filepath.write_text(md_content)
63
+
64
+ return md_filepath.relative_to(PACKAGE_DIR.parent)
65
+
66
+
67
+ def nested_dict() -> defaultdict:
68
+ """Creates and returns a nested defaultdict."""
69
+ return defaultdict(nested_dict)
70
+
71
+
72
+ def sort_nested_dict(d: dict) -> dict:
73
+ """Sorts a nested dictionary recursively."""
74
+ return {key: sort_nested_dict(value) if isinstance(value, dict) else value for key, value in sorted(d.items())}
75
+
76
+
77
+ def create_nav_menu_yaml(nav_items: list, save: bool = False):
78
+ """Creates a YAML file for the navigation menu based on the provided list of items."""
79
+ nav_tree = nested_dict()
80
+
81
+ for item_str in nav_items:
82
+ item = Path(item_str)
83
+ parts = item.parts
84
+ current_level = nav_tree["reference"]
85
+ for part in parts[2:-1]: # skip the first two parts (docs and reference) and the last part (filename)
86
+ current_level = current_level[part]
87
+
88
+ md_file_name = parts[-1].replace(".md", "")
89
+ current_level[md_file_name] = item
90
+
91
+ nav_tree_sorted = sort_nested_dict(nav_tree)
92
+
93
+ def _dict_to_yaml(d, level=0):
94
+ """Converts a nested dictionary to a YAML-formatted string with indentation."""
95
+ yaml_str = ""
96
+ indent = " " * level
97
+ for k, v in d.items():
98
+ if isinstance(v, dict):
99
+ yaml_str += f"{indent}- {k}:\n{_dict_to_yaml(v, level + 1)}"
100
+ else:
101
+ yaml_str += f"{indent}- {k}: {str(v).replace('docs/en/', '')}\n"
102
+ return yaml_str
103
+
104
+ # Print updated YAML reference section
105
+ print("Scan complete, new mkdocs.yaml reference section is:\n\n", _dict_to_yaml(nav_tree_sorted))
106
+
107
+ # Save new YAML reference section
108
+ if save:
109
+ (PACKAGE_DIR.parent / "nav_menu_updated.yml").write_text(_dict_to_yaml(nav_tree_sorted))
110
+
111
+
112
+ def main():
113
+ """Main function to extract class and function names, create Markdown files, and generate a YAML navigation menu."""
114
+ nav_items = []
115
+
116
+ for py_filepath in PACKAGE_DIR.rglob("*.py"):
117
+ classes, functions = extract_classes_and_functions(py_filepath)
118
+
119
+ if classes or functions:
120
+ py_filepath_rel = py_filepath.relative_to(PACKAGE_DIR)
121
+ md_filepath = REFERENCE_DIR / py_filepath_rel
122
+ module_path = f"{PACKAGE_DIR.name}.{py_filepath_rel.with_suffix('').as_posix().replace('/', '.')}"
123
+ md_rel_filepath = create_markdown(md_filepath, module_path, classes, functions)
124
+ nav_items.append(str(md_rel_filepath))
125
+
126
+ create_nav_menu_yaml(nav_items)
127
+
128
+
129
+ if __name__ == "__main__":
130
+ main()
docs/coming_soon_template.md ADDED
@@ -0,0 +1,34 @@
1
+ ---
2
+ description: Discover what's next for Ultralytics with our under-construction page, previewing new, groundbreaking AI and ML features coming soon.
3
+ keywords: Ultralytics, coming soon, under construction, new features, AI updates, ML advancements, YOLO, technology preview
4
+ ---
5
+
6
+ # Under Construction 🏗️🌟
7
+
8
+ Welcome to the Ultralytics "Under Construction" page! Here, we're hard at work developing the next generation of AI and ML innovations. This page serves as a teaser for the exciting updates and new features we're eager to share with you!
9
+
10
+ ## Exciting New Features on the Way 🎉
11
+
12
+ - **Innovative Breakthroughs:** Get ready for advanced features and services that will transform your AI and ML experience.
13
+ - **New Horizons:** Anticipate novel products that redefine AI and ML capabilities.
14
+ - **Enhanced Services:** We're upgrading our services for greater efficiency and user-friendliness.
15
+
16
+ ## Stay Updated 🚧
17
+
18
+ This placeholder page is your first stop for upcoming developments. Keep an eye out for:
19
+
20
+ - **Newsletter:** Subscribe [here](https://ultralytics.com/#newsletter) for the latest news.
21
+ - **Social Media:** Follow us [here](https://www.linkedin.com/company/ultralytics) for updates and teasers.
22
+ - **Blog:** Visit our [blog](https://ultralytics.com/blog) for detailed insights.
23
+
24
+ ## We Value Your Input 🗣️
25
+
26
+ Your feedback shapes our future releases. Share your thoughts and suggestions [here](https://ultralytics.com/contact).
27
+
28
+ ## Thank You, Community! 🌍
29
+
30
+ Your [contributions](https://docs.ultralytics.com/help/contributing) inspire our continuous [innovation](https://github.com/ultralytics/ultralytics). Stay tuned for the big reveal of what's next in AI and ML at Ultralytics!
31
+
32
+ ---
33
+
34
+ Excited for what's coming? Bookmark this page and get ready for a transformative AI and ML journey with Ultralytics! 🛠️🤖
docs/en/CNAME ADDED
@@ -0,0 +1 @@
1
+ docs.ultralytics.com
docs/en/guides/azureml-quickstart.md ADDED
@@ -0,0 +1,152 @@
1
+ ---
2
+ comments: true
3
+ description: Step-by-step Quickstart Guide to Running YOLOv8 Object Detection Models on AzureML for Fast Prototyping and Testing
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Azure Machine Learning, Quickstart Guide, Prototype, Compute Instance, Terminal, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # YOLOv8 🚀 on AzureML
8
+
9
+ ## What is Azure?
10
+
11
+ [Azure](https://azure.microsoft.com/) is Microsoft's cloud computing platform, designed to help organizations move their workloads to the cloud from on-premises data centers. With the full spectrum of cloud services including those for computing, databases, analytics, machine learning, and networking, users can pick and choose from these services to develop and scale new applications, or run existing applications, in the public cloud.
12
+
13
+ ## What is Azure Machine Learning (AzureML)?
14
+
15
+ Azure Machine Learning, commonly referred to as AzureML, is a fully managed cloud service that enables data scientists and developers to efficiently embed predictive analytics into their applications, helping organizations use massive data sets and bring all the benefits of the cloud to machine learning. AzureML offers a variety of services and capabilities aimed at making machine learning accessible, easy to use, and scalable. It provides capabilities like automated machine learning, drag-and-drop model training, as well as a robust Python SDK so that developers can make the most out of their machine learning models.
16
+
17
+ ## How Does AzureML Benefit YOLO Users?
18
+
19
+ For users of YOLO (You Only Look Once), AzureML provides a robust, scalable, and efficient platform to both train and deploy machine learning models. Whether you are looking to run quick prototypes or scale up to handle more extensive data, AzureML's flexible and user-friendly environment offers various tools and services to fit your needs. You can leverage AzureML to:
20
+
21
+ - Easily manage large datasets and computational resources for training.
22
+ - Utilize built-in tools for data preprocessing, feature selection, and model training.
23
+ - Collaborate more efficiently with capabilities for MLOps (Machine Learning Operations), including but not limited to monitoring, auditing, and versioning of models and data.
24
+
25
+ In the subsequent sections, you will find a quickstart guide detailing how to run YOLOv8 object detection models using AzureML, either from a compute terminal or a notebook.
26
+
27
+ ## Prerequisites
28
+
29
+ Before you can get started, make sure you have access to an AzureML workspace. If you don't have one, you can create a new [AzureML workspace](https://learn.microsoft.com/azure/machine-learning/concept-workspace?view=azureml-api-2) by following Azure's official documentation. This workspace acts as a centralized place to manage all AzureML resources.
30
+
31
+ ## Create a compute instance
32
+
33
+ From your AzureML workspace, select Compute > Compute instances > New, select the instance with the resources you need.
34
+
35
+ <p align="center">
36
+ <img width="1280" src="https://github.com/ouphi/ultralytics/assets/17216799/3e92fcc0-a08e-41a4-af81-d289cfe3b8f2" alt="Create Azure Compute Instance">
37
+ </p>
38
+
39
+ ## Quickstart from Terminal
40
+
41
+ Start your compute and open a Terminal:
42
+
43
+ <p align="center">
44
+ <img width="480" src="https://github.com/ouphi/ultralytics/assets/17216799/635152f1-f4a3-4261-b111-d416cb5ef357" alt="Open Terminal">
45
+ </p>
46
+
47
+ ### Create virtualenv
48
+
49
+ Create your conda virtualenv and install pip in it:
50
+
51
+ ```bash
52
+ conda create --name yolov8env -y
53
+ conda activate yolov8env
54
+ conda install pip -y
55
+ ```
56
+
57
+ Install the required dependencies:
58
+
59
+ ```bash
60
+ git clone https://github.com/ultralytics/ultralytics.git  # clone the repository first if you have not already
+ cd ultralytics
61
+ pip install -r requirements.txt
62
+ pip install ultralytics
63
+ pip install "onnx>=1.12.0"
64
+ ```
65
+
66
+ ### Perform YOLOv8 tasks
67
+
68
+ Predict:
69
+
70
+ ```bash
71
+ yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
72
+ ```
73
+
74
+ Train a detection model for 10 epochs with an initial learning_rate of 0.01:
75
+
76
+ ```bash
77
+ yolo train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
78
+ ```
79
+
80
+ You can find more [instructions to use the Ultralytics CLI here](../quickstart.md#use-ultralytics-with-cli).
81
+
82
+ ## Quickstart from a Notebook
83
+
84
+ ### Create a new IPython kernel
85
+
86
+ Open the compute Terminal.
87
+
88
+ <p align="center">
89
+ <img width="480" src="https://github.com/ouphi/ultralytics/assets/17216799/635152f1-f4a3-4261-b111-d416cb5ef357" alt="Open Terminal">
90
+ </p>
91
+
92
+ From your compute terminal, you need to create a new ipykernel that will be used by your notebook to manage your dependencies:
93
+
94
+ ```bash
95
+ conda create --name yolov8env -y
96
+ conda activate yolov8env
97
+ conda install pip -y
98
+ conda install ipykernel -y
99
+ python -m ipykernel install --user --name yolov8env --display-name "yolov8env"
100
+ ```
101
+
102
+ Close your terminal and create a new notebook. From your Notebook, you can select the new kernel.
103
+
104
+ Then you can open a Notebook cell and install the required dependencies:
105
+
106
+ ```bash
107
+ %%bash
108
+ source activate yolov8env
109
+ cd ultralytics
110
+ pip install -r requirements.txt
111
+ pip install ultralytics
112
+ pip install "onnx>=1.12.0"
113
+ ```
114
+
115
+ Note that we need to use `source activate yolov8env` in all the %%bash cells, to make sure that each %%bash cell uses the environment we want.
116
+
117
+ Run some predictions using the [Ultralytics CLI](../quickstart.md#use-ultralytics-with-cli):
118
+
119
+ ```bash
120
+ %%bash
121
+ source activate yolov8env
122
+ yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
123
+ ```
124
+
125
+ Or with the [Ultralytics Python interface](../quickstart.md#use-ultralytics-with-python), for example to train the model:
126
+
127
+ ```python
128
+ from ultralytics import YOLO
129
+
130
+ # Load a model
131
+ model = YOLO("yolov8n.pt") # load an official YOLOv8n model
132
+
133
+ # Use the model
134
+ model.train(data="coco128.yaml", epochs=3) # train the model
135
+ metrics = model.val() # evaluate model performance on the validation set
136
+ results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
137
+ path = model.export(format="onnx") # export the model to ONNX format
138
+ ```
139
+
140
+ You can use either the Ultralytics CLI or Python interface for running YOLOv8 tasks, as described in the terminal section above.
141
+
142
+ By following these steps, you should be able to get YOLOv8 running quickly on AzureML for quick trials. For more advanced uses, you may refer to the full AzureML documentation linked at the beginning of this guide.
143
+
144
+ ## Explore More with AzureML
145
+
146
+ This guide serves as an introduction to get you up and running with YOLOv8 on AzureML. However, it only scratches the surface of what AzureML can offer. To delve deeper and unlock the full potential of AzureML for your machine learning projects, consider exploring the following resources:
147
+
148
+ - [Create a Data Asset](https://learn.microsoft.com/azure/machine-learning/how-to-create-data-assets): Learn how to set up and manage your data assets effectively within the AzureML environment.
149
+ - [Initiate an AzureML Job](https://learn.microsoft.com/azure/machine-learning/how-to-train-model): Get a comprehensive understanding of how to kickstart your machine learning training jobs on AzureML.
150
+ - [Register a Model](https://learn.microsoft.com/azure/machine-learning/how-to-manage-models): Familiarize yourself with model management practices including registration, versioning, and deployment.
151
+ - [Train YOLOv8 with AzureML Python SDK](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azure-machine-learning-python-sdk-8268696be8ba): Explore a step-by-step guide on using the AzureML Python SDK to train your YOLOv8 models.
152
+ - [Train YOLOv8 with AzureML CLI](https://medium.com/@ouphi/how-to-train-the-yolov8-model-with-azureml-and-the-az-cli-73d3c870ba8e): Discover how to utilize the command-line interface for streamlined training and management of YOLOv8 models on AzureML.
docs/en/guides/conda-quickstart.md ADDED
@@ -0,0 +1,132 @@
1
+ ---
2
+ comments: true
3
+ description: Comprehensive guide to setting up and using Ultralytics YOLO models in a Conda environment. Learn how to install the package, manage dependencies, and get started with object detection projects.
4
+ keywords: Ultralytics, YOLO, Conda, environment setup, object detection, package installation, deep learning, machine learning, guide
5
+ ---
6
+
7
+ # Conda Quickstart Guide for Ultralytics
8
+
9
+ <p align="center">
10
+ <img width="800" src="https://user-images.githubusercontent.com/26833433/266324397-32119e21-8c86-43e5-a00e-79827d303d10.png" alt="Ultralytics Conda Package Visual">
11
+ </p>
12
+
13
+ This guide provides a comprehensive introduction to setting up a Conda environment for your Ultralytics projects. Conda is an open-source package and environment management system that offers an excellent alternative to pip for installing packages and dependencies. Its isolated environments make it particularly well-suited for data science and machine learning endeavors. For more details, visit the Ultralytics Conda package on [Anaconda](https://anaconda.org/conda-forge/ultralytics) and check out the Ultralytics feedstock repository for package updates on [GitHub](https://github.com/conda-forge/ultralytics-feedstock/).
14
+
15
+ [![Conda Recipe](https://img.shields.io/badge/recipe-ultralytics-green.svg)](https://anaconda.org/conda-forge/ultralytics) [![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/ultralytics.svg)](https://anaconda.org/conda-forge/ultralytics) [![Conda Version](https://img.shields.io/conda/vn/conda-forge/ultralytics.svg)](https://anaconda.org/conda-forge/ultralytics) [![Conda Platforms](https://img.shields.io/conda/pn/conda-forge/ultralytics.svg)](https://anaconda.org/conda-forge/ultralytics)
16
+
17
+ ## What You Will Learn
18
+
19
+ - Setting up a Conda environment
20
+ - Installing Ultralytics via Conda
21
+ - Initializing Ultralytics in your environment
22
+ - Using Ultralytics Docker images with Conda
23
+
24
+ ---
25
+
26
+ ## Prerequisites
27
+
28
+ - You should have Anaconda or Miniconda installed on your system. If not, download and install it from [Anaconda](https://www.anaconda.com/) or [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/).
29
+
30
+ ---
31
+
32
+ ## Setting up a Conda Environment
33
+
34
+ First, let's create a new Conda environment. Open your terminal and run the following command:
35
+
36
+ ```bash
37
+ conda create --name ultralytics-env python=3.8 -y
38
+ ```
39
+
40
+ Activate the new environment:
41
+
42
+ ```bash
43
+ conda activate ultralytics-env
44
+ ```
45
+
46
+ ---
47
+
48
+ ## Installing Ultralytics
49
+
50
+ You can install the Ultralytics package from the conda-forge channel. Execute the following command:
51
+
52
+ ```bash
53
+ conda install -c conda-forge ultralytics
54
+ ```
55
+
56
+ ### Note on CUDA Environment
57
+
58
+ If you're working in a CUDA-enabled environment, it's a good practice to install `ultralytics`, `pytorch`, and `pytorch-cuda` together to resolve any conflicts:
59
+
60
+ ```bash
61
+ conda install -c pytorch -c nvidia -c conda-forge pytorch torchvision pytorch-cuda=11.8 ultralytics
62
+ ```
63
+
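+ After installation, a quick check along these lines confirms that the CUDA-enabled build of PyTorch is active (a minimal sketch, assuming you run it inside the `ultralytics-env` environment created above):
+
+ ```python
+ import torch
+
+ print(torch.__version__)  # PyTorch version installed by conda
+ print(torch.cuda.is_available())  # True if the CUDA build can see a GPU
+ ```
+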
64
+ ---
65
+
66
+ ## Using Ultralytics
67
+
68
+ With Ultralytics installed, you can now start using its robust features for object detection, instance segmentation, and more. For example, to predict an image, you can run:
69
+
70
+ ```python
71
+ from ultralytics import YOLO
72
+
73
+ model = YOLO('yolov8n.pt') # initialize model
74
+ results = model('path/to/image.jpg') # perform inference
75
+ results[0].show() # display results for the first image
76
+ ```
77
+
78
+ ---
79
+
80
+ ## Ultralytics Conda Docker Image
81
+
82
+ If you prefer using Docker, Ultralytics offers Docker images with a Conda environment included. You can pull these images from [DockerHub](https://hub.docker.com/r/ultralytics/ultralytics).
83
+
84
+ Pull the latest Ultralytics image:
85
+
86
+ ```bash
87
+ # Set image name as a variable
88
+ t=ultralytics/ultralytics:latest-conda
89
+
90
+ # Pull the latest Ultralytics image from Docker Hub
91
+ sudo docker pull $t
92
+ ```
93
+
94
+ Run the image:
95
+
96
+ ```bash
97
+ # Run the Ultralytics image in a container with GPU support
98
+ sudo docker run -it --ipc=host --gpus all $t # all GPUs
99
+ sudo docker run -it --ipc=host --gpus '"device=2,3"' $t # specify GPUs
100
+ ```
101
+
102
+ ---
103
+
104
107
+
108
+ ## Speeding Up Installation with Libmamba
109
+
110
+ If you're looking to [speed up the package installation](https://www.anaconda.com/blog/a-faster-conda-for-a-growing-community) process in Conda, you can opt to use `libmamba`, a fast, cross-platform, and dependency-aware package manager that serves as an alternative solver to Conda's default.
111
+
112
+ ### How to Enable Libmamba
113
+
114
+ To enable `libmamba` as the solver for Conda, you can perform the following steps:
115
+
116
+ 1. First, install the `conda-libmamba-solver` package. This can be skipped if your Conda version is 4.11 or above, as `libmamba` is included by default.
117
+
118
+ ```bash
119
+ conda install conda-libmamba-solver
120
+ ```
121
+
122
+ 2. Next, configure Conda to use `libmamba` as the solver:
123
+
124
+ ```bash
125
+ conda config --set solver libmamba
126
+ ```
127
+
128
+ And that's it! Your Conda installation will now use `libmamba` as the solver, which should result in a faster package installation process.
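+
+ To double-check which solver is active, you can inspect your Conda configuration (assuming a Conda release recent enough to support the `solver` setting):
+
+ ```bash
+ # Show the currently configured solver
+ conda config --show solver
+ ```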
129
+
130
+ ---
131
+
132
+ Congratulations! You have successfully set up a Conda environment, installed the Ultralytics package, and are now ready to explore its rich functionalities. Feel free to dive deeper into the [Ultralytics documentation](../index.md) for more advanced tutorials and examples.
docs/en/guides/coral-edge-tpu-on-raspberry-pi.md ADDED
@@ -0,0 +1,140 @@
1
+ ---
2
+ comments: true
3
+ description: Guide on how to use Ultralytics with a Coral Edge TPU on a Raspberry Pi for increased inference performance.
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Coral, Edge TPU, Raspberry Pi, embedded, edge compute, sbc, accelerator, mobile
5
+ ---
6
+
7
+ # Coral Edge TPU on a Raspberry Pi with Ultralytics YOLOv8 🚀
8
+
9
+ <p align="center">
10
+ <img width="800" src="https://images.ctfassets.net/2lpsze4g694w/5XK2dV0w55U0TefijPli1H/bf0d119d77faef9a5d2cc0dad2aa4b42/Edge-TPU-USB-Accelerator-and-Pi.jpg?w=800" alt="Raspberry Pi single board computer with USB Edge TPU accelerator">
11
+ </p>
12
+
13
+ ## What is a Coral Edge TPU?
14
+
15
+ The Coral Edge TPU is a compact device that adds an Edge TPU coprocessor to your system. It enables low-power, high-performance ML inference for TensorFlow Lite models. Read more at the [Coral Edge TPU home page](https://coral.ai/products/accelerator).
16
+
17
+ ## Boost Raspberry Pi Model Performance with Coral Edge TPU
18
+
19
+ Many people want to run their models on an embedded or mobile device such as a Raspberry Pi, since they are very power efficient and can be used in many different applications. However, the inference performance on these devices is usually poor even when using formats like [onnx](../integrations/onnx.md) or [openvino](../integrations/openvino.md). The Coral Edge TPU is a great solution to this problem, since it can be used with a Raspberry Pi and accelerate inference performance greatly.
20
+
21
+ ## Edge TPU on Raspberry Pi with TensorFlow Lite (New)⭐
22
+
23
+ The [existing guide](https://coral.ai/docs/accelerator/get-started/) by Coral on how to use the Edge TPU with a Raspberry Pi is outdated, and the current Coral Edge TPU runtime builds do not work with the current TensorFlow Lite runtime versions anymore. In addition to that, Google seems to have completely abandoned the Coral project, and there have not been any updates between 2021 and 2024. This guide will show you how to get the Edge TPU working with the latest versions of the TensorFlow Lite runtime and an updated Coral Edge TPU runtime on a Raspberry Pi single board computer (SBC).
24
+
25
+ ## Prerequisites
26
+
27
+ - [Raspberry Pi 4B](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/) (2GB or more recommended) or [Raspberry Pi 5](https://www.raspberrypi.com/products/raspberry-pi-5/) (Recommended)
28
+ - [Raspberry Pi OS](https://www.raspberrypi.com/software/) Bullseye/Bookworm (64-bit) with desktop (Recommended)
29
+ - [Coral USB Accelerator](https://coral.ai/products/accelerator/)
30
+ - A non-ARM based platform for exporting an Ultralytics PyTorch model
31
+
32
+ ## Installation Walkthrough
33
+
34
+ This guide assumes that you already have a working Raspberry Pi OS install and have installed `ultralytics` and all dependencies. To get `ultralytics` installed, visit the [quickstart guide](../quickstart.md) to get set up before continuing here.
35
+
36
+ ### Installing the Edge TPU runtime
37
+
38
+ First, we need to install the Edge TPU runtime. There are many different versions available, so you need to choose the right version for your operating system.
39
+
40
+ | Raspberry Pi OS | High frequency mode | Version to download |
41
+ |-----------------|:-------------------:|--------------------------------------------|
42
+ | Bullseye 32bit | No | `libedgetpu1-std_ ... .bullseye_armhf.deb` |
43
+ | Bullseye 64bit | No | `libedgetpu1-std_ ... .bullseye_arm64.deb` |
44
+ | Bullseye 32bit | Yes | `libedgetpu1-max_ ... .bullseye_armhf.deb` |
45
+ | Bullseye 64bit | Yes | `libedgetpu1-max_ ... .bullseye_arm64.deb` |
46
+ | Bookworm 32bit | No | `libedgetpu1-std_ ... .bookworm_armhf.deb` |
47
+ | Bookworm 64bit | No | `libedgetpu1-std_ ... .bookworm_arm64.deb` |
48
+ | Bookworm 32bit | Yes | `libedgetpu1-max_ ... .bookworm_armhf.deb` |
49
+ | Bookworm 64bit | Yes | `libedgetpu1-max_ ... .bookworm_arm64.deb` |
50
+
51
+ [Download the latest version from here](https://github.com/feranick/libedgetpu/releases).
52
+
53
+ After downloading the file, you can install it with the following command:
54
+
55
+ ```bash
56
+ sudo dpkg -i path/to/package.deb
57
+ ```
58
+
59
+ After installing the runtime, plug your Coral Edge TPU into a USB 3.0 port on your Raspberry Pi. If the device was already connected, unplug it and plug it back in so that the new `udev` rule added during installation can take effect.
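+
+ As an optional sanity check, you can list the connected USB devices to confirm the accelerator is visible; it is typically reported as "Global Unichip Corp." before first use and as "Google Inc." afterwards:
+
+ ```bash
+ # List USB devices and look for the Coral accelerator
+ lsusb
+ ```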
60
+
61
+ ???+ warning "Important"
62
+
63
+ If you already have the Coral Edge TPU runtime installed, uninstall it using the following command.
64
+
65
+ ```bash
66
+ # If you installed the standard version
67
+ sudo apt remove libedgetpu1-std
68
+
69
+ # If you installed the high frequency version
70
+ sudo apt remove libedgetpu1-max
71
+ ```
72
+
73
+ ## Export your model to an Edge TPU compatible model
74
+
75
+ To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run the export on Google Colab, an x86_64 Linux machine, the official [Ultralytics Docker container](docker-quickstart.md), or [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See [Export Mode](../modes/export.md) for the available arguments.
76
+
77
+ !!! Example "Exporting the model"
78
+
79
+ === "Python"
80
+
81
+ ```python
82
+ from ultralytics import YOLO
83
+
84
+ # Load a model
85
+ model = YOLO('path/to/model.pt') # Load an official model or custom model
86
+
87
+ # Export the model
88
+ model.export(format='edgetpu')
89
+ ```
90
+
91
+ === "CLI"
92
+
93
+ ```bash
94
+ yolo export model=path/to/model.pt format=edgetpu # Export an official model or custom model
95
+ ```
96
+
97
+ The exported model will be saved in the `<model_name>_saved_model/` folder with the name `<model_name>_full_integer_quant_edgetpu.tflite`.
98
+
99
+ ## Running the model
100
+
101
+ After exporting your model, you can run inference with it using the following code:
102
+
103
+ !!! Example "Running the model"
104
+
105
+ === "Python"
106
+
107
+ ```python
108
+ from ultralytics import YOLO
109
+
110
+ # Load a model
111
+ model = YOLO('path/to/edgetpu_model.tflite') # Load an official model or custom model
112
+
113
+ # Run Prediction
114
+ model.predict("path/to/source.png")
115
+ ```
116
+
117
+ === "CLI"
118
+
119
+ ```bash
120
+ yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png # Predict with an official model or custom model
121
+ ```
122
+
123
+ See the [Predict](../modes/predict.md) page for comprehensive details on prediction mode.
124
+
125
+ ???+ warning "Important"
126
+
127
+ You should run the model using `tflite-runtime` and not `tensorflow`.
128
+ If `tensorflow` is installed, uninstall tensorflow with the following command:
129
+
130
+ ```bash
131
+ pip uninstall tensorflow tensorflow-aarch64
132
+ ```
133
+
134
+ Then install/update `tflite-runtime`:
135
+
136
+ ```bash
137
+ pip install -U tflite-runtime
138
+ ```
139
+
140
+ If you want a `tflite-runtime` wheel for `tensorflow` 2.15.0 download it from [here](https://github.com/feranick/TFlite-builds/releases) and install it using `pip` or your package manager of choice.
docs/en/guides/distance-calculation.md ADDED
@@ -0,0 +1,107 @@
1
+ ---
2
+ comments: true
3
+ description: Distance Calculation Using Ultralytics YOLOv8
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Distance Calculation, Object Tracking, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # Distance Calculation using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is Distance Calculation?
10
+
11
+ Distance calculation is the measurement of the gap between two objects within a specified space. In the case of [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), the distance between two user-selected bounding boxes is computed from their centroids.
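+
+ As a minimal illustration of the underlying idea (separate from the solution class shown below, and using made-up box coordinates), the result is simply the Euclidean distance between the two box centroids in pixels:
+
+ ```python
+ import math
+
+ # Two example boxes in (x1, y1, x2, y2) pixel format (made-up values)
+ box1 = (100, 150, 200, 250)
+ box2 = (400, 180, 520, 300)
+
+ # Centroid of each box
+ c1 = ((box1[0] + box1[2]) / 2, (box1[1] + box1[3]) / 2)
+ c2 = ((box2[0] + box2[2]) / 2, (box2[1] + box2[3]) / 2)
+
+ # Euclidean distance between the two centroids, in pixels
+ print(f"Centroid distance: {math.dist(c1, c2):.1f} px")
+ ```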
12
+
13
+ <p align="center">
14
+ <br>
15
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/LE8am1QoVn4"
16
+ title="YouTube video player" frameborder="0"
17
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
18
+ allowfullscreen>
19
+ </iframe>
20
+ <br>
21
+ <strong>Watch:</strong> Distance Calculation using Ultralytics YOLOv8
22
+ </p>
23
+
24
+ ## Visuals
25
+
26
+ | Distance Calculation using Ultralytics YOLOv8 |
27
+ |:-----------------------------------------------------------------------------------------------------------------------------------------------:|
28
+ | ![Ultralytics YOLOv8 Distance Calculation](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/6b6b735d-3c49-4b84-a022-2bf6e3c72f8b) |
29
+
30
+ ## Advantages of Distance Calculation
31
+
32
+ - **Localization Precision:** Enhances accurate spatial positioning in computer vision tasks.
33
+ - **Size Estimation:** Allows estimation of physical sizes for better contextual understanding.
34
+ - **Scene Understanding:** Contributes to a 3D understanding of the environment for improved decision-making.
35
+
36
+ ???+ tip "Distance Calculation"
37
+
38
+ - Click on any two bounding boxes with Left Mouse click for distance calculation
39
+
40
+ !!! Example "Distance Calculation using YOLOv8 Example"
41
+
42
+ === "Video Stream"
43
+
44
+ ```python
45
+ from ultralytics import YOLO
46
+ from ultralytics.solutions import distance_calculation
47
+ import cv2
48
+
49
+ model = YOLO("yolov8n.pt")
50
+ names = model.model.names
51
+
52
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
53
+ assert cap.isOpened(), "Error reading video file"
54
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
55
+
56
+ # Video writer
57
+ video_writer = cv2.VideoWriter("distance_calculation.avi",
58
+ cv2.VideoWriter_fourcc(*'mp4v'),
59
+ fps,
60
+ (w, h))
61
+
62
+ # Init distance-calculation obj
63
+ dist_obj = distance_calculation.DistanceCalculation()
64
+ dist_obj.set_args(names=names, view_img=True)
65
+
66
+ while cap.isOpened():
67
+ success, im0 = cap.read()
68
+ if not success:
69
+ print("Video frame is empty or video processing has been successfully completed.")
70
+ break
71
+
72
+ tracks = model.track(im0, persist=True, show=False)
73
+ im0 = dist_obj.start_process(im0, tracks)
74
+ video_writer.write(im0)
75
+
76
+ cap.release()
77
+ video_writer.release()
78
+ cv2.destroyAllWindows()
79
+
80
+ ```
81
+
82
+ ???+ tip "Note"
83
+
84
+ - Mouse Right Click will delete all drawn points
85
+ - Mouse Left Click can be used to draw points
86
+
87
+ ### Optional Arguments `set_args`
88
+
89
+ | Name | Type | Default | Description |
90
+ |------------------|--------|-----------------|--------------------------------------------------------|
91
+ | `names` | `dict` | `None` | Classes names |
92
+ | `view_img` | `bool` | `False` | Display frames with counts |
93
+ | `line_thickness` | `int` | `2` | Increase bounding boxes thickness |
94
+ | `line_color` | `RGB` | `(255, 255, 0)` | Line Color for centroids mapping on two bounding boxes |
95
+ | `centroid_color` | `RGB` | `(255, 0, 255)` | Centroid color for each bounding box |
96
+
97
+ ### Arguments `model.track`
98
+
99
+ | Name | Type | Default | Description |
100
+ |-----------|---------|----------------|-------------------------------------------------------------|
101
+ | `source` | `im0` | `None` | source directory for images or videos |
102
+ | `persist` | `bool` | `False` | persisting tracks between frames |
103
+ | `tracker` | `str` | `botsort.yaml` | Tracking method 'bytetrack' or 'botsort' |
104
+ | `conf` | `float` | `0.3` | Confidence Threshold |
105
+ | `iou` | `float` | `0.5` | IOU Threshold |
106
+ | `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
107
+ | `verbose` | `bool` | `True` | Display the object tracking results |
docs/en/guides/docker-quickstart.md ADDED
@@ -0,0 +1,119 @@
1
+ ---
2
+ comments: true
3
+ description: Complete guide to setting up and using Ultralytics YOLO models with Docker. Learn how to install Docker, manage GPU support, and run YOLO models in isolated containers.
4
+ keywords: Ultralytics, YOLO, Docker, GPU, containerization, object detection, package installation, deep learning, machine learning, guide
5
+ ---
6
+
7
+ # Docker Quickstart Guide for Ultralytics
8
+
9
+ <p align="center">
10
+ <img width="800" src="https://user-images.githubusercontent.com/26833433/270173601-fc7011bd-e67c-452f-a31a-aa047dcd2771.png" alt="Ultralytics Docker Package Visual">
11
+ </p>
12
+
13
+ This guide serves as a comprehensive introduction to setting up a Docker environment for your Ultralytics projects. [Docker](https://docker.com/) is a platform for developing, shipping, and running applications in containers. It is particularly beneficial for ensuring that the software will always run the same, regardless of where it's deployed. For more details, visit the Ultralytics Docker repository on [Docker Hub](https://hub.docker.com/r/ultralytics/ultralytics).
14
+
15
+ [![Docker Pulls](https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker)](https://hub.docker.com/r/ultralytics/ultralytics)
16
+
17
+ ## What You Will Learn
18
+
19
+ - Setting up Docker with NVIDIA support
20
+ - Installing Ultralytics Docker images
21
+ - Running Ultralytics in a Docker container
22
+ - Mounting local directories into the container
23
+
24
+ ---
25
+
26
+ ## Prerequisites
27
+
28
+ - Make sure Docker is installed on your system. If not, you can download and install it from [Docker's website](https://www.docker.com/products/docker-desktop).
29
+ - Ensure that your system has an NVIDIA GPU and NVIDIA drivers are installed.
30
+
31
+ ---
32
+
33
+ ## Setting up Docker with NVIDIA Support
34
+
35
+ First, verify that the NVIDIA drivers are properly installed by running:
36
+
37
+ ```bash
38
+ nvidia-smi
39
+ ```
40
+
41
+ ### Installing NVIDIA Docker Runtime
42
+
43
+ Now, let's install the NVIDIA Docker runtime to enable GPU support in Docker containers:
44
+
45
+ ```bash
46
+ # Add NVIDIA package repositories
47
+ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
48
+ distribution=$(lsb_release -cs)
49
+ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
50
+
51
+ # Install NVIDIA Docker runtime
52
+ sudo apt-get update
53
+ sudo apt-get install -y nvidia-docker2
54
+
55
+ # Restart Docker service to apply changes
56
+ sudo systemctl restart docker
57
+ ```
58
+
59
+ ### Verify NVIDIA Runtime with Docker
60
+
61
+ Run `docker info | grep -i runtime` to ensure that `nvidia` appears in the list of runtimes:
62
+
63
+ ```bash
64
+ docker info | grep -i runtime
65
+ ```
66
+
67
+ ---
68
+
69
+ ## Installing Ultralytics Docker Images
70
+
71
+ Ultralytics offers several Docker images optimized for various platforms and use-cases:
72
+
73
+ - **Dockerfile:** GPU image, ideal for training.
74
+ - **Dockerfile-arm64:** For ARM64 architecture, suitable for devices like [Raspberry Pi](raspberry-pi.md).
75
+ - **Dockerfile-cpu:** CPU-only version for inference and non-GPU environments.
76
+ - **Dockerfile-jetson:** Optimized for NVIDIA Jetson devices.
77
+ - **Dockerfile-python:** Minimal Python environment for lightweight applications.
78
+ - **Dockerfile-conda:** Includes [Miniconda3](https://docs.conda.io/projects/miniconda/en/latest/) and Ultralytics package installed via Conda.
79
+
80
+ To pull the latest image:
81
+
82
+ ```bash
83
+ # Set image name as a variable
84
+ t=ultralytics/ultralytics:latest
85
+
86
+ # Pull the latest Ultralytics image from Docker Hub
87
+ sudo docker pull $t
88
+ ```
89
+
90
+ ---
91
+
92
+ ## Running Ultralytics in a Docker Container
93
+
94
+ Here's how to execute the Ultralytics Docker container:
95
+
96
+ ```bash
97
+ # Run with all GPUs
98
+ sudo docker run -it --ipc=host --gpus all $t
99
+
100
+ # Run specifying which GPUs to use
101
+ sudo docker run -it --ipc=host --gpus '"device=2,3"' $t
102
+ ```
103
+
104
+ The `-it` flag assigns a pseudo-TTY and keeps stdin open, allowing you to interact with the container. The `--ipc=host` flag enables sharing of the host's IPC namespace, which is essential for sharing memory between processes. The `--gpus` flag allows the container to access the host's GPUs.
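+
+ You can also append a command to run directly inside the container, for example a quick one-off prediction (the model and source below are just illustrative values):
+
+ ```bash
+ # Run a one-off YOLO prediction inside the container
+ sudo docker run -it --ipc=host --gpus all $t yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
+ ```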
105
+
106
+ ### Note on File Accessibility
107
+
108
+ To work with files on your local machine within the container, you can use Docker volumes:
109
+
110
+ ```bash
111
+ # Mount a local directory into the container
112
+ sudo docker run -it --ipc=host --gpus all -v /path/on/host:/path/in/container $t
113
+ ```
114
+
115
+ Replace `/path/on/host` with the directory path on your local machine and `/path/in/container` with the desired path inside the Docker container.
116
+
117
+ ---
118
+
119
+ Congratulations! You're now set up to use Ultralytics with Docker and ready to take advantage of its powerful capabilities. For alternate installation methods, feel free to explore the [Ultralytics quickstart documentation](../quickstart.md).
docs/en/guides/heatmaps.md ADDED
@@ -0,0 +1,301 @@
1
+ ---
2
+ comments: true
3
+ description: Advanced Data Visualization with Ultralytics YOLOv8 Heatmaps
4
+ keywords: Ultralytics, YOLOv8, Advanced Data Visualization, Heatmap Technology, Object Detection and Tracking, Jupyter Notebook, Python SDK, Command Line Interface
5
+ ---
6
+
7
+ # Advanced Data Visualization: Heatmaps using Ultralytics YOLOv8 🚀
8
+
9
+ ## Introduction to Heatmaps
10
+
11
+ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) transforms complex data into a vibrant, color-coded matrix. This visual tool employs a spectrum of colors to represent varying data values, where warmer hues indicate higher intensities and cooler tones signify lower values. Heatmaps excel in visualizing intricate data patterns, correlations, and anomalies, offering an accessible and engaging approach to data interpretation across diverse domains.
12
+
13
+ <p align="center">
14
+ <br>
15
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/4ezde5-nZZw"
16
+ title="YouTube video player" frameborder="0"
17
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
18
+ allowfullscreen>
19
+ </iframe>
20
+ <br>
21
+ <strong>Watch:</strong> Heatmaps using Ultralytics YOLOv8
22
+ </p>
23
+
24
+ ## Why Choose Heatmaps for Data Analysis?
25
+
26
+ - **Intuitive Data Distribution Visualization:** Heatmaps simplify the comprehension of data concentration and distribution, converting complex datasets into easy-to-understand visual formats.
27
+ - **Efficient Pattern Detection:** By visualizing data in heatmap format, it becomes easier to spot trends, clusters, and outliers, facilitating quicker analysis and insights.
28
+ - **Enhanced Spatial Analysis and Decision-Making:** Heatmaps are instrumental in illustrating spatial relationships, aiding in decision-making processes in sectors such as business intelligence, environmental studies, and urban planning.
29
+
30
+ ## Real World Applications
31
+
32
+ | Transportation | Retail |
33
+ |:-----------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------:|
34
+ | ![Ultralytics YOLOv8 Transportation Heatmap](https://github.com/RizwanMunawar/ultralytics/assets/62513924/288d7053-622b-4452-b4e4-1f41aeb764aa) | ![Ultralytics YOLOv8 Retail Heatmap](https://github.com/RizwanMunawar/ultralytics/assets/62513924/edef75ad-50a7-4c0a-be4a-a66cdfc12802) |
35
+ | Ultralytics YOLOv8 Transportation Heatmap | Ultralytics YOLOv8 Retail Heatmap |
36
+
37
+ !!! tip "Heatmap Configuration"
38
+
39
+ - `heatmap_alpha`: Ensure this value is within the range (0.0 - 1.0).
40
+ - `decay_factor`: Used to fade out the heatmap after an object is no longer in the frame; its value should also be within the range (0.0 - 1.0).
41
+
42
+ !!! Example "Heatmaps using Ultralytics YOLOv8 Example"
43
+
44
+ === "Heatmap"
45
+
46
+ ```python
47
+ from ultralytics import YOLO
48
+ from ultralytics.solutions import heatmap
49
+ import cv2
50
+
51
+ model = YOLO("yolov8n.pt")
52
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
53
+ assert cap.isOpened(), "Error reading video file"
54
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
55
+
56
+ # Video writer
57
+ video_writer = cv2.VideoWriter("heatmap_output.avi",
58
+ cv2.VideoWriter_fourcc(*'mp4v'),
59
+ fps,
60
+ (w, h))
61
+
62
+ # Init heatmap
63
+ heatmap_obj = heatmap.Heatmap()
64
+ heatmap_obj.set_args(colormap=cv2.COLORMAP_PARULA,
65
+ imw=w,
66
+ imh=h,
67
+ view_img=True,
68
+ shape="circle")
69
+
70
+ while cap.isOpened():
71
+ success, im0 = cap.read()
72
+ if not success:
73
+ print("Video frame is empty or video processing has been successfully completed.")
74
+ break
75
+ tracks = model.track(im0, persist=True, show=False)
76
+
77
+ im0 = heatmap_obj.generate_heatmap(im0, tracks)
78
+ video_writer.write(im0)
79
+
80
+ cap.release()
81
+ video_writer.release()
82
+ cv2.destroyAllWindows()
83
+
84
+ ```
85
+
86
+ === "Line Counting"
87
+
88
+ ```python
89
+ from ultralytics import YOLO
90
+ from ultralytics.solutions import heatmap
91
+ import cv2
92
+
93
+ model = YOLO("yolov8n.pt")
94
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
95
+ assert cap.isOpened(), "Error reading video file"
96
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
97
+
98
+ # Video writer
99
+ video_writer = cv2.VideoWriter("heatmap_output.avi",
100
+ cv2.VideoWriter_fourcc(*'mp4v'),
101
+ fps,
102
+ (w, h))
103
+
104
+ line_points = [(20, 400), (1080, 404)] # line for object counting
105
+
106
+ # Init heatmap
107
+ heatmap_obj = heatmap.Heatmap()
108
+ heatmap_obj.set_args(colormap=cv2.COLORMAP_PARULA,
109
+ imw=w,
110
+ imh=h,
111
+ view_img=True,
112
+ shape="circle",
113
+ count_reg_pts=line_points)
114
+
115
+ while cap.isOpened():
116
+ success, im0 = cap.read()
117
+ if not success:
118
+ print("Video frame is empty or video processing has been successfully completed.")
119
+ break
120
+ tracks = model.track(im0, persist=True, show=False)
121
+
122
+ im0 = heatmap_obj.generate_heatmap(im0, tracks)
123
+ video_writer.write(im0)
124
+
125
+ cap.release()
126
+ video_writer.release()
127
+ cv2.destroyAllWindows()
128
+ ```
129
+
130
+ === "Region Counting"
131
+
132
+ ```python
133
+ from ultralytics import YOLO
134
+ from ultralytics.solutions import heatmap
135
+ import cv2
136
+
137
+ model = YOLO("yolov8n.pt")
138
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
139
+ assert cap.isOpened(), "Error reading video file"
140
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
141
+
142
+ # Video writer
143
+ video_writer = cv2.VideoWriter("heatmap_output.avi",
144
+ cv2.VideoWriter_fourcc(*'mp4v'),
145
+ fps,
146
+ (w, h))
147
+
148
+ # Define region points
149
+ region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
150
+
151
+ # Init heatmap
152
+ heatmap_obj = heatmap.Heatmap()
153
+ heatmap_obj.set_args(colormap=cv2.COLORMAP_PARULA,
154
+ imw=w,
155
+ imh=h,
156
+ view_img=True,
157
+ shape="circle",
158
+ count_reg_pts=region_points)
159
+
160
+ while cap.isOpened():
161
+ success, im0 = cap.read()
162
+ if not success:
163
+ print("Video frame is empty or video processing has been successfully completed.")
164
+ break
165
+ tracks = model.track(im0, persist=True, show=False)
166
+
167
+ im0 = heatmap_obj.generate_heatmap(im0, tracks)
168
+ video_writer.write(im0)
169
+
170
+ cap.release()
171
+ video_writer.release()
172
+ cv2.destroyAllWindows()
173
+ ```
174
+
175
+ === "Im0"
176
+
177
+ ```python
178
+ from ultralytics import YOLO
179
+ from ultralytics.solutions import heatmap
180
+ import cv2
181
+
182
+ model = YOLO("yolov8s.pt") # YOLOv8 custom/pretrained model
183
+
184
+ im0 = cv2.imread("path/to/image.png") # path to image file
185
+ h, w = im0.shape[:2] # image height and width
186
+
187
+ # Heatmap Init
188
+ heatmap_obj = heatmap.Heatmap()
189
+ heatmap_obj.set_args(colormap=cv2.COLORMAP_PARULA,
190
+ imw=w,
191
+ imh=h,
192
+ view_img=True,
193
+ shape="circle")
194
+
195
+ results = model.track(im0, persist=True)
196
+ im0 = heatmap_obj.generate_heatmap(im0, tracks=results)
197
+ cv2.imwrite("ultralytics_output.png", im0)
198
+ ```
199
+
200
+ === "Specific Classes"
201
+
202
+ ```python
203
+ from ultralytics import YOLO
204
+ from ultralytics.solutions import heatmap
205
+ import cv2
206
+
207
+ model = YOLO("yolov8n.pt")
208
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
209
+ assert cap.isOpened(), "Error reading video file"
210
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
211
+
212
+ # Video writer
213
+ video_writer = cv2.VideoWriter("heatmap_output.avi",
214
+ cv2.VideoWriter_fourcc(*'mp4v'),
215
+ fps,
216
+ (w, h))
217
+
218
+ classes_for_heatmap = [0, 2] # classes for heatmap
219
+
220
+ # Init heatmap
221
+ heatmap_obj = heatmap.Heatmap()
222
+ heatmap_obj.set_args(colormap=cv2.COLORMAP_PARULA,
223
+ imw=w,
224
+ imh=h,
225
+ view_img=True,
226
+ shape="circle")
227
+
228
+ while cap.isOpened():
229
+ success, im0 = cap.read()
230
+ if not success:
231
+ print("Video frame is empty or video processing has been successfully completed.")
232
+ break
233
+ tracks = model.track(im0, persist=True, show=False,
234
+ classes=classes_for_heatmap)
235
+
236
+ im0 = heatmap_obj.generate_heatmap(im0, tracks)
237
+ video_writer.write(im0)
238
+
239
+ cap.release()
240
+ video_writer.release()
241
+ cv2.destroyAllWindows()
242
+ ```
243
+
244
+ ### Arguments `set_args`
245
+
246
+ | Name | Type | Default | Description |
247
+ |-----------------------|----------------|-------------------|-----------------------------------------------------------|
248
+ | `view_img` | `bool` | `False` | Display the frame with heatmap |
249
+ | `colormap` | `cv2.COLORMAP` | `None` | cv2.COLORMAP for heatmap |
250
+ | `imw` | `int` | `None` | Width of Heatmap |
251
+ | `imh` | `int` | `None` | Height of Heatmap |
252
+ | `heatmap_alpha` | `float` | `0.5` | Heatmap alpha value |
253
+ | `count_reg_pts` | `list` | `None` | Object counting region points |
254
+ | `count_txt_thickness` | `int` | `2` | Count values text size |
255
+ | `count_txt_color` | `RGB Color` | `(0, 0, 0)` | Foreground color for Object counts text |
256
+ | `count_color` | `RGB Color` | `(255, 255, 255)` | Background color for Object counts text |
257
+ | `count_reg_color` | `RGB Color` | `(255, 0, 255)` | Counting region color |
258
+ | `region_thickness` | `int` | `5` | Counting region thickness value |
259
+ | `decay_factor` | `float` | `0.99` | Decay factor for heatmap area removal after specific time |
260
+ | `shape` | `str` | `circle` | Heatmap shape for display "rect" or "circle" supported |
261
+ | `line_dist_thresh` | `int` | `15` | Euclidean Distance threshold for line counter |
262
+
263
+ ### Arguments `model.track`
264
+
265
+ | Name | Type | Default | Description |
266
+ |-----------|---------|----------------|-------------------------------------------------------------|
267
+ | `source` | `im0` | `None` | source directory for images or videos |
268
+ | `persist` | `bool` | `False` | persisting tracks between frames |
269
+ | `tracker` | `str` | `botsort.yaml` | Tracking method 'bytetrack' or 'botsort' |
270
+ | `conf` | `float` | `0.3` | Confidence Threshold |
271
+ | `iou` | `float` | `0.5` | IOU Threshold |
272
+ | `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
273
+
274
+ ### Heatmap COLORMAPs
275
+
276
+ | Colormap Name | Description |
277
+ |---------------------------------|----------------------------------------|
278
+ | `cv::COLORMAP_AUTUMN` | Autumn color map |
279
+ | `cv::COLORMAP_BONE` | Bone color map |
280
+ | `cv::COLORMAP_JET` | Jet color map |
281
+ | `cv::COLORMAP_WINTER` | Winter color map |
282
+ | `cv::COLORMAP_RAINBOW` | Rainbow color map |
283
+ | `cv::COLORMAP_OCEAN` | Ocean color map |
284
+ | `cv::COLORMAP_SUMMER` | Summer color map |
285
+ | `cv::COLORMAP_SPRING` | Spring color map |
286
+ | `cv::COLORMAP_COOL` | Cool color map |
287
+ | `cv::COLORMAP_HSV` | HSV (Hue, Saturation, Value) color map |
288
+ | `cv::COLORMAP_PINK` | Pink color map |
289
+ | `cv::COLORMAP_HOT` | Hot color map |
290
+ | `cv::COLORMAP_PARULA` | Parula color map |
291
+ | `cv::COLORMAP_MAGMA` | Magma color map |
292
+ | `cv::COLORMAP_INFERNO` | Inferno color map |
293
+ | `cv::COLORMAP_PLASMA` | Plasma color map |
294
+ | `cv::COLORMAP_VIRIDIS` | Viridis color map |
295
+ | `cv::COLORMAP_CIVIDIS` | Cividis color map |
296
+ | `cv::COLORMAP_TWILIGHT` | Twilight color map |
297
+ | `cv::COLORMAP_TWILIGHT_SHIFTED` | Shifted Twilight color map |
298
+ | `cv::COLORMAP_TURBO` | Turbo color map |
299
+ | `cv::COLORMAP_DEEPGREEN` | Deep Green color map |
300
+
301
+ These colormaps are commonly used for visualizing data with different color representations.
docs/en/guides/hyperparameter-tuning.md ADDED
@@ -0,0 +1,206 @@
1
+ ---
2
+ comments: true
3
+ description: Dive into hyperparameter tuning in Ultralytics YOLO models. Learn how to optimize performance using the Tuner class and genetic evolution.
4
+ keywords: Ultralytics, YOLO, Hyperparameter Tuning, Tuner Class, Genetic Evolution, Optimization
5
+ ---
6
+
7
+ # Ultralytics YOLO Hyperparameter Tuning Guide
8
+
9
+ ## Introduction
10
+
11
+ Hyperparameter tuning is not just a one-time set-up but an iterative process aimed at optimizing the machine learning model's performance metrics, such as accuracy, precision, and recall. In the context of Ultralytics YOLO, these hyperparameters could range from learning rate to architectural details, such as the number of layers or types of activation functions used.
12
+
13
+ ### What are Hyperparameters?
14
+
15
+ Hyperparameters are high-level, structural settings for the algorithm. They are set prior to the training phase and remain constant during it. Here are some commonly tuned hyperparameters in Ultralytics YOLO:
16
+
17
+ - **Learning Rate** `lr0`: Determines the step size at each iteration while moving towards a minimum in the loss function.
18
+ - **Batch Size** `batch`: Number of images processed simultaneously in a forward pass.
19
+ - **Number of Epochs** `epochs`: An epoch is one complete forward and backward pass of all the training examples.
20
+ - **Architecture Specifics**: Such as channel counts, number of layers, types of activation functions, etc.
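+
+ These hyperparameters map directly to training arguments. As a quick sketch (the values below are arbitrary examples, not recommendations), they can be set explicitly when calling `train()`:
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO('yolov8n.pt')
+
+ # Set a few common hyperparameters explicitly for a training run
+ model.train(data='coco8.yaml', epochs=100, batch=16, lr0=0.01)
+ ```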
21
+
22
+ <p align="center">
23
+ <img width="640" src="https://user-images.githubusercontent.com/26833433/263858934-4f109a2f-82d9-4d08-8bd6-6fd1ff520bcd.png" alt="Hyperparameter Tuning Visual">
24
+ </p>
25
+
26
+ For a full list of augmentation hyperparameters used in YOLOv8 please refer to the [configurations page](../usage/cfg.md#augmentation-settings).
27
+
28
+ ### Genetic Evolution and Mutation
29
+
30
+ Ultralytics YOLO uses genetic algorithms to optimize hyperparameters. Genetic algorithms are inspired by the mechanism of natural selection and genetics.
31
+
32
+ - **Mutation**: In the context of Ultralytics YOLO, mutation helps in locally searching the hyperparameter space by applying small, random changes to existing hyperparameters, producing new candidates for evaluation.
33
+ - **Crossover**: Although crossover is a popular genetic algorithm technique, it is not currently used in Ultralytics YOLO for hyperparameter tuning. The focus is mainly on mutation for generating new hyperparameter sets.
34
+
35
+ ## Preparing for Hyperparameter Tuning
36
+
37
+ Before you begin the tuning process, it's important to:
38
+
39
+ 1. **Identify the Metrics**: Determine the metrics you will use to evaluate the model's performance. This could be AP50, F1-score, or others.
40
+ 2. **Set the Tuning Budget**: Define how much computational resources you're willing to allocate. Hyperparameter tuning can be computationally intensive.
41
+
42
+ ## Steps Involved
43
+
44
+ ### Initialize Hyperparameters
45
+
46
+ Start with a reasonable set of initial hyperparameters. This could either be the default hyperparameters set by Ultralytics YOLO or something based on your domain knowledge or previous experiments.
47
+
48
+ ### Mutate Hyperparameters
49
+
50
+ Use the `_mutate` method to produce a new set of hyperparameters based on the existing set.
51
+
52
+ ### Train Model
53
+
54
+ Training is performed using the mutated set of hyperparameters. The training performance is then assessed.
55
+
56
+ ### Evaluate Model
57
+
58
+ Use metrics like AP50, F1-score, or custom metrics to evaluate the model's performance.
59
+
60
+ ### Log Results
61
+
62
+ It's crucial to log both the performance metrics and the corresponding hyperparameters for future reference.
63
+
64
+ ### Repeat
65
+
66
+ The process is repeated until either the set number of iterations is reached or the performance metric is satisfactory.
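+
+ Conceptually, each iteration mutates the best hyperparameters found so far, trains and scores a model, and keeps the candidate if fitness improves. The toy, self-contained sketch below illustrates this loop; it is not the actual `Tuner` implementation, and the hyperparameter values and fitness function are placeholders:
+
+ ```python
+ import random
+
+ def mutate(hyp, scale=0.2):
+     """Apply small random multiplicative changes to each hyperparameter (toy example)."""
+     return {k: v * random.uniform(1 - scale, 1 + scale) for k, v in hyp.items()}
+
+ def train_and_evaluate(hyp):
+     """Placeholder for training a model with `hyp` and returning a fitness score."""
+     return random.random()  # in practice: train, validate, and compute a weighted metric such as mAP
+
+ best_hyp = {"lr0": 0.01, "momentum": 0.937, "weight_decay": 0.0005}
+ best_fitness = 0.0
+
+ for _ in range(10):
+     candidate = mutate(best_hyp)             # small random changes to the best set so far
+     fitness = train_and_evaluate(candidate)  # score the candidate
+     if fitness > best_fitness:               # keep the candidate only if it improves fitness
+         best_hyp, best_fitness = candidate, fitness
+ ```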
67
+
68
+ ## Usage Example
69
+
70
+ Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyperparameter tuning of YOLOv8n on COCO8 for 30 epochs with an AdamW optimizer, skipping plotting, checkpointing, and validation (except on the final epoch) for faster tuning.
71
+
72
+ !!! Example
73
+
74
+ === "Python"
75
+
76
+ ```python
77
+ from ultralytics import YOLO
78
+
79
+ # Initialize the YOLO model
80
+ model = YOLO('yolov8n.pt')
81
+
82
+ # Tune hyperparameters on COCO8 for 30 epochs
83
+ model.tune(data='coco8.yaml', epochs=30, iterations=300, optimizer='AdamW', plots=False, save=False, val=False)
84
+ ```
85
+
86
+ ## Results
87
+
88
+ After you've successfully completed the hyperparameter tuning process, you will obtain several files and directories that encapsulate the results of the tuning. The following describes each:
89
+
90
+ ### File Structure
91
+
92
+ Here's what the directory structure of the results will look like. Training directories like `train1/` contain individual tuning iterations, i.e. one model trained with one set of hyperparameters. The `tune/` directory contains tuning results from all the individual model trainings:
93
+
94
+ ```plaintext
95
+ runs/
96
+ └── detect/
97
+ ├── train1/
98
+ ├── train2/
99
+ ├── ...
100
+ └── tune/
101
+ ├── best_hyperparameters.yaml
102
+ ├── best_fitness.png
103
+ ├── tune_results.csv
104
+ ├── tune_scatter_plots.png
105
+ └── weights/
106
+ ├── last.pt
107
+ └── best.pt
108
+ ```
109
+
110
+ ### File Descriptions
111
+
112
+ #### best_hyperparameters.yaml
113
+
114
+ This YAML file contains the best-performing hyperparameters found during the tuning process. You can use this file to initialize future trainings with these optimized settings.
115
+
116
+ - **Format**: YAML
117
+ - **Usage**: Hyperparameter results
118
+ - **Example**:
119
+ ```yaml
120
+ # 558/900 iterations complete ✅ (45536.81s)
121
+ # Results saved to /usr/src/ultralytics/runs/detect/tune
122
+ # Best fitness=0.64297 observed at iteration 498
123
+ # Best fitness metrics are {'metrics/precision(B)': 0.87247, 'metrics/recall(B)': 0.71387, 'metrics/mAP50(B)': 0.79106, 'metrics/mAP50-95(B)': 0.62651, 'val/box_loss': 2.79884, 'val/cls_loss': 2.72386, 'val/dfl_loss': 0.68503, 'fitness': 0.64297}
124
+ # Best fitness model is /usr/src/ultralytics/runs/detect/train498
125
+ # Best fitness hyperparameters are printed below.
126
+
127
+ lr0: 0.00269
128
+ lrf: 0.00288
129
+ momentum: 0.73375
130
+ weight_decay: 0.00015
131
+ warmup_epochs: 1.22935
132
+ warmup_momentum: 0.1525
133
+ box: 18.27875
134
+ cls: 1.32899
135
+ dfl: 0.56016
136
+ hsv_h: 0.01148
137
+ hsv_s: 0.53554
138
+ hsv_v: 0.13636
139
+ degrees: 0.0
140
+ translate: 0.12431
141
+ scale: 0.07643
142
+ shear: 0.0
143
+ perspective: 0.0
144
+ flipud: 0.0
145
+ fliplr: 0.08631
146
+ mosaic: 0.42551
147
+ mixup: 0.0
148
+ copy_paste: 0.0
149
+ ```
150
+
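+
+ One way to reuse these tuned values is to load the YAML file and pass its contents as training overrides. A minimal sketch (the file path below is an example and depends on your run directory):
+
+ ```python
+ import yaml
+ from ultralytics import YOLO
+
+ # Load the tuned hyperparameters (adjust the path to your own tune run)
+ with open("runs/detect/tune/best_hyperparameters.yaml") as f:
+     best_hyp = yaml.safe_load(f)
+
+ # Start a new training run using the tuned values as overrides
+ model = YOLO("yolov8n.pt")
+ model.train(data="coco8.yaml", epochs=100, **best_hyp)
+ ```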
151
+ #### best_fitness.png
152
+
153
+ This is a plot displaying fitness (typically a performance metric like AP50) against the number of iterations. It helps you visualize how well the genetic algorithm performed over time.
154
+
155
+ - **Format**: PNG
156
+ - **Usage**: Performance visualization
157
+
158
+ <p align="center">
159
+ <img width="640" src="https://user-images.githubusercontent.com/26833433/266847423-9d0aea13-d5c4-4771-b06e-0b817a498260.png" alt="Hyperparameter Tuning Fitness vs Iteration">
160
+ </p>
161
+
162
+ #### tune_results.csv
163
+
164
+ A CSV file containing detailed results of each iteration during the tuning. Each row in the file represents one iteration, and it includes metrics like fitness score, precision, recall, as well as the hyperparameters used.
165
+
166
+ - **Format**: CSV
167
+ - **Usage**: Per-iteration results tracking.
168
+ - **Example**:
169
+ ```csv
170
+ fitness,lr0,lrf,momentum,weight_decay,warmup_epochs,warmup_momentum,box,cls,dfl,hsv_h,hsv_s,hsv_v,degrees,translate,scale,shear,perspective,flipud,fliplr,mosaic,mixup,copy_paste
171
+ 0.05021,0.01,0.01,0.937,0.0005,3.0,0.8,7.5,0.5,1.5,0.015,0.7,0.4,0.0,0.1,0.5,0.0,0.0,0.0,0.5,1.0,0.0,0.0
172
+ 0.07217,0.01003,0.00967,0.93897,0.00049,2.79757,0.81075,7.5,0.50746,1.44826,0.01503,0.72948,0.40658,0.0,0.0987,0.4922,0.0,0.0,0.0,0.49729,1.0,0.0,0.0
173
+ 0.06584,0.01003,0.00855,0.91009,0.00073,3.42176,0.95,8.64301,0.54594,1.72261,0.01503,0.59179,0.40658,0.0,0.0987,0.46955,0.0,0.0,0.0,0.49729,0.80187,0.0,0.0
174
+ ```
175
+
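+
+ To explore these results programmatically, one option (assuming `pandas` is installed) is to load the CSV and sort by fitness:
+
+ ```python
+ import pandas as pd
+
+ # Load the per-iteration tuning results (adjust the path to your own tune run)
+ df = pd.read_csv("runs/detect/tune/tune_results.csv")
+
+ # Show the iterations with the highest fitness first
+ print(df.sort_values("fitness", ascending=False).head())
+ ```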
176
+ #### tune_scatter_plots.png
177
+
178
+ This file contains scatter plots generated from `tune_results.csv`, helping you visualize relationships between different hyperparameters and performance metrics. Note that hyperparameters initialized to 0 will not be tuned, such as `degrees` and `shear` below.
179
+
180
+ - **Format**: PNG
181
+ - **Usage**: Exploratory data analysis
182
+
183
+ <p align="center">
184
+ <img width="1000" src="https://user-images.githubusercontent.com/26833433/266847488-ec382f3d-79bc-4087-a0e0-42fb8b62cad2.png" alt="Hyperparameter Tuning Scatter Plots">
185
+ </p>
186
+
187
+ #### weights/
188
+
189
+ This directory contains the saved PyTorch models for the last and the best iterations during the hyperparameter tuning process.
190
+
191
+ - **`last.pt`**: The `last.pt` weights are from the last epoch of training.
192
+ - **`best.pt`**: The `best.pt` weights are from the iteration that achieved the best fitness score.
193
+
194
+ Using these results, you can make more informed decisions for your future model trainings and analyses. Feel free to consult these artifacts to understand how well your model performed and how you might improve it further.
195
+
196
+ ## Conclusion
197
+
198
+ The hyperparameter tuning process in Ultralytics YOLO is simplified yet powerful, thanks to its genetic algorithm-based approach focused on mutation. Following the steps outlined in this guide will assist you in systematically tuning your model to achieve better performance.
199
+
200
+ ### Further Reading
201
+
202
+ 1. [Hyperparameter Optimization in Wikipedia](https://en.wikipedia.org/wiki/Hyperparameter_optimization)
203
+ 2. [YOLOv5 Hyperparameter Evolution Guide](../yolov5/tutorials/hyperparameter_evolution.md)
204
+ 3. [Efficient Hyperparameter Tuning with Ray Tune and YOLOv8](../integrations/ray-tune.md)
205
+
206
+ For deeper insights, you can explore the `Tuner` class source code and accompanying documentation. Should you have any questions, feature requests, or need further assistance, feel free to reach out to us on [GitHub](https://github.com/ultralytics/ultralytics/issues/new/choose) or [Discord](https://ultralytics.com/discord).
docs/en/guides/index.md ADDED
@@ -0,0 +1,65 @@
1
+ ---
2
+ comments: true
3
+ description: In-depth exploration of Ultralytics' YOLO. Learn about the YOLO object detection model, how to train it on custom data, multi-GPU training, exporting, predicting, deploying, and more.
4
+ keywords: Ultralytics, YOLO, Deep Learning, Object detection, PyTorch, Tutorial, Multi-GPU training, Custom data training, SAHI, Tiled Inference
5
+ ---
6
+
7
+ # Comprehensive Tutorials to Ultralytics YOLO
8
+
9
+ Welcome to the Ultralytics' YOLO 🚀 Guides! Our comprehensive tutorials cover various aspects of the YOLO object detection model, ranging from training and prediction to deployment. Built on PyTorch, YOLO stands out for its exceptional speed and accuracy in real-time object detection tasks.
10
+
11
+ Whether you're a beginner or an expert in deep learning, our tutorials offer valuable insights into the implementation and optimization of YOLO for your computer vision projects. Let's dive in!
12
+
13
+ <p align="center">
14
+ <br>
15
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/96NkhsV-W1U"
16
+ title="YouTube video player" frameborder="0"
17
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
18
+ allowfullscreen>
19
+ </iframe>
20
+ <br>
21
+ <strong>Watch:</strong> Ultralytics YOLOv8 Guides Overview
22
+ </p>
23
+
24
+ ## Guides
25
+
26
+ Here's a compilation of in-depth guides to help you master different aspects of Ultralytics YOLO.
27
+
28
+ - [YOLO Common Issues](yolo-common-issues.md) ⭐ RECOMMENDED: Practical solutions and troubleshooting tips to the most frequently encountered issues when working with Ultralytics YOLO models.
29
+ - [YOLO Performance Metrics](yolo-performance-metrics.md) ⭐ ESSENTIAL: Understand the key metrics like mAP, IoU, and F1 score used to evaluate the performance of your YOLO models. Includes practical examples and tips on how to improve detection accuracy and speed.
30
+ - [Model Deployment Options](model-deployment-options.md): Overview of YOLO model deployment formats like ONNX, OpenVINO, and TensorRT, with pros and cons for each to inform your deployment strategy.
31
+ - [K-Fold Cross Validation](kfold-cross-validation.md) 🚀 NEW: Learn how to improve model generalization using K-Fold cross-validation technique.
32
+ - [Hyperparameter Tuning](hyperparameter-tuning.md) 🚀 NEW: Discover how to optimize your YOLO models by fine-tuning hyperparameters using the Tuner class and genetic evolution algorithms.
33
+ - [SAHI Tiled Inference](sahi-tiled-inference.md) 🚀 NEW: Comprehensive guide on leveraging SAHI's sliced inference capabilities with YOLOv8 for object detection in high-resolution images.
34
+ - [AzureML Quickstart](azureml-quickstart.md) 🚀 NEW: Get up and running with Ultralytics YOLO models on Microsoft's Azure Machine Learning platform. Learn how to train, deploy, and scale your object detection projects in the cloud.
35
+ - [Conda Quickstart](conda-quickstart.md) 🚀 NEW: Step-by-step guide to setting up a [Conda](https://anaconda.org/conda-forge/ultralytics) environment for Ultralytics. Learn how to install and start using the Ultralytics package efficiently with Conda.
36
+ - [Docker Quickstart](docker-quickstart.md) 🚀 NEW: Complete guide to setting up and using Ultralytics YOLO models with [Docker](https://hub.docker.com/r/ultralytics/ultralytics). Learn how to install Docker, manage GPU support, and run YOLO models in isolated containers for consistent development and deployment.
37
+ - [Raspberry Pi](raspberry-pi.md) 🚀 NEW: Quickstart tutorial to run YOLO models on the latest Raspberry Pi hardware.
38
+ - [Triton Inference Server Integration](triton-inference-server.md) 🚀 NEW: Dive into the integration of Ultralytics YOLOv8 with NVIDIA's Triton Inference Server for scalable and efficient deep learning inference deployments.
39
+ - [YOLO Thread-Safe Inference](yolo-thread-safe-inference.md) 🚀 NEW: Guidelines for performing inference with YOLO models in a thread-safe manner. Learn the importance of thread safety and best practices to prevent race conditions and ensure consistent predictions.
40
+ - [Isolating Segmentation Objects](isolating-segmentation-objects.md) 🚀 NEW: Step-by-step recipe and explanation on how to extract and/or isolate objects from images using Ultralytics Segmentation.
41
+ - [Edge TPU on Raspberry Pi](coral-edge-tpu-on-raspberry-pi.md): [Google Edge TPU](https://coral.ai/products/accelerator) accelerates YOLO inference on [Raspberry Pi](https://www.raspberrypi.com/).
42
+ - [View Inference Images in a Terminal](view-results-in-terminal.md): Use VSCode's integrated terminal to view inference results when using Remote Tunnel or SSH sessions.
43
+ - [OpenVINO Latency vs Throughput Modes](optimizing-openvino-latency-vs-throughput-modes.md) - Learn latency and throughput optimization techniques for peak YOLO inference performance.
44
+
45
+ ## Real-World Projects
46
+
47
+ - [Object Counting](object-counting.md) 🚀 NEW: Explore the process of real-time object counting with Ultralytics YOLOv8 and acquire the knowledge to effectively count objects in a live video stream.
48
+ - [Object Cropping](object-cropping.md) 🚀 NEW: Explore object cropping using YOLOv8 for precise extraction of objects from images and videos.
49
+ - [Object Blurring](object-blurring.md) 🚀 NEW: Apply object blurring with YOLOv8 for privacy protection in image and video processing.
50
+ - [Workouts Monitoring](workouts-monitoring.md) 🚀 NEW: Discover the comprehensive approach to monitoring workouts with Ultralytics YOLOv8. Acquire the skills and insights necessary to effectively use YOLOv8 for tracking and analyzing various aspects of fitness routines in real time.
51
+ - [Objects Counting in Regions](region-counting.md) 🚀 NEW: Explore counting objects in specific regions with Ultralytics YOLOv8 for precise and efficient object detection in varied areas.
52
+ - [Security Alarm System](security-alarm-system.md) 🚀 NEW: Discover the process of creating a security alarm system with Ultralytics YOLOv8. This system triggers alerts upon detecting new objects in the frame. Subsequently, you can customize the code to align with your specific use case.
53
+ - [Heatmaps](heatmaps.md) 🚀 NEW: Elevate your understanding of data with our Detection Heatmaps! These intuitive visual tools use vibrant color gradients to vividly illustrate the intensity of data values across a matrix. Essential in computer vision, heatmaps are skillfully designed to highlight areas of interest, providing an immediate, impactful way to interpret spatial information.
54
+ - [Instance Segmentation with Object Tracking](instance-segmentation-and-tracking.md) 🚀 NEW: Explore our feature on [Object Segmentation](https://docs.ultralytics.com/tasks/segment/) in Bounding Boxes Shape, providing a visual representation of precise object boundaries for enhanced understanding and analysis.
55
+ - [VisionEye View Objects Mapping](vision-eye.md) 🚀 NEW: This feature enables computers to discern and focus on specific objects, much like the way the human eye observes details from a particular viewpoint.
56
+ - [Speed Estimation](speed-estimation.md) 🚀 NEW: Speed estimation in computer vision relies on analyzing object motion through techniques like [object tracking](https://docs.ultralytics.com/modes/track/), crucial for applications like autonomous vehicles and traffic monitoring.
57
+ - [Distance Calculation](distance-calculation.md) 🚀 NEW: Distance calculation, which involves measuring the separation between two objects within a defined space, is a crucial aspect of many computer vision applications. In the context of Ultralytics YOLOv8, the distance between user-highlighted bounding boxes is determined from their centroids.
58
+
59
+ ## Contribute to Our Guides
60
+
61
+ We welcome contributions from the community! If you've mastered a particular aspect of Ultralytics YOLO that's not yet covered in our guides, we encourage you to share your expertise. Writing a guide is a great way to give back to the community and help us make our documentation more comprehensive and user-friendly.
62
+
63
+ To get started, please read our [Contributing Guide](../help/contributing.md) for guidelines on how to open up a Pull Request (PR) 🛠️. We look forward to your contributions!
64
+
65
+ Let's work together to make the Ultralytics YOLO ecosystem more robust and versatile 🙏!
docs/en/guides/instance-segmentation-and-tracking.md ADDED
@@ -0,0 +1,140 @@
1
+ ---
2
+ comments: true
3
+ description: Instance Segmentation with Object Tracking using Ultralytics YOLOv8
4
+ keywords: Ultralytics, YOLOv8, Instance Segmentation, Object Detection, Object Tracking, Bounding Box, Computer Vision, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # Instance Segmentation and Tracking using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is Instance Segmentation?
10
+
11
+ [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) instance segmentation involves identifying and outlining individual objects in an image, providing a detailed understanding of spatial distribution. Unlike semantic segmentation, it uniquely labels and precisely delineates each object, crucial for tasks like object detection and medical imaging.
12
+
13
+ There are two types of instance segmentation tracking available in the Ultralytics package:
14
+
15
+ - **Instance Segmentation with Class Objects:** Each class object is assigned a unique color for clear visual separation.
16
+
17
+ - **Instance Segmentation with Object Tracks:** Every track is represented by a distinct color, facilitating easy identification and tracking.
18
+
19
+ <p align="center">
20
+ <br>
21
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/75G_S1Ngji8"
22
+ title="YouTube video player" frameborder="0"
23
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
24
+ allowfullscreen>
25
+ </iframe>
26
+ <br>
27
+ <strong>Watch:</strong> Instance Segmentation with Object Tracking using Ultralytics YOLOv8
28
+ </p>
29
+
30
+ ## Samples
31
+
32
+ | Instance Segmentation | Instance Segmentation + Object Tracking |
33
+ |:---------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------:|
34
+ | ![Ultralytics Instance Segmentation](https://github.com/RizwanMunawar/ultralytics/assets/62513924/d4ad3499-1f33-4871-8fbc-1be0b2643aa2) | ![Ultralytics Instance Segmentation with Object Tracking](https://github.com/RizwanMunawar/ultralytics/assets/62513924/2e5c38cc-fd5c-4145-9682-fa94ae2010a0) |
35
+ | Ultralytics Instance Segmentation 😍 | Ultralytics Instance Segmentation with Object Tracking 🔥 |
36
+
37
+ !!! Example "Instance Segmentation and Tracking"
38
+
39
+ === "Instance Segmentation"
40
+
41
+ ```python
42
+ import cv2
43
+ from ultralytics import YOLO
44
+ from ultralytics.utils.plotting import Annotator, colors
45
+
46
+ model = YOLO("yolov8n-seg.pt") # segmentation model
47
+ names = model.model.names
48
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
49
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
50
+
51
+ out = cv2.VideoWriter('instance-segmentation.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
52
+
53
+ while True:
54
+ ret, im0 = cap.read()
55
+ if not ret:
56
+ print("Video frame is empty or video processing has been successfully completed.")
57
+ break
58
+
59
+ results = model.predict(im0)
60
+ annotator = Annotator(im0, line_width=2)
61
+
62
+ if results[0].masks is not None:
63
+ clss = results[0].boxes.cls.cpu().tolist()
64
+ masks = results[0].masks.xy
65
+ for mask, cls in zip(masks, clss):
66
+ annotator.seg_bbox(mask=mask,
67
+ mask_color=colors(int(cls), True),
68
+ det_label=names[int(cls)])
69
+
70
+ out.write(im0)
71
+ cv2.imshow("instance-segmentation", im0)
72
+
73
+ if cv2.waitKey(1) & 0xFF == ord('q'):
74
+ break
75
+
76
+ out.release()
77
+ cap.release()
78
+ cv2.destroyAllWindows()
79
+
80
+ ```
81
+
82
+ === "Instance Segmentation with Object Tracking"
83
+
84
+ ```python
85
+ import cv2
86
+ from ultralytics import YOLO
87
+ from ultralytics.utils.plotting import Annotator, colors
88
+
89
+ from collections import defaultdict
90
+
91
+ track_history = defaultdict(lambda: [])
92
+
93
+ model = YOLO("yolov8n-seg.pt") # segmentation model
94
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
95
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
96
+
97
+ out = cv2.VideoWriter('instance-segmentation-object-tracking.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
98
+
99
+ while True:
100
+ ret, im0 = cap.read()
101
+ if not ret:
102
+ print("Video frame is empty or video processing has been successfully completed.")
103
+ break
104
+
105
+ annotator = Annotator(im0, line_width=2)
106
+
107
+ results = model.track(im0, persist=True)
108
+
109
+ if results[0].boxes.id is not None and results[0].masks is not None:
110
+ masks = results[0].masks.xy
111
+ track_ids = results[0].boxes.id.int().cpu().tolist()
112
+
113
+ for mask, track_id in zip(masks, track_ids):
114
+ annotator.seg_bbox(mask=mask,
115
+ mask_color=colors(track_id, True),
116
+ track_label=str(track_id))
117
+
118
+ out.write(im0)
119
+ cv2.imshow("instance-segmentation-object-tracking", im0)
120
+
121
+ if cv2.waitKey(1) & 0xFF == ord('q'):
122
+ break
123
+
124
+ out.release()
125
+ cap.release()
126
+ cv2.destroyAllWindows()
127
+ ```
128
+
129
+ ### `seg_bbox` Arguments
130
+
131
+ | Name | Type | Default | Description |
132
+ |---------------|---------|-----------------|----------------------------------------|
133
+ | `mask` | `array` | `None` | Segmentation mask coordinates |
134
+ | `mask_color` | `tuple` | `(255, 0, 255)` | Mask color for every segmented box |
135
+ | `det_label` | `str` | `None` | Label for segmented object |
136
+ | `track_label` | `str` | `None` | Label for segmented and tracked object |
137
+
138
+ ## Note
139
+
140
+ For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below.
docs/en/guides/isolating-segmentation-objects.md ADDED
@@ -0,0 +1,325 @@
1
+ ---
2
+ comments: true
3
+ description: A concise guide on isolating segmented objects using Ultralytics.
4
+ keywords: Ultralytics, YOLO, segmentation, Python, object detection, inference, dataset, prediction, instance segmentation, contours, binary mask, object mask, image processing
5
+ ---
6
+
7
+ # Isolating Segmentation Objects
8
+
9
+ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirable to extract the isolated objects from the inference results. This guide provides a generic recipe on how to accomplish this using the Ultralytics [Predict Mode](../modes/predict.md).
10
+
11
+ <p align="center">
12
+ <img src="https://github.com/ultralytics/ultralytics/assets/62214284/1787d76b-ad5f-43f9-a39c-d45c9157f38a" alt="Example Isolated Object Segmentation">
13
+ </p>
14
+
15
+ ## Recipe Walk Through
16
+
17
+ 1. Begin with the necessary imports
18
+
19
+ ```python
20
+ from pathlib import Path
21
+
22
+ import cv2
23
+ import numpy as np
24
+ from ultralytics import YOLO
25
+ ```
26
+
27
+ ???+ tip "Ultralytics Install"
28
+
29
+ See the Ultralytics [Quickstart](../quickstart.md/#install-ultralytics) Installation section for a quick walkthrough on installing the required libraries.
30
+
31
+ ***
32
+
33
+ 2. Load a model and run `predict()` method on a source.
34
+
35
+ ```python
36
+ from ultralytics import YOLO
37
+
38
+ # Load a model
39
+ model = YOLO('yolov8n-seg.pt')
40
+
41
+ # Run inference
42
+     res = model.predict()
43
+ ```
44
+
45
+ !!! question "No Prediction Arguments?"
46
+
47
+ Without specifying a source, the example images from the library will be used:
48
+
49
+ ```
50
+ 'ultralytics/assets/bus.jpg'
51
+ 'ultralytics/assets/zidane.jpg'
52
+ ```
53
+
54
+ This is helpful for rapid testing with the `predict()` method.
55
+
56
+     For additional information about Segmentation Models, visit the [Segment Task](../tasks/segment.md#models) page. To learn more about the `predict()` method, see the [Predict Mode](../modes/predict.md) section of the documentation.
57
+
58
+ ***
59
+
60
+ 3. Now iterate over the results and the contours. For workflows that want to save an image to file, the source image `base-name` and the detection `class-label` are retrieved for later use (optional).
61
+
62
+ ```{ .py .annotate }
63
+ # (2) Iterate detection results (helpful for multiple images)
64
+ for r in res:
65
+ img = np.copy(r.orig_img)
66
+ img_name = Path(r.path).stem # source image base-name
67
+
68
+ # Iterate each object contour (multiple detections)
69
+         for ci, c in enumerate(r):
70
+ # (1) Get detection class name
71
+ label = c.names[c.boxes.cls.tolist().pop()]
72
+
73
+ ```
74
+
75
+ 1. To learn more about working with detection results, see [Boxes Section for Predict Mode](../modes/predict.md#boxes).
76
+ 2. To learn more about `predict()` results see [Working with Results for Predict Mode](../modes/predict.md#working-with-results)
77
+
78
+ ??? info "For-Loop"
79
+
80
+ A single image will only iterate the first loop once. A single image with only a single detection will iterate each loop _only_ once.
81
+
82
+ ***
83
+
84
+ 4. Start with generating a binary mask from the source image and then draw a filled contour onto the mask. This will allow the object to be isolated from the other parts of the image. An example from `bus.jpg` for one of the detected `person` class objects is shown on the right.
85
+
86
+ ![Binary Mask Image](https://github.com/ultralytics/ultralytics/assets/62214284/59bce684-fdda-4b17-8104-0b4b51149aca){ width="240", align="right" }
87
+
88
+ ```{ .py .annotate }
89
+ # Create binary mask
90
+ b_mask = np.zeros(img.shape[:2], np.uint8)
91
+
92
+ # (1) Extract contour result
93
+ contour = c.masks.xy.pop()
94
+ # (2) Changing the type
95
+ contour = contour.astype(np.int32)
96
+ # (3) Reshaping
97
+ contour = contour.reshape(-1, 1, 2)
98
+
99
+
100
+ # Draw contour onto mask
101
+ _ = cv2.drawContours(b_mask,
102
+ [contour],
103
+ -1,
104
+ (255, 255, 255),
105
+ cv2.FILLED)
106
+
107
+ ```
108
+
109
+ 1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md#masks).
110
+
111
+ 2. Here, the values are cast into `np.int32` for compatibility with `drawContours()` function from OpenCV.
112
+
113
+     3. The OpenCV `drawContours()` function expects contours to have a shape of `[N, 1, 2]`; expand the section below for more details.
114
+
115
+ <details>
116
+ <summary> Expand to understand what is happening when defining the <code>contour</code> variable.</summary>
117
+ <p>
118
+
119
+ - `c.masks.xy` :: Provides the coordinates of the mask contour points in the format `(x, y)`. For more details, refer to the [Masks Section from Predict Mode](../modes/predict.md#masks).
120
+
121
+ - `.pop()` :: As `masks.xy` is a list containing a single element, this element is extracted using the `pop()` method.
122
+
123
+ - `.astype(np.int32)` :: Using `masks.xy` will return with a data type of `float32`, but this won't be compatible with the OpenCV `drawContours()` function, so this will change the data type to `int32` for compatibility.
124
+
125
+ - `.reshape(-1, 1, 2)` :: Reformats the data into the required shape of `[N, 1, 2]` where `N` is the number of contour points, with each point represented by a single entry `1`, and the entry is composed of `2` values. The `-1` denotes that the number of values along this dimension is flexible.
126
+
127
+ </details>
128
+ <p></p>
129
+ <details>
130
+ <summary> Expand for an explanation of the <code>drawContours()</code> configuration.</summary>
131
+ <p>
132
+
133
+ - Encapsulating the `contour` variable within square brackets, `[contour]`, was found to effectively generate the desired contour mask during testing.
134
+
135
+ - The value `-1` specified for the `drawContours()` parameter instructs the function to draw all contours present in the image.
136
+
137
+ - The `tuple` `(255, 255, 255)` represents the color white, which is the desired color for drawing the contour in this binary mask.
138
+
139
+ - The addition of `cv2.FILLED` will color all pixels enclosed by the contour boundary the same, in this case, all enclosed pixels will be white.
140
+
141
+ - See [OpenCV Documentation on `drawContours()`](https://docs.opencv.org/4.8.0/d6/d6e/group__imgproc__draw.html#ga746c0625f1781f1ffc9056259103edbc) for more information.
142
+
143
+ </details>
144
+ <p></p>
145
+
146
+ ***
147
+
148
+ 5. Next, there are two options for how to move forward with the image from this point, and a subsequent option for each.
149
+
150
+ ### Object Isolation Options
151
+
152
+ !!! example ""
153
+
154
+ === "Black Background Pixels"
155
+
156
+ ```py
157
+ # Create 3-channel mask
158
+ mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
159
+
160
+ # Isolate object with binary mask
161
+ isolated = cv2.bitwise_and(mask3ch, img)
162
+
163
+ ```
164
+
165
+ ??? question "How does this work?"
166
+
167
+             - First, the binary mask is converted from a single-channel image to a three-channel image. This conversion is necessary for the subsequent step where the mask and the original image are combined. Both images must have the same number of channels to be compatible with the blending operation.
168
+
169
+ - The original image and the three-channel binary mask are merged using the OpenCV function `bitwise_and()`. This operation retains <u>only</u> pixel values that are greater than zero `(> 0)` from both images. Since the mask pixels are greater than zero `(> 0)` <u>only</u> within the contour region, the pixels remaining from the original image are those that overlap with the contour.
170
+
171
+ ### Isolate with Black Pixels: Sub-options
172
+
173
+ ??? info "Full-size Image"
174
+
175
+             No additional steps are required if keeping the full-size image.
176
+
177
+ <figure markdown>
178
+ ![Example Full size Isolated Object Image Black Background](https://github.com/ultralytics/ultralytics/assets/62214284/845c00d0-52a6-4b1e-8010-4ba73e011b99){ width=240 }
179
+ <figcaption>Example full-size output</figcaption>
180
+ </figure>
181
+
182
+ ??? info "Cropped object Image"
183
+
184
+             Additional steps are required to crop the image to include only the object region.
185
+
186
+ ![Example Crop Isolated Object Image Black Background](https://github.com/ultralytics/ultralytics/assets/62214284/103dbf90-c169-4f77-b791-76cdf09c6f22){ align="right" }
187
+ ``` { .py .annotate }
188
+ # (1) Bounding box coordinates
189
+ x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32)
190
+ # Crop image to object region
191
+ iso_crop = isolated[y1:y2, x1:x2]
192
+
193
+ ```
194
+
195
+ 1. For more information on bounding box results, see [Boxes Section from Predict Mode](../modes/predict.md/#boxes)
196
+
197
+ ??? question "What does this code do?"
198
+
199
+ - The `c.boxes.xyxy.cpu().numpy()` call retrieves the bounding boxes as a NumPy array in the `xyxy` format, where `xmin`, `ymin`, `xmax`, and `ymax` represent the coordinates of the bounding box rectangle. See [Boxes Section from Predict Mode](../modes/predict.md/#boxes) for more details.
200
+
201
+ - The `squeeze()` operation removes any unnecessary dimensions from the NumPy array, ensuring it has the expected shape.
202
+
203
+ - Converting the coordinate values using `.astype(np.int32)` changes the box coordinates data type from `float32` to `int32`, making them compatible for image cropping using index slices.
204
+
205
+ - Finally, the bounding box region is cropped from the image using index slicing. The bounds are defined by the `[ymin:ymax, xmin:xmax]` coordinates of the detection bounding box.
206
+
207
+ === "Transparent Background Pixels"
208
+
209
+ ```py
210
+ # Isolate object with transparent background (when saved as PNG)
211
+ isolated = np.dstack([img, b_mask])
212
+
213
+ ```
214
+
215
+ ??? question "How does this work?"
216
+
217
+             - Using the NumPy `dstack()` function (array stacking along the depth axis) together with the generated binary mask creates an image with four channels. This allows all pixels outside of the object contour to be transparent when saving as a `PNG` file.
218
+
219
+ ### Isolate with Transparent Pixels: Sub-options
220
+
221
+ ??? info "Full-size Image"
222
+
223
+             No additional steps are required if keeping the full-size image.
224
+
225
+ <figure markdown>
226
+ ![Example Full size Isolated Object Image No Background](https://github.com/ultralytics/ultralytics/assets/62214284/b1043ee0-369a-4019-941a-9447a9771042){ width=240 }
227
+ <figcaption>Example full-size output + transparent background</figcaption>
228
+ </figure>
229
+
230
+ ??? info "Cropped object Image"
231
+
232
+             Additional steps are required to crop the image to include only the object region.
233
+
234
+ ![Example Crop Isolated Object Image No Background](https://github.com/ultralytics/ultralytics/assets/62214284/5910244f-d1e1-44af-af7f-6dea4c688da8){ align="right" }
235
+ ``` { .py .annotate }
236
+ # (1) Bounding box coordinates
237
+ x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32)
238
+ # Crop image to object region
239
+ iso_crop = isolated[y1:y2, x1:x2]
240
+
241
+ ```
242
+
243
+ 1. For more information on bounding box results, see [Boxes Section from Predict Mode](../modes/predict.md/#boxes)
244
+
245
+ ??? question "What does this code do?"
246
+
247
+ - When using `c.boxes.xyxy.cpu().numpy()`, the bounding boxes are returned as a NumPy array, using the `xyxy` box coordinates format, which correspond to the points `xmin, ymin, xmax, ymax` for the bounding box (rectangle), see [Boxes Section from Predict Mode](../modes/predict.md/#boxes) for more information.
248
+
249
+ - Adding `squeeze()` ensures that any extraneous dimensions are removed from the NumPy array.
250
+
251
+ - Converting the coordinate values using `.astype(np.int32)` changes the box coordinates data type from `float32` to `int32` which will be compatible when cropping the image using index slices.
252
+
253
+ - Finally the image region for the bounding box is cropped using index slicing, where the bounds are set using the `[ymin:ymax, xmin:xmax]` coordinates of the detection bounding box.
254
+
255
+ ??? question "What if I want the cropped object **including** the background?"
256
+
257
+             This is a built-in feature of the Ultralytics library. See the `save_crop` argument under [Predict Mode Inference Arguments](../modes/predict.md/#inference-arguments) for details.
258
+
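+             A minimal sketch of that built-in option (the image path below is a placeholder):
+ 
+             ```py
+             from ultralytics import YOLO
+ 
+             model = YOLO('yolov8n-seg.pt')
+             # Crops that keep their original background are saved under the run's output directory
+             model.predict('path/to/img.jpg', save_crop=True)
+             ```
+ 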
259
+ ***
260
+
261
+ 6. <u>What to do next is entirely left to you as the developer.</u> A basic example of one possible next step (saving the image to file for future use) is shown.
262
+
263
+ - **NOTE:** this step is optional and can be skipped if not required for your specific use case.
264
+
265
+ ??? example "Example Final Step"
266
+
267
+ ```py
268
+ # Save isolated object to file
269
+ _ = cv2.imwrite(f'{img_name}_{label}-{ci}.png', iso_crop)
270
+ ```
271
+
272
+ - In this example, the `img_name` is the base-name of the source image file, `label` is the detected class-name, and `ci` is the index of the object detection (in case of multiple instances with the same class name).
273
+
274
+ ## Full Example code
275
+
276
+ Here, all steps from the previous section are combined into a single block of code. For repeated use, it would be optimal to define a function to do some or all commands contained in the `for`-loops; a minimal sketch of one such function is included after the annotated listing below.
277
+
278
+ ```{ .py .annotate }
279
+ from pathlib import Path
280
+
281
+ import cv2
282
+ import numpy as np
283
+ from ultralytics import YOLO
284
+
285
+ m = YOLO('yolov8n-seg.pt')#(4)!
286
+ res = m.predict()#(3)!
287
+
288
+ # iterate detection results (5)
289
+ for r in res:
290
+ img = np.copy(r.orig_img)
291
+ img_name = Path(r.path).stem
292
+
293
+ # iterate each object contour (6)
294
+     for ci, c in enumerate(r):
295
+ label = c.names[c.boxes.cls.tolist().pop()]
296
+
297
+ b_mask = np.zeros(img.shape[:2], np.uint8)
298
+
299
+ # Create contour mask (1)
300
+ contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)
301
+ _ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
302
+
303
+ # Choose one:
304
+
305
+ # OPTION-1: Isolate object with black background
306
+ mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
307
+ isolated = cv2.bitwise_and(mask3ch, img)
308
+
309
+ # OPTION-2: Isolate object with transparent background (when saved as PNG)
310
+ isolated = np.dstack([img, b_mask])
311
+
312
+ # OPTIONAL: detection crop (from either OPT1 or OPT2)
313
+ x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32)
314
+ iso_crop = isolated[y1:y2, x1:x2]
315
+
316
+ # TODO your actions go here (2)
317
+
318
+ ```
319
+
320
+ 1. The line populating `contour` is combined into a single line here, where it was split to multiple above.
321
+ 2. {==What goes here is up to you!==}
322
+ 3. See [Predict Mode](../modes/predict.md) for additional information.
323
+ 4. See [Segment Task](../tasks/segment.md#models) for more information.
324
+ 5. Learn more about [Working with Results](../modes/predict.md#working-with-results)
325
+ 6. Learn more about [Segmentation Mask Results](../modes/predict.md#masks)
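+ 
+ As mentioned above, repeated use is easiest with a helper function. The following is a minimal sketch, not part of the Ultralytics API, that wraps the transparent-background variant of the recipe; the function name, arguments, and output directory are illustrative choices.
+ 
+ ```python
+ from pathlib import Path
+ 
+ import cv2
+ import numpy as np
+ from ultralytics import YOLO
+ 
+ 
+ def isolate_objects(weights: str, source: str, out_dir: str = '.') -> None:
+     """Save each segmented object from `source` as a cropped PNG with a transparent background."""
+     model = YOLO(weights)
+     for r in model.predict(source):
+         img = np.copy(r.orig_img)
+         img_name = Path(r.path).stem
+         for ci, c in enumerate(r):
+             if c.masks is None:  # skip detections without a segmentation mask
+                 continue
+             label = c.names[c.boxes.cls.tolist().pop()]
+             b_mask = np.zeros(img.shape[:2], np.uint8)
+             contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)
+             cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
+             isolated = np.dstack([img, b_mask])  # 4-channel image, transparent outside the contour
+             x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32)
+             cv2.imwrite(f'{out_dir}/{img_name}_{label}-{ci}.png', isolated[y1:y2, x1:x2])
+ 
+ 
+ isolate_objects('yolov8n-seg.pt', 'ultralytics/assets/bus.jpg')
+ ```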
docs/en/guides/kfold-cross-validation.md ADDED
@@ -0,0 +1,278 @@
1
+ ---
2
+ comments: true
3
+ description: An in-depth guide demonstrating the implementation of K-Fold Cross Validation with the Ultralytics ecosystem for object detection datasets, leveraging Python, YOLO, and sklearn.
4
+ keywords: K-Fold cross validation, Ultralytics, YOLO detection format, Python, sklearn, object detection
5
+ ---
6
+
7
+ # K-Fold Cross Validation with Ultralytics
8
+
9
+ ## Introduction
10
+
11
+ This comprehensive guide illustrates the implementation of K-Fold Cross Validation for object detection datasets within the Ultralytics ecosystem. We'll leverage the YOLO detection format and key Python libraries such as sklearn, pandas, and PyYaml to guide you through the necessary setup, the process of generating feature vectors, and the execution of a K-Fold dataset split.
12
+
13
+ <p align="center">
14
+ <img width="800" src="https://user-images.githubusercontent.com/26833433/258589390-8d815058-ece8-48b9-a94e-0e1ab53ea0f6.png" alt="K-Fold Cross Validation Overview">
15
+ </p>
16
+
17
+ Whether your project involves the Fruit Detection dataset or a custom data source, this tutorial aims to help you comprehend and apply K-Fold Cross Validation to bolster the reliability and robustness of your machine learning models. While we're applying `k=5` folds for this tutorial, keep in mind that the optimal number of folds can vary depending on your dataset and the specifics of your project.
18
+
19
+ Without further ado, let's dive in!
20
+
21
+ ## Setup
22
+
23
+ - Your annotations should be in the [YOLO detection format](../datasets/detect/index.md).
24
+
25
+ - This guide assumes that annotation files are locally available.
26
+
27
+ - For our demonstration, we use the [Fruit Detection](https://www.kaggle.com/datasets/lakshaytyagi01/fruit-detection/code) dataset.
28
+ - This dataset contains a total of 8479 images.
29
+ - It includes 6 class labels, each with its total instance counts listed below.
30
+
31
+ | Class Label | Instance Count |
32
+ |:------------|:--------------:|
33
+ | Apple | 7049 |
34
+ | Grapes | 7202 |
35
+ | Pineapple | 1613 |
36
+ | Orange | 15549 |
37
+ | Banana | 3536 |
38
+ | Watermelon | 1976 |
39
+
40
+ - Necessary Python packages include:
41
+
42
+ - `ultralytics`
43
+ - `sklearn`
44
+ - `pandas`
45
+ - `pyyaml`
46
+
47
+ - This tutorial operates with `k=5` folds. However, you should determine the best number of folds for your specific dataset.
48
+
49
+ 1. Initiate a new Python virtual environment (`venv`) for your project and activate it. Use `pip` (or your preferred package manager) to install:
50
+
51
+ - The Ultralytics library: `pip install -U ultralytics`. Alternatively, you can clone the official [repo](https://github.com/ultralytics/ultralytics).
52
+ - Scikit-learn, pandas, and PyYAML: `pip install -U scikit-learn pandas pyyaml`.
53
+
54
+ 2. Verify that your annotations are in the [YOLO detection format](../datasets/detect/index.md).
55
+
56
+ - For this tutorial, all annotation files are found in the `Fruit-Detection/labels` directory.
57
+
58
+ ## Generating Feature Vectors for Object Detection Dataset
59
+
60
+ 1. Start by creating a new Python file and import the required libraries.
61
+
62
+ ```python
63
+ import datetime
64
+ import shutil
65
+ from pathlib import Path
66
+ from collections import Counter
67
+
68
+ import yaml
69
+ import numpy as np
70
+ import pandas as pd
71
+ from ultralytics import YOLO
72
+ from sklearn.model_selection import KFold
73
+ ```
74
+
75
+ 2. Proceed to retrieve all label files for your dataset.
76
+
77
+ ```python
78
+ dataset_path = Path('./Fruit-detection') # replace with 'path/to/dataset' for your custom data
79
+ labels = sorted(dataset_path.rglob("*labels/*.txt")) # all data in 'labels'
80
+ ```
81
+
82
+ 3. Now, read the contents of the dataset YAML file and extract the indices of the class labels.
83
+
84
+ ```python
85
+ yaml_file = 'path/to/data.yaml' # your data YAML with data directories and names dictionary
86
+ with open(yaml_file, 'r', encoding="utf8") as y:
87
+ classes = yaml.safe_load(y)['names']
88
+ cls_idx = sorted(classes.keys())
89
+ ```
90
+
91
+ 4. Initialize an empty `pandas` DataFrame.
92
+
93
+ ```python
94
+ indx = [l.stem for l in labels] # uses base filename as ID (no extension)
95
+ labels_df = pd.DataFrame([], columns=cls_idx, index=indx)
96
+ ```
97
+
98
+ 5. Count the instances of each class-label present in the annotation files.
99
+
100
+ ```python
101
+ for label in labels:
102
+ lbl_counter = Counter()
103
+
104
+ with open(label,'r') as lf:
105
+ lines = lf.readlines()
106
+
107
+ for l in lines:
108
+             # YOLO labels store the class index as the first value on each line
109
+ lbl_counter[int(l.split(' ')[0])] += 1
110
+
111
+ labels_df.loc[label.stem] = lbl_counter
112
+
113
+ labels_df = labels_df.fillna(0.0) # replace `nan` values with `0.0`
114
+ ```
115
+
116
+ 6. The following is a sample view of the populated DataFrame:
117
+
118
+ ```pandas
119
+ 0 1 2 3 4 5
120
+ '0000a16e4b057580_jpg.rf.00ab48988370f64f5ca8ea4...' 0.0 0.0 0.0 0.0 0.0 7.0
121
+ '0000a16e4b057580_jpg.rf.7e6dce029fb67f01eb19aa7...' 0.0 0.0 0.0 0.0 0.0 7.0
122
+ '0000a16e4b057580_jpg.rf.bc4d31cdcbe229dd022957a...' 0.0 0.0 0.0 0.0 0.0 7.0
123
+ '00020ebf74c4881c_jpg.rf.508192a0a97aa6c4a3b6882...' 0.0 0.0 0.0 1.0 0.0 0.0
124
+ '00020ebf74c4881c_jpg.rf.5af192a2254c8ecc4188a25...' 0.0 0.0 0.0 1.0 0.0 0.0
125
+ ... ... ... ... ... ... ...
126
+ 'ff4cd45896de38be_jpg.rf.c4b5e967ca10c7ced3b9e97...' 0.0 0.0 0.0 0.0 0.0 2.0
127
+ 'ff4cd45896de38be_jpg.rf.ea4c1d37d2884b3e3cbce08...' 0.0 0.0 0.0 0.0 0.0 2.0
128
+ 'ff5fd9c3c624b7dc_jpg.rf.bb519feaa36fc4bf630a033...' 1.0 0.0 0.0 0.0 0.0 0.0
129
+ 'ff5fd9c3c624b7dc_jpg.rf.f0751c9c3aa4519ea3c9d6a...' 1.0 0.0 0.0 0.0 0.0 0.0
130
+ 'fffe28b31f2a70d4_jpg.rf.7ea16bd637ba0711c53b540...' 0.0 6.0 0.0 0.0 0.0 0.0
131
+ ```
132
+
133
+ The rows index the label files, each corresponding to an image in your dataset, and the columns correspond to your class-label indices. Each row represents a pseudo feature-vector, with the count of each class-label present in your dataset. This data structure enables the application of K-Fold Cross Validation to an object detection dataset.
134
+
135
+ ## K-Fold Dataset Split
136
+
137
+ 1. Now we will use the `KFold` class from `sklearn.model_selection` to generate `k` splits of the dataset.
138
+
139
+ - Important:
140
+ - Setting `shuffle=True` ensures a randomized distribution of classes in your splits.
141
+ - By setting `random_state=M` where `M` is a chosen integer, you can obtain repeatable results.
142
+
143
+ ```python
144
+ ksplit = 5
145
+ kf = KFold(n_splits=ksplit, shuffle=True, random_state=20) # setting random_state for repeatable results
146
+
147
+ kfolds = list(kf.split(labels_df))
148
+ ```
149
+
150
+ 2. The dataset has now been split into `k` folds, each having a list of `train` and `val` indices. We will construct a DataFrame to display these results more clearly.
151
+
152
+ ```python
153
+ folds = [f'split_{n}' for n in range(1, ksplit + 1)]
154
+ folds_df = pd.DataFrame(index=indx, columns=folds)
155
+
156
+ for idx, (train, val) in enumerate(kfolds, start=1):
157
+         folds_df.loc[labels_df.iloc[train].index, f'split_{idx}'] = 'train'
158
+         folds_df.loc[labels_df.iloc[val].index, f'split_{idx}'] = 'val'
159
+ ```
160
+
161
+ 3. Now we will calculate the distribution of class labels for each fold as a ratio of the classes present in `val` to those present in `train`.
162
+
163
+ ```python
164
+ fold_lbl_distrb = pd.DataFrame(index=folds, columns=cls_idx)
165
+
166
+ for n, (train_indices, val_indices) in enumerate(kfolds, start=1):
167
+ train_totals = labels_df.iloc[train_indices].sum()
168
+ val_totals = labels_df.iloc[val_indices].sum()
169
+
170
+ # To avoid division by zero, we add a small value (1E-7) to the denominator
171
+ ratio = val_totals / (train_totals + 1E-7)
172
+ fold_lbl_distrb.loc[f'split_{n}'] = ratio
173
+ ```
174
+
175
+ The ideal scenario is for all class ratios to be reasonably similar for each split and across classes. This, however, will be subject to the specifics of your dataset.
176
+
177
+ 4. Next, we create the directories and dataset YAML files for each split.
178
+
179
+ ```python
180
+ supported_extensions = ['.jpg', '.jpeg', '.png']
181
+
182
+ # Initialize an empty list to store image file paths
183
+ images = []
184
+
185
+ # Loop through supported extensions and gather image files
186
+ for ext in supported_extensions:
187
+ images.extend(sorted((dataset_path / 'images').rglob(f"*{ext}")))
188
+
189
+     # Create the necessary directories and dataset YAML files
190
+ save_path = Path(dataset_path / f'{datetime.date.today().isoformat()}_{ksplit}-Fold_Cross-val')
191
+ save_path.mkdir(parents=True, exist_ok=True)
192
+ ds_yamls = []
193
+
194
+ for split in folds_df.columns:
195
+ # Create directories
196
+ split_dir = save_path / split
197
+ split_dir.mkdir(parents=True, exist_ok=True)
198
+ (split_dir / 'train' / 'images').mkdir(parents=True, exist_ok=True)
199
+ (split_dir / 'train' / 'labels').mkdir(parents=True, exist_ok=True)
200
+ (split_dir / 'val' / 'images').mkdir(parents=True, exist_ok=True)
201
+ (split_dir / 'val' / 'labels').mkdir(parents=True, exist_ok=True)
202
+
203
+ # Create dataset YAML files
204
+ dataset_yaml = split_dir / f'{split}_dataset.yaml'
205
+ ds_yamls.append(dataset_yaml)
206
+
207
+ with open(dataset_yaml, 'w') as ds_y:
208
+ yaml.safe_dump({
209
+ 'path': split_dir.as_posix(),
210
+ 'train': 'train',
211
+ 'val': 'val',
212
+ 'names': classes
213
+ }, ds_y)
214
+ ```
215
+
216
+ 5. Lastly, copy images and labels into the respective directory ('train' or 'val') for each split.
217
+
218
+ - __NOTE:__ The time required for this portion of the code will vary based on the size of your dataset and your system hardware.
219
+
220
+ ```python
221
+ for image, label in zip(images, labels):
222
+ for split, k_split in folds_df.loc[image.stem].items():
223
+ # Destination directory
224
+ img_to_path = save_path / split / k_split / 'images'
225
+ lbl_to_path = save_path / split / k_split / 'labels'
226
+
227
+             # Copy image and label files to the new directory (shutil.copy overwrites any existing destination file)
228
+ shutil.copy(image, img_to_path / image.name)
229
+ shutil.copy(label, lbl_to_path / label.name)
230
+ ```
231
+
232
+ ## Save Records (Optional)
233
+
234
+ Optionally, you can save the records of the K-Fold split and label distribution DataFrames as CSV files for future reference.
235
+
236
+ ```python
237
+ folds_df.to_csv(save_path / "kfold_datasplit.csv")
238
+ fold_lbl_distrb.to_csv(save_path / "kfold_label_distribution.csv")
239
+ ```
240
+
241
+ ## Train YOLO using K-Fold Data Splits
242
+
243
+ 1. First, load the YOLO model.
244
+
245
+ ```python
246
+ weights_path = 'path/to/weights.pt'
247
+ model = YOLO(weights_path, task='detect')
248
+ ```
249
+
250
+ 2. Next, iterate over the dataset YAML files to run training. The results will be saved to a directory specified by the `project` and `name` arguments. By default, this directory is `runs/detect/train#`, where `#` is an incrementing integer index.
251
+
252
+ ```python
253
+ results = {}
254
+
255
+ # Define your additional arguments here
256
+ batch = 16
257
+ project = 'kfold_demo'
258
+ epochs = 100
259
+
260
+ for k in range(ksplit):
261
+ dataset_yaml = ds_yamls[k]
262
+         model.train(data=dataset_yaml, epochs=epochs, batch=batch, project=project)  # include any train arguments
263
+ results[k] = model.metrics # save output metrics for further analysis
264
+ ```
265
+
266
+ ## Conclusion
267
+
268
+ In this guide, we have explored the process of using K-Fold cross-validation for training the YOLO object detection model. We learned how to split our dataset into K partitions, ensuring a balanced class distribution across the different folds.
269
+
270
+ We also explored the procedure for creating report DataFrames to visualize the data splits and label distributions across these splits, providing us a clear insight into the structure of our training and validation sets.
271
+
272
+ Optionally, we saved our records for future reference, which could be particularly useful in large-scale projects or when troubleshooting model performance.
273
+
274
+ Finally, we implemented the actual model training using each split in a loop, saving our training results for further analysis and comparison.
275
+
276
+ This technique of K-Fold cross-validation is a robust way of making the most out of your available data, and it helps to ensure that your model performance is reliable and consistent across different data subsets. This results in a more generalizable and reliable model that is less likely to overfit to specific data patterns.
277
+
278
+ Remember that although we used YOLO in this guide, these steps are mostly transferable to other machine learning models. Understanding these steps allows you to apply cross-validation effectively in your own machine learning projects. Happy coding!
docs/en/guides/model-deployment-options.md ADDED
@@ -0,0 +1,305 @@
1
+ ---
2
+ comments: true
3
+ description: A guide to help determine which deployment option to choose for your YOLOv8 model, including essential considerations.
4
+ keywords: YOLOv8, Deployment, PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, CoreML, TensorFlow, Export
5
+ ---
6
+
7
+ # Understanding YOLOv8’s Deployment Options
8
+
9
+ ## Introduction
10
+
11
+ You've come a long way on your journey with YOLOv8. You've diligently collected data, meticulously annotated it, and put in the hours to train and rigorously evaluate your custom YOLOv8 model. Now, it’s time to put your model to work for your specific application, use case, or project. But there's a critical decision that stands before you: how to export and deploy your model effectively.
12
+
13
+ This guide walks you through YOLOv8’s deployment options and the essential factors to consider to choose the right option for your project.
14
+
15
+ ## How to Select the Right Deployment Option for Your YOLOv8 Model
16
+
17
+ When it's time to deploy your YOLOv8 model, selecting a suitable export format is very important. As outlined in the [Ultralytics YOLOv8 Modes documentation](../modes/export.md#usage-examples), the model.export() function allows for converting your trained model into a variety of formats tailored to diverse environments and performance requirements.
18
+
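+ As a quick illustration (the checkpoint path is a placeholder, and the format strings follow the export documentation linked above):
+ 
+ ```python
+ from ultralytics import YOLO
+ 
+ # Load your trained weights
+ model = YOLO('path/to/best.pt')
+ 
+ # Export to the format that matches your deployment target
+ model.export(format='onnx')    # ONNX
+ model.export(format='engine')  # TensorRT engine (requires an NVIDIA GPU)
+ ```
+ 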
19
+ The ideal format depends on your model's intended operational context, balancing speed, hardware constraints, and ease of integration. In the following section, we'll take a closer look at each export option, understanding when to choose each one.
20
+
21
+ ### YOLOv8’s Deployment Options
22
+
23
+ Let’s walk through the different YOLOv8 deployment options. For a detailed walkthrough of the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
24
+
25
+ #### PyTorch
26
+
27
+ PyTorch is an open-source machine learning library widely used for applications in deep learning and artificial intelligence. It provides a high level of flexibility and speed, which has made it a favorite among researchers and developers.
28
+
29
+ - **Performance Benchmarks**: PyTorch is known for its ease of use and flexibility, which may result in a slight trade-off in raw performance when compared to other frameworks that are more specialized and optimized.
30
+
31
+ - **Compatibility and Integration**: Offers excellent compatibility with various data science and machine learning libraries in Python.
32
+
33
+ - **Community Support and Ecosystem**: One of the most vibrant communities, with extensive resources for learning and troubleshooting.
34
+
35
+ - **Case Studies**: Commonly used in research prototypes, many academic papers reference models deployed in PyTorch.
36
+
37
+ - **Maintenance and Updates**: Regular updates with active development and support for new features.
38
+
39
+ - **Security Considerations**: Regular patches for security issues, but security is largely dependent on the overall environment it’s deployed in.
40
+
41
+ - **Hardware Acceleration**: Supports CUDA for GPU acceleration, essential for speeding up model training and inference.
42
+
43
+ #### TorchScript
44
+
45
+ TorchScript extends PyTorch’s capabilities by allowing the exportation of models to be run in a C++ runtime environment. This makes it suitable for production environments where Python is unavailable.
46
+
47
+ - **Performance Benchmarks**: Can offer improved performance over native PyTorch, especially in production environments.
48
+
49
+ - **Compatibility and Integration**: Designed for seamless transition from PyTorch to C++ production environments, though some advanced features might not translate perfectly.
50
+
51
+ - **Community Support and Ecosystem**: Benefits from PyTorch’s large community but has a narrower scope of specialized developers.
52
+
53
+ - **Case Studies**: Widely used in industry settings where Python’s performance overhead is a bottleneck.
54
+
55
+ - **Maintenance and Updates**: Maintained alongside PyTorch with consistent updates.
56
+
57
+ - **Security Considerations**: Offers improved security by enabling the running of models in environments without full Python installations.
58
+
59
+ - **Hardware Acceleration**: Inherits PyTorch’s CUDA support, ensuring efficient GPU utilization.
60
+
61
+ #### ONNX
62
+
63
+ The Open Neural Network Exchange (ONNX) is a format that allows for model interoperability across different frameworks, which can be critical when deploying to various platforms.
64
+
65
+ - **Performance Benchmarks**: ONNX models may experience a variable performance depending on the specific runtime they are deployed on.
66
+
67
+ - **Compatibility and Integration**: High interoperability across multiple platforms and hardware due to its framework-agnostic nature.
68
+
69
+ - **Community Support and Ecosystem**: Supported by many organizations, leading to a broad ecosystem and a variety of tools for optimization.
70
+
71
+ - **Case Studies**: Frequently used to move models between different machine learning frameworks, demonstrating its flexibility.
72
+
73
+ - **Maintenance and Updates**: As an open standard, ONNX is regularly updated to support new operations and models.
74
+
75
+ - **Security Considerations**: As with any cross-platform tool, it's essential to ensure secure practices in the conversion and deployment pipeline.
76
+
77
+ - **Hardware Acceleration**: With ONNX Runtime, models can leverage various hardware optimizations.
78
+
79
+ #### OpenVINO
80
+
81
+ OpenVINO is an Intel toolkit designed to facilitate the deployment of deep learning models across Intel hardware, enhancing performance and speed.
82
+
83
+ - **Performance Benchmarks**: Specifically optimized for Intel CPUs, GPUs, and VPUs, offering significant performance boosts on compatible hardware.
84
+
85
+ - **Compatibility and Integration**: Works best within the Intel ecosystem but also supports a range of other platforms.
86
+
87
+ - **Community Support and Ecosystem**: Backed by Intel, with a solid user base especially in the computer vision domain.
88
+
89
+ - **Case Studies**: Often utilized in IoT and edge computing scenarios where Intel hardware is prevalent.
90
+
91
+ - **Maintenance and Updates**: Intel regularly updates OpenVINO to support the latest deep learning models and Intel hardware.
92
+
93
+ - **Security Considerations**: Provides robust security features suitable for deployment in sensitive applications.
94
+
95
+ - **Hardware Acceleration**: Tailored for acceleration on Intel hardware, leveraging dedicated instruction sets and hardware features.
96
+
97
+ For more details on deployment using OpenVINO, refer to the Ultralytics Integration documentation: [Intel OpenVINO Export](../integrations/openvino.md).
98
+
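+ As a brief sketch of that workflow (the exported directory name follows the default `<model>_openvino_model` naming, and the sample image URL is illustrative):
+ 
+ ```python
+ from ultralytics import YOLO
+ 
+ # Export a model to OpenVINO format
+ model = YOLO('yolov8n.pt')
+ model.export(format='openvino')  # creates 'yolov8n_openvino_model/'
+ 
+ # Load the exported model and run inference on Intel hardware
+ ov_model = YOLO('yolov8n_openvino_model/')
+ results = ov_model('https://ultralytics.com/images/bus.jpg')
+ ```
+ 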
99
+ #### TensorRT
100
+
101
+ TensorRT is a high-performance deep learning inference optimizer and runtime from NVIDIA, ideal for applications needing speed and efficiency.
102
+
103
+ - **Performance Benchmarks**: Delivers top-tier performance on NVIDIA GPUs with support for high-speed inference.
104
+
105
+ - **Compatibility and Integration**: Best suited for NVIDIA hardware, with limited support outside this environment.
106
+
107
+ - **Community Support and Ecosystem**: Strong support network through NVIDIA’s developer forums and documentation.
108
+
109
+ - **Case Studies**: Widely adopted in industries requiring real-time inference on video and image data.
110
+
111
+ - **Maintenance and Updates**: NVIDIA maintains TensorRT with frequent updates to enhance performance and support new GPU architectures.
112
+
113
+ - **Security Considerations**: Like many NVIDIA products, it has a strong emphasis on security, but specifics depend on the deployment environment.
114
+
115
+ - **Hardware Acceleration**: Exclusively designed for NVIDIA GPUs, providing deep optimization and acceleration.
116
+
117
+ #### CoreML
118
+
119
+ CoreML is Apple’s machine learning framework, optimized for on-device performance in the Apple ecosystem, including iOS, macOS, watchOS, and tvOS.
120
+
121
+ - **Performance Benchmarks**: Optimized for on-device performance on Apple hardware with minimal battery usage.
122
+
123
+ - **Compatibility and Integration**: Exclusively for Apple's ecosystem, providing a streamlined workflow for iOS and macOS applications.
124
+
125
+ - **Community Support and Ecosystem**: Strong support from Apple and a dedicated developer community, with extensive documentation and tools.
126
+
127
+ - **Case Studies**: Commonly used in applications that require on-device machine learning capabilities on Apple products.
128
+
129
+ - **Maintenance and Updates**: Regularly updated by Apple to support the latest machine learning advancements and Apple hardware.
130
+
131
+ - **Security Considerations**: Benefits from Apple's focus on user privacy and data security.
132
+
133
+ - **Hardware Acceleration**: Takes full advantage of Apple's neural engine and GPU for accelerated machine learning tasks.
134
+
135
+ #### TF SavedModel
136
+
137
+ TF SavedModel is TensorFlow’s format for saving and serving machine learning models, particularly suited for scalable server environments.
138
+
139
+ - **Performance Benchmarks**: Offers scalable performance in server environments, especially when used with TensorFlow Serving.
140
+
141
+ - **Compatibility and Integration**: Wide compatibility across TensorFlow's ecosystem, including cloud and enterprise server deployments.
142
+
143
+ - **Community Support and Ecosystem**: Large community support due to TensorFlow's popularity, with a vast array of tools for deployment and optimization.
144
+
145
+ - **Case Studies**: Extensively used in production environments for serving deep learning models at scale.
146
+
147
+ - **Maintenance and Updates**: Supported by Google and the TensorFlow community, ensuring regular updates and new features.
148
+
149
+ - **Security Considerations**: Deployment using TensorFlow Serving includes robust security features for enterprise-grade applications.
150
+
151
+ - **Hardware Acceleration**: Supports various hardware accelerations through TensorFlow's backends.
152
+
153
+ #### TF GraphDef
154
+
155
+ TF GraphDef is a TensorFlow format that represents the model as a graph, which is beneficial for environments where a static computation graph is required.
156
+
157
+ - **Performance Benchmarks**: Provides stable performance for static computation graphs, with a focus on consistency and reliability.
158
+
159
+ - **Compatibility and Integration**: Easily integrates within TensorFlow's infrastructure but less flexible compared to SavedModel.
160
+
161
+ - **Community Support and Ecosystem**: Good support from TensorFlow's ecosystem, with many resources available for optimizing static graphs.
162
+
163
+ - **Case Studies**: Useful in scenarios where a static graph is necessary, such as in certain embedded systems.
164
+
165
+ - **Maintenance and Updates**: Regular updates alongside TensorFlow's core updates.
166
+
167
+ - **Security Considerations**: Ensures safe deployment with TensorFlow's established security practices.
168
+
169
+ - **Hardware Acceleration**: Can utilize TensorFlow's hardware acceleration options, though not as flexible as SavedModel.
170
+
171
+ #### TF Lite
172
+
173
+ TF Lite is TensorFlow’s solution for mobile and embedded device machine learning, providing a lightweight library for on-device inference.
174
+
175
+ - **Performance Benchmarks**: Designed for speed and efficiency on mobile and embedded devices.
176
+
177
+ - **Compatibility and Integration**: Can be used on a wide range of devices due to its lightweight nature.
178
+
179
+ - **Community Support and Ecosystem**: Backed by Google, it has a robust community and a growing number of resources for developers.
180
+
181
+ - **Case Studies**: Popular in mobile applications that require on-device inference with minimal footprint.
182
+
183
+ - **Maintenance and Updates**: Regularly updated to include the latest features and optimizations for mobile devices.
184
+
185
+ - **Security Considerations**: Provides a secure environment for running models on end-user devices.
186
+
187
+ - **Hardware Acceleration**: Supports a variety of hardware acceleration options, including GPU and DSP.
188
+
189
+ #### TF Edge TPU
190
+
191
+ TF Edge TPU is designed for high-speed, efficient computing on Google's Edge TPU hardware, perfect for IoT devices requiring real-time processing.
192
+
193
+ - **Performance Benchmarks**: Specifically optimized for high-speed, efficient computing on Google's Edge TPU hardware.
194
+
195
+ - **Compatibility and Integration**: Works exclusively with TensorFlow Lite models on Edge TPU devices.
196
+
197
+ - **Community Support and Ecosystem**: Growing support with resources provided by Google and third-party developers.
198
+
199
+ - **Case Studies**: Used in IoT devices and applications that require real-time processing with low latency.
200
+
201
+ - **Maintenance and Updates**: Continually improved upon to leverage the capabilities of new Edge TPU hardware releases.
202
+
203
+ - **Security Considerations**: Integrates with Google's robust security for IoT and edge devices.
204
+
205
+ - **Hardware Acceleration**: Custom-designed to take full advantage of Google Coral devices.
206
+
207
+ #### TF.js
208
+
209
+ TensorFlow.js (TF.js) is a library that brings machine learning capabilities directly to the browser, offering a new realm of possibilities for web developers and users alike. It allows for the integration of machine learning models in web applications without the need for back-end infrastructure.
210
+
211
+ - **Performance Benchmarks**: Enables machine learning directly in the browser with reasonable performance, depending on the client device.
212
+
213
+ - **Compatibility and Integration**: High compatibility with web technologies, allowing for easy integration into web applications.
214
+
215
+ - **Community Support and Ecosystem**: Support from a community of web and Node.js developers, with a variety of tools for deploying ML models in browsers.
216
+
217
+ - **Case Studies**: Ideal for interactive web applications that benefit from client-side machine learning without the need for server-side processing.
218
+
219
+ - **Maintenance and Updates**: Maintained by the TensorFlow team with contributions from the open-source community.
220
+
221
+ - **Security Considerations**: Runs within the browser's secure context, utilizing the security model of the web platform.
222
+
223
+ - **Hardware Acceleration**: Performance can be enhanced with web-based APIs that access hardware acceleration like WebGL.
224
+
225
+ #### PaddlePaddle
226
+
227
+ PaddlePaddle is an open-source deep learning framework developed by Baidu. It is designed to be both efficient for researchers and easy to use for developers. It's particularly popular in China and offers specialized support for Chinese language processing.
228
+
229
+ - **Performance Benchmarks**: Offers competitive performance with a focus on ease of use and scalability.
230
+
231
+ - **Compatibility and Integration**: Well-integrated within Baidu's ecosystem and supports a wide range of applications.
232
+
233
+ - **Community Support and Ecosystem**: While the community is smaller globally, it's rapidly growing, especially in China.
234
+
235
+ - **Case Studies**: Commonly used in Chinese markets and by developers looking for alternatives to other major frameworks.
236
+
237
+ - **Maintenance and Updates**: Regularly updated with a focus on serving Chinese language AI applications and services.
238
+
239
+ - **Security Considerations**: Emphasizes data privacy and security, catering to Chinese data governance standards.
240
+
241
+ - **Hardware Acceleration**: Supports various hardware accelerations, including Baidu's own Kunlun chips.
242
+
243
+ #### NCNN
244
+
245
+ NCNN is a high-performance neural network inference framework optimized for the mobile platform. It stands out for its lightweight nature and efficiency, making it particularly well-suited for mobile and embedded devices where resources are limited.
246
+
247
+ - **Performance Benchmarks**: Highly optimized for mobile platforms, offering efficient inference on ARM-based devices.
248
+
249
+ - **Compatibility and Integration**: Suitable for applications on mobile phones and embedded systems with ARM architecture.
250
+
251
+ - **Community Support and Ecosystem**: Supported by a niche but active community focused on mobile and embedded ML applications.
252
+
253
+ - **Case Studies**: Favoured for mobile applications where efficiency and speed are critical on Android and other ARM-based systems.
254
+
255
+ - **Maintenance and Updates**: Continuously improved to maintain high performance on a range of ARM devices.
256
+
257
+ - **Security Considerations**: Focuses on running locally on the device, leveraging the inherent security of on-device processing.
258
+
259
+ - **Hardware Acceleration**: Tailored for ARM CPUs and GPUs, with specific optimizations for these architectures.
260
+
261
+ ## Comparative Analysis of YOLOv8 Deployment Options
262
+
263
+ The following table provides a snapshot of the various deployment options available for YOLOv8 models, helping you to assess which may best fit your project needs based on several critical criteria. For an in-depth look at each deployment option's format, please see the [Ultralytics documentation page on export formats](../modes/export.md#export-formats).
264
+
265
+ | Deployment Option | Performance Benchmarks | Compatibility and Integration | Community Support and Ecosystem | Case Studies | Maintenance and Updates | Security Considerations | Hardware Acceleration |
266
+ |-------------------|-------------------------------------------------|------------------------------------------------|-----------------------------------------------|--------------------------------------------|---------------------------------------------|---------------------------------------------------|------------------------------------|
267
+ | PyTorch | Good flexibility; may trade off raw performance | Excellent with Python libraries | Extensive resources and community | Research and prototypes | Regular, active development | Dependent on deployment environment | CUDA support for GPU acceleration |
268
+ | TorchScript | Better for production than PyTorch | Smooth transition from PyTorch to C++ | Specialized but narrower than PyTorch | Industry where Python is a bottleneck | Consistent updates with PyTorch | Improved security without full Python | Inherits CUDA support from PyTorch |
269
+ | ONNX | Variable depending on runtime | High across different frameworks | Broad ecosystem, supported by many orgs | Flexibility across ML frameworks | Regular updates for new operations | Ensure secure conversion and deployment practices | Various hardware optimizations |
270
+ | OpenVINO | Optimized for Intel hardware | Best within Intel ecosystem | Solid in computer vision domain | IoT and edge with Intel hardware | Regular updates for Intel hardware | Robust features for sensitive applications | Tailored for Intel hardware |
271
+ | TensorRT | Top-tier on NVIDIA GPUs | Best for NVIDIA hardware | Strong network through NVIDIA | Real-time video and image inference | Frequent updates for new GPUs | Emphasis on security | Designed for NVIDIA GPUs |
272
+ | CoreML | Optimized for on-device Apple hardware | Exclusive to Apple ecosystem | Strong Apple and developer support | On-device ML on Apple products | Regular Apple updates | Focus on privacy and security | Apple neural engine and GPU |
273
+ | TF SavedModel | Scalable in server environments | Wide compatibility in TensorFlow ecosystem | Large support due to TensorFlow popularity | Serving models at scale | Regular updates by Google and community | Robust features for enterprise | Various hardware accelerations |
274
+ | TF GraphDef | Stable for static computation graphs | Integrates well with TensorFlow infrastructure | Resources for optimizing static graphs | Scenarios requiring static graphs | Updates alongside TensorFlow core | Established TensorFlow security practices | TensorFlow acceleration options |
275
+ | TF Lite | Speed and efficiency on mobile/embedded | Wide range of device support | Robust community, Google backed | Mobile applications with minimal footprint | Latest features for mobile | Secure environment on end-user devices | GPU and DSP among others |
276
+ | TF Edge TPU | Optimized for Google's Edge TPU hardware | Exclusive to Edge TPU devices | Growing with Google and third-party resources | IoT devices requiring real-time processing | Improvements for new Edge TPU hardware | Google's robust IoT security | Custom-designed for Google Coral |
277
+ | TF.js | Reasonable in-browser performance | High with web technologies | Web and Node.js developers support | Interactive web applications | TensorFlow team and community contributions | Web platform security model | Enhanced with WebGL and other APIs |
278
+ | PaddlePaddle | Competitive, easy to use and scalable | Baidu ecosystem, wide application support | Rapidly growing, especially in China | Chinese market and language processing | Focus on Chinese AI applications | Emphasizes data privacy and security | Including Baidu's Kunlun chips |
279
+ | NCNN | Optimized for mobile ARM-based devices | Mobile and embedded ARM systems | Niche but active mobile/embedded ML community | Android and ARM systems efficiency | High performance maintenance on ARM | On-device security advantages | ARM CPUs and GPUs optimizations |
280
+
281
+ This comparative analysis gives you a high-level overview. For deployment, it's essential to consider the specific requirements and constraints of your project, and consult the detailed documentation and resources available for each option.
282
+
283
+ ## Community and Support
284
+
285
+ When you're getting started with YOLOv8, having a helpful community and support can make a significant impact. Here's how to connect with others who share your interests and get the assistance you need.
286
+
287
+ ### Engage with the Broader Community
288
+
289
+ - **GitHub Discussions:** The YOLOv8 repository on GitHub has a "Discussions" section where you can ask questions, report issues, and suggest improvements.
290
+
291
+ - **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://ultralytics.com/discord/) where you can interact with other users and developers.
292
+
293
+ ### Official Documentation and Resources
294
+
295
+ - **Ultralytics YOLOv8 Docs:** The [official documentation](../index.md) provides a comprehensive overview of YOLOv8, along with guides on installation, usage, and troubleshooting.
296
+
297
+ These resources will help you tackle challenges and stay updated on the latest trends and best practices in the YOLOv8 community.
298
+
299
+ ## Conclusion
300
+
301
+ In this guide, we've explored the different deployment options for YOLOv8. We've also discussed the important factors to consider when making your choice. These options allow you to customize your model for various environments and performance requirements, making it suitable for real-world applications.
302
+
303
+ Don't forget that the YOLOv8 and Ultralytics community is a valuable source of help. Connect with other developers and experts to learn unique tips and solutions you might not find in regular documentation. Keep seeking knowledge, exploring new ideas, and sharing your experiences.
304
+
305
+ Happy deploying!
docs/en/guides/object-blurring.md ADDED
@@ -0,0 +1,91 @@
1
+ ---
2
+ comments: true
3
+ description: Learn to blur objects using Ultralytics YOLOv8 for privacy in images and videos.
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Object Blurring, Privacy Protection, Image Processing, Video Analysis, AI, Machine Learning
5
+ ---
6
+
7
+ # Object Blurring using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is Object Blurring?
10
+
11
+ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves applying a blurring effect to specific detected objects in an image or video. This can be achieved using the YOLOv8 model capabilities to identify and manipulate objects within a given scene.
12
+
13
+ ## Advantages of Object Blurring
14
+
15
+ - **Privacy Protection**: Object blurring is an effective tool for safeguarding privacy by concealing sensitive or personally identifiable information in images or videos.
16
+ - **Selective Focus**: YOLOv8 allows for selective blurring, enabling users to target specific objects, ensuring a balance between privacy and retaining relevant visual information.
17
+ - **Real-time Processing**: YOLOv8's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.
18
+
19
+ !!! Example "Object Blurring using YOLOv8 Example"
20
+
21
+ === "Object Blurring"
22
+
23
+ ```python
24
+ from ultralytics import YOLO
25
+ from ultralytics.utils.plotting import Annotator, colors
26
+ import cv2
27
+
28
+ model = YOLO("yolov8n.pt")
29
+ names = model.names
30
+
31
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
32
+ assert cap.isOpened(), "Error reading video file"
33
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
34
+
35
+ # Blur ratio
36
+ blur_ratio = 50
37
+
38
+ # Video writer
39
+ video_writer = cv2.VideoWriter("object_blurring_output.avi",
40
+ cv2.VideoWriter_fourcc(*'mp4v'),
41
+ fps, (w, h))
42
+
43
+ while cap.isOpened():
44
+ success, im0 = cap.read()
45
+ if not success:
46
+ print("Video frame is empty or video processing has been successfully completed.")
47
+ break
48
+
49
+ results = model.predict(im0, show=False)
50
+ boxes = results[0].boxes.xyxy.cpu().tolist()
51
+ clss = results[0].boxes.cls.cpu().tolist()
52
+ annotator = Annotator(im0, line_width=2, example=names)
53
+
54
+ if boxes is not None:
55
+ for box, cls in zip(boxes, clss):
56
+ annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])
57
+
58
+ obj = im0[int(box[1]):int(box[3]), int(box[0]):int(box[2])]
59
+ blur_obj = cv2.blur(obj, (blur_ratio, blur_ratio))
60
+
61
+ im0[int(box[1]):int(box[3]), int(box[0]):int(box[2])] = blur_obj
62
+
63
+ cv2.imshow("ultralytics", im0)
64
+ video_writer.write(im0)
65
+ if cv2.waitKey(1) & 0xFF == ord('q'):
66
+ break
67
+
68
+ cap.release()
69
+ video_writer.release()
70
+ cv2.destroyAllWindows()
71
+ ```
72
+
73
+ ### Arguments `model.predict`
74
+
75
+ | Name | Type | Default | Description |
76
+ |-----------------|----------------|------------------------|----------------------------------------------------------------------------|
77
+ | `source` | `str` | `'ultralytics/assets'` | source directory for images or videos |
78
+ | `conf` | `float` | `0.25` | object confidence threshold for detection |
79
+ | `iou` | `float` | `0.7` | intersection over union (IoU) threshold for NMS |
80
+ | `imgsz` | `int or tuple` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
81
+ | `half` | `bool` | `False` | use half precision (FP16) |
82
+ | `device` | `None or str` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
83
+ | `max_det` | `int` | `300` | maximum number of detections per image |
84
+ | `vid_stride` | `bool` | `False` | video frame-rate stride |
85
+ | `stream_buffer` | `bool` | `False` | buffer all streaming frames (True) or return the most recent frame (False) |
86
+ | `visualize` | `bool` | `False` | visualize model features |
87
+ | `augment` | `bool` | `False` | apply image augmentation to prediction sources |
88
+ | `agnostic_nms` | `bool` | `False` | class-agnostic NMS |
89
+ | `classes` | `list[int]` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
90
+ | `retina_masks` | `bool` | `False` | use high-resolution segmentation masks |
91
+ | `embed` | `list[int]` | `None` | return feature vectors/embeddings from given layers |
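+
+ The arguments above can be combined with the blurring logic to control exactly what gets blurred. Below is a minimal sketch that blurs only people in a single image, assuming a model trained on COCO (where class index 0 is `person`); the image path and confidence value are illustrative.
+
+ ```python
+ import cv2
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")
+ im0 = cv2.imread("path/to/image.jpg")  # placeholder image path
+
+ # Detect only the 'person' class (COCO index 0) above 50% confidence
+ results = model.predict(im0, classes=[0], conf=0.5, show=False)
+
+ blur_ratio = 50
+ for box in results[0].boxes.xyxy.cpu().tolist():
+     x1, y1, x2, y2 = map(int, box)
+     im0[y1:y2, x1:x2] = cv2.blur(im0[y1:y2, x1:x2], (blur_ratio, blur_ratio))
+
+ cv2.imwrite("blurred_people.jpg", im0)
+ ```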
docs/en/guides/object-counting.md ADDED
@@ -0,0 +1,246 @@
1
+ ---
2
+ comments: true
3
+ description: Object Counting Using Ultralytics YOLOv8
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Object Counting, Object Tracking, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # Object Counting using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is Object Counting?
10
+
11
+ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves accurate identification and counting of specific objects in videos and camera streams. YOLOv8 excels in real-time applications, providing efficient and precise object counting for various scenarios like crowd analysis and surveillance, thanks to its state-of-the-art algorithms and deep learning capabilities.
12
+
13
+ <p align="center">
14
+ <br>
15
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/Ag2e-5_NpS0"
16
+ title="YouTube video player" frameborder="0"
17
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
18
+ allowfullscreen>
19
+ </iframe>
20
+ <br>
21
+ <strong>Watch:</strong> Object Counting using Ultralytics YOLOv8
22
+ </p>
23
+
24
+ ## Advantages of Object Counting?
25
+
26
+ - **Resource Optimization:** Object counting facilitates efficient resource management by providing accurate counts, and optimizing resource allocation in applications like inventory management.
27
+ - **Enhanced Security:** Object counting enhances security and surveillance by accurately tracking and counting entities, aiding in proactive threat detection.
28
+ - **Informed Decision-Making:** Object counting offers valuable insights for decision-making, optimizing processes in retail, traffic management, and various other domains.
29
+
30
+ ## Real World Applications
31
+
32
+ | Logistics | Aquaculture |
33
+ |:-------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------:|
34
+ | ![Conveyor Belt Packets Counting Using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/70e2d106-510c-4c6c-a57a-d34a765aa757) | ![Fish Counting in Sea using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/c60d047b-3837-435f-8d29-bb9fc95d2191) |
35
+ | Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |
36
+
37
+ !!! Example "Object Counting using YOLOv8 Example"
38
+
39
+ === "Count in Region"
40
+
41
+ ```python
42
+ from ultralytics import YOLO
43
+ from ultralytics.solutions import object_counter
44
+ import cv2
45
+
46
+ model = YOLO("yolov8n.pt")
47
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
48
+ assert cap.isOpened(), "Error reading video file"
49
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
50
+
51
+ # Define region points
52
+ region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
53
+
54
+ # Video writer
55
+ video_writer = cv2.VideoWriter("object_counting_output.avi",
56
+ cv2.VideoWriter_fourcc(*'mp4v'),
57
+ fps,
58
+ (w, h))
59
+
60
+ # Init Object Counter
61
+ counter = object_counter.ObjectCounter()
62
+ counter.set_args(view_img=True,
63
+ reg_pts=region_points,
64
+ classes_names=model.names,
65
+ draw_tracks=True)
66
+
67
+ while cap.isOpened():
68
+ success, im0 = cap.read()
69
+ if not success:
70
+ print("Video frame is empty or video processing has been successfully completed.")
71
+ break
72
+ tracks = model.track(im0, persist=True, show=False)
73
+
74
+ im0 = counter.start_counting(im0, tracks)
75
+ video_writer.write(im0)
76
+
77
+ cap.release()
78
+ video_writer.release()
79
+ cv2.destroyAllWindows()
80
+ ```
81
+
82
+ === "Count in Polygon"
83
+
84
+ ```python
85
+ from ultralytics import YOLO
86
+ from ultralytics.solutions import object_counter
87
+ import cv2
88
+
89
+ model = YOLO("yolov8n.pt")
90
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
91
+ assert cap.isOpened(), "Error reading video file"
92
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
93
+
94
+ # Define region points as a polygon with 5 points
95
+ region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360), (20, 400)]
96
+
97
+ # Video writer
98
+ video_writer = cv2.VideoWriter("object_counting_output.avi",
99
+ cv2.VideoWriter_fourcc(*'mp4v'),
100
+ fps,
101
+ (w, h))
102
+
103
+ # Init Object Counter
104
+ counter = object_counter.ObjectCounter()
105
+ counter.set_args(view_img=True,
106
+ reg_pts=region_points,
107
+ classes_names=model.names,
108
+ draw_tracks=True)
109
+
110
+ while cap.isOpened():
111
+ success, im0 = cap.read()
112
+ if not success:
113
+ print("Video frame is empty or video processing has been successfully completed.")
114
+ break
115
+ tracks = model.track(im0, persist=True, show=False)
116
+
117
+ im0 = counter.start_counting(im0, tracks)
118
+ video_writer.write(im0)
119
+
120
+ cap.release()
121
+ video_writer.release()
122
+ cv2.destroyAllWindows()
123
+ ```
124
+
125
+ === "Count in Line"
126
+
127
+ ```python
128
+ from ultralytics import YOLO
129
+ from ultralytics.solutions import object_counter
130
+ import cv2
131
+
132
+ model = YOLO("yolov8n.pt")
133
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
134
+ assert cap.isOpened(), "Error reading video file"
135
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
136
+
137
+ # Define line points
138
+ line_points = [(20, 400), (1080, 400)]
139
+
140
+ # Video writer
141
+ video_writer = cv2.VideoWriter("object_counting_output.avi",
142
+ cv2.VideoWriter_fourcc(*'mp4v'),
143
+ fps,
144
+ (w, h))
145
+
146
+ # Init Object Counter
147
+ counter = object_counter.ObjectCounter()
148
+ counter.set_args(view_img=True,
149
+ reg_pts=line_points,
150
+ classes_names=model.names,
151
+ draw_tracks=True)
152
+
153
+ while cap.isOpened():
154
+ success, im0 = cap.read()
155
+ if not success:
156
+ print("Video frame is empty or video processing has been successfully completed.")
157
+ break
158
+ tracks = model.track(im0, persist=True, show=False)
159
+
160
+ im0 = counter.start_counting(im0, tracks)
161
+ video_writer.write(im0)
162
+
163
+ cap.release()
164
+ video_writer.release()
165
+ cv2.destroyAllWindows()
166
+ ```
167
+
168
+ === "Specific Classes"
169
+
170
+ ```python
171
+ from ultralytics import YOLO
172
+ from ultralytics.solutions import object_counter
173
+ import cv2
174
+
175
+ model = YOLO("yolov8n.pt")
176
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
177
+ assert cap.isOpened(), "Error reading video file"
178
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
179
+
180
+ line_points = [(20, 400), (1080, 400)] # line or region points
181
+ classes_to_count = [0, 2] # person and car classes for count
182
+
183
+ # Video writer
184
+ video_writer = cv2.VideoWriter("object_counting_output.avi",
185
+ cv2.VideoWriter_fourcc(*'mp4v'),
186
+ fps,
187
+ (w, h))
188
+
189
+ # Init Object Counter
190
+ counter = object_counter.ObjectCounter()
191
+ counter.set_args(view_img=True,
192
+ reg_pts=line_points,
193
+ classes_names=model.names,
194
+ draw_tracks=True)
195
+
196
+ while cap.isOpened():
197
+ success, im0 = cap.read()
198
+ if not success:
199
+ print("Video frame is empty or video processing has been successfully completed.")
200
+ break
201
+ tracks = model.track(im0, persist=True, show=False,
202
+ classes=classes_to_count)
203
+
204
+ im0 = counter.start_counting(im0, tracks)
205
+ video_writer.write(im0)
206
+
207
+ cap.release()
208
+ video_writer.release()
209
+ cv2.destroyAllWindows()
210
+ ```
211
+
212
+ ???+ tip "Region is Movable"
213
+
214
+ You can move the region anywhere in the frame by clicking on its edges.
215
+
216
+ ### Optional Arguments `set_args`
217
+
218
+ | Name | Type | Default | Description |
219
+ |-----------------------|-------------|----------------------------|-----------------------------------------------|
220
+ | `view_img` | `bool` | `False` | Display frames with counts |
221
+ | `view_in_counts` | `bool` | `True` | Display in-counts only on video frame |
222
+ | `view_out_counts` | `bool` | `True` | Display out-counts only on video frame |
223
+ | `line_thickness` | `int` | `2` | Increase bounding boxes thickness |
224
+ | `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area |
225
+ | `classes_names` | `dict` | `model.model.names` | Dictionary of Class Names |
226
+ | `region_color` | `RGB Color` | `(255, 0, 255)` | Color of the Object counting Region or Line |
227
+ | `track_thickness` | `int` | `2` | Thickness of Tracking Lines |
228
+ | `draw_tracks` | `bool` | `False` | Enable drawing Track lines |
229
+ | `track_color` | `RGB Color` | `(0, 255, 0)` | Color for each track line |
230
+ | `line_dist_thresh` | `int` | `15` | Euclidean Distance threshold for line counter |
231
+ | `count_txt_thickness` | `int` | `2` | Thickness of Object counts text |
232
+ | `count_txt_color` | `RGB Color` | `(0, 0, 0)` | Foreground color for Object counts text |
233
+ | `count_color` | `RGB Color` | `(255, 255, 255)` | Background color for Object counts text |
234
+ | `region_thickness` | `int` | `5` | Thickness for object counter region or line |
235
+
236
+ ### Arguments `model.track`
237
+
238
+ | Name | Type | Default | Description |
239
+ |-----------|---------|----------------|-------------------------------------------------------------|
240
+ | `source` | `im0` | `None` | source directory for images or videos |
241
+ | `persist` | `bool` | `False` | persisting tracks between frames |
242
+ | `tracker` | `str` | `botsort.yaml` | Tracking method 'bytetrack' or 'botsort' |
243
+ | `conf` | `float` | `0.3` | Confidence Threshold |
244
+ | `iou` | `float` | `0.5` | IOU Threshold |
245
+ | `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
246
+ | `verbose` | `bool` | `True` | Display the object tracking results |
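+
+ The optional `set_args` parameters listed above can also be combined to restyle the counter. The snippet below is a minimal sketch of a customized configuration; the colors and thickness values are illustrative assumptions, and the capture/track/write loop from any of the examples above is reused unchanged.
+
+ ```python
+ from ultralytics import YOLO
+ from ultralytics.solutions import object_counter
+
+ model = YOLO("yolov8n.pt")
+
+ counter = object_counter.ObjectCounter()
+ counter.set_args(view_img=True,
+                  reg_pts=[(20, 400), (1080, 400)],
+                  classes_names=model.names,
+                  draw_tracks=True,
+                  line_thickness=3,                 # thicker bounding boxes
+                  region_color=(0, 255, 255),       # counting line/region color
+                  count_txt_color=(255, 255, 255),  # count text color
+                  count_color=(0, 0, 0))            # count text background color
+
+ # Then run the same loop as in the tabs above:
+ # tracks = model.track(im0, persist=True, show=False)
+ # im0 = counter.start_counting(im0, tracks)
+ ```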
docs/en/guides/object-cropping.md ADDED
@@ -0,0 +1,102 @@
1
+ ---
2
+ comments: true
3
+ description: Learn how to isolate and extract specific objects from images and videos using YOLOv8 object cropping.
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Object Cropping, Image Analysis, Video Processing, Data Extraction, Python
5
+ ---
6
+
7
+ # Object Cropping using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is Object Cropping?
10
+
11
+ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves isolating and extracting specific detected objects from an image or video. The YOLOv8 model capabilities are utilized to accurately identify and delineate objects, enabling precise cropping for further analysis or manipulation.
12
+
13
+ ## Advantages of Object Cropping?
14
+
15
+ - **Focused Analysis**: YOLOv8 facilitates targeted object cropping, allowing for in-depth examination or processing of individual items within a scene.
16
+ - **Reduced Data Volume**: By extracting only relevant objects, object cropping helps in minimizing data size, making it efficient for storage, transmission, or subsequent computational tasks.
17
+ - **Enhanced Precision**: YOLOv8's object detection accuracy ensures that the cropped objects maintain their spatial relationships, preserving the integrity of the visual information for detailed analysis.
18
+
19
+ ## Visuals
20
+
21
+ | Airport Luggage |
22
+ |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
23
+ | ![Conveyor Belt at Airport Suitcases Cropping using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/648f46be-f233-4307-a8e5-046eea38d2e4) |
24
+ | Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |
25
+
26
+ !!! Example "Object Cropping using YOLOv8 Example"
27
+
28
+ === "Object Cropping"
29
+
30
+ ```python
31
+ from ultralytics import YOLO
32
+ from ultralytics.utils.plotting import Annotator, colors
33
+ import cv2
34
+ import os
35
+
36
+ model = YOLO("yolov8n.pt")
37
+ names = model.names
38
+
39
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
40
+ assert cap.isOpened(), "Error reading video file"
41
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
42
+
43
+ crop_dir_name = "ultralytics_crop"
44
+ if not os.path.exists(crop_dir_name):
45
+ os.mkdir(crop_dir_name)
46
+
47
+ # Video writer
48
+ video_writer = cv2.VideoWriter("object_cropping_output.avi",
49
+ cv2.VideoWriter_fourcc(*'mp4v'),
50
+ fps, (w, h))
51
+
52
+ idx = 0
53
+ while cap.isOpened():
54
+ success, im0 = cap.read()
55
+ if not success:
56
+ print("Video frame is empty or video processing has been successfully completed.")
57
+ break
58
+
59
+ results = model.predict(im0, show=False)
60
+ boxes = results[0].boxes.xyxy.cpu().tolist()
61
+ clss = results[0].boxes.cls.cpu().tolist()
62
+ annotator = Annotator(im0, line_width=2, example=names)
63
+
64
+ if boxes is not None:
65
+ for box, cls in zip(boxes, clss):
66
+ idx += 1
67
+ annotator.box_label(box, color=colors(int(cls), True), label=names[int(cls)])
68
+
69
+ crop_obj = im0[int(box[1]):int(box[3]), int(box[0]):int(box[2])]
70
+
71
+ cv2.imwrite(os.path.join(crop_dir_name, str(idx)+".png"), crop_obj)
72
+
73
+ cv2.imshow("ultralytics", im0)
74
+ video_writer.write(im0)
75
+
76
+ if cv2.waitKey(1) & 0xFF == ord('q'):
77
+ break
78
+
79
+ cap.release()
80
+ video_writer.release()
81
+ cv2.destroyAllWindows()
82
+ ```
83
+
84
+ ### Arguments `model.predict`
85
+
86
+ | Name | Type | Default | Description |
87
+ |-----------------|----------------|------------------------|----------------------------------------------------------------------------|
88
+ | `source` | `str` | `'ultralytics/assets'` | source directory for images or videos |
89
+ | `conf` | `float` | `0.25` | object confidence threshold for detection |
90
+ | `iou` | `float` | `0.7` | intersection over union (IoU) threshold for NMS |
91
+ | `imgsz` | `int or tuple` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
92
+ | `half` | `bool` | `False` | use half precision (FP16) |
93
+ | `device` | `None or str` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
94
+ | `max_det` | `int` | `300` | maximum number of detections per image |
95
+ | `vid_stride` | `bool` | `False` | video frame-rate stride |
96
+ | `stream_buffer` | `bool` | `False` | buffer all streaming frames (True) or return the most recent frame (False) |
97
+ | `visualize` | `bool` | `False` | visualize model features |
98
+ | `augment` | `bool` | `False` | apply image augmentation to prediction sources |
99
+ | `agnostic_nms` | `bool` | `False` | class-agnostic NMS |
100
+ | `classes` | `list[int]` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
101
+ | `retina_masks` | `bool` | `False` | use high-resolution segmentation masks |
102
+ | `embed` | `list[int]` | `None` | return feature vectors/embeddings from given layers |
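+
+ The `classes` and `conf` arguments above are particularly handy when you only want crops of specific categories. Below is a minimal sketch that crops only suitcases from a single image and names each crop after its class; the image path, confidence value, and the COCO class index (28 for `suitcase`) are illustrative assumptions.
+
+ ```python
+ import os
+
+ import cv2
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")
+ names = model.names
+
+ im0 = cv2.imread("path/to/image.jpg")  # placeholder image path
+ os.makedirs("ultralytics_crop", exist_ok=True)
+
+ # Keep only 'suitcase' detections (COCO index 28) above 50% confidence
+ results = model.predict(im0, classes=[28], conf=0.5, show=False)
+
+ boxes = results[0].boxes.xyxy.cpu().tolist()
+ clss = results[0].boxes.cls.cpu().tolist()
+ for i, (box, cls) in enumerate(zip(boxes, clss)):
+     x1, y1, x2, y2 = map(int, box)
+     cv2.imwrite(os.path.join("ultralytics_crop", f"{names[int(cls)]}_{i}.png"), im0[y1:y2, x1:x2])
+ ```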
docs/en/guides/optimizing-openvino-latency-vs-throughput-modes.md ADDED
@@ -0,0 +1,69 @@
1
+ ---
2
+ comments: true
3
+ description: Learn how to optimize Ultralytics YOLOv8 models with Intel OpenVINO for maximum performance. Discover expert techniques to minimize latency and maximize throughput for real-time object detection applications.
4
+ keywords: Ultralytics, YOLOv8, OpenVINO, optimization, latency, throughput, inference, object detection, deep learning, machine learning, guide, Intel
5
+ ---
6
+
7
+ # Optimizing OpenVINO Inference for Ultralytics YOLO Models: A Comprehensive Guide
8
+
9
+ <img width="1024" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/2b181f68-aa91-4514-ba09-497cc3c83b00" alt="OpenVINO Ecosystem">
10
+
11
+ ## Introduction
12
+
13
+ When deploying deep learning models, particularly those for object detection such as Ultralytics YOLO models, achieving optimal performance is crucial. This guide delves into leveraging Intel's OpenVINO toolkit to optimize inference, focusing on latency and throughput. Whether you're working on consumer-grade applications or large-scale deployments, understanding and applying these optimization strategies will ensure your models run efficiently on various devices.
14
+
15
+ ## Optimizing for Latency
16
+
17
+ Latency optimization is vital for applications requiring immediate response from a single model given a single input, typical in consumer scenarios. The goal is to minimize the delay between input and inference result. However, achieving low latency involves careful consideration, especially when running concurrent inferences or managing multiple models.
18
+
19
+ ### Key Strategies for Latency Optimization:
20
+
21
+ - **Single Inference per Device:** The simplest way to achieve low latency is by limiting to one inference at a time per device. Additional concurrency often leads to increased latency.
22
+ - **Leveraging Sub-Devices:** Devices like multi-socket CPUs or multi-tile GPUs can execute multiple requests with minimal latency increase by utilizing their internal sub-devices.
23
+ - **OpenVINO Performance Hints:** Utilizing OpenVINO's `ov::hint::PerformanceMode::LATENCY` for the `ov::hint::performance_mode` property during model compilation simplifies performance tuning, offering a device-agnostic and future-proof approach (see the sketch after this list).
24
+
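+ As a concrete illustration of the performance-hint strategy above, the following is a minimal sketch of compiling a model with the `LATENCY` hint; the IR path and target device are placeholders.
+
+ ```python
+ import openvino as ov
+ import openvino.properties.hint as hints
+
+ core = ov.Core()
+ model = core.read_model("model.xml")  # placeholder path to an exported OpenVINO IR model
+
+ # Let OpenVINO pick internal settings that minimize single-request latency
+ config = {hints.performance_mode: hints.PerformanceMode.LATENCY}
+ compiled_model = core.compile_model(model, "CPU", config)
+ ```
+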
25
+ ### Managing First-Inference Latency:
26
+
27
+ - **Model Caching:** To mitigate model load and compile times impacting latency, use model caching where possible (see the sketch after this list). For scenarios where caching isn't viable, CPUs generally offer the fastest model load times.
28
+ - **Model Mapping vs. Reading:** To reduce load times, OpenVINO maps models into memory instead of reading them from disk. However, if the model is on a removable or network drive, consider using `ov::enable_mmap(false)` to switch back to reading.
29
+ - **AUTO Device Selection:** This mode begins inference on the CPU, shifting to an accelerator once ready, seamlessly reducing first-inference latency.
30
+
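+ The model-caching suggestion above can be enabled with a single property on the `Core` object. The sketch below is a minimal illustration; the cache directory name and IR path are assumptions, and the benefit depends on whether the target device supports caching.
+
+ ```python
+ import openvino as ov
+ import openvino.properties as props
+
+ core = ov.Core()
+
+ # Cache compiled blobs so later runs skip most of the compile time
+ core.set_property({props.cache_dir: "ov_cache"})
+
+ model = core.read_model("model.xml")  # placeholder IR path
+ compiled_model = core.compile_model(model, "GPU")
+ ```
+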
31
+ ## Optimizing for Throughput
32
+
33
+ Throughput optimization is crucial for scenarios serving numerous inference requests simultaneously, maximizing resource utilization without significantly sacrificing individual request performance.
34
+
35
+ ### Approaches to Throughput Optimization:
36
+
37
+ 1. **OpenVINO Performance Hints:** A high-level, future-proof method to enhance throughput across devices using performance hints.
38
+
39
+ ```python
40
+ import openvino.properties as props
41
+ import openvino.properties.hint as hints
42
+
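+ # "core" is an ov.Core() instance and "model" a model read with core.read_model(), as in the latency sketch above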
43
+ config = {hints.performance_mode: hints.PerformanceMode.THROUGHPUT}
44
+ compiled_model = core.compile_model(model, "GPU", config)
45
+ ```
46
+
47
+ 2. **Explicit Batching and Streams:** A more granular approach involving explicit batching and the use of streams for advanced performance tuning.
48
+
49
+ ### Designing Throughput-Oriented Applications:
50
+
51
+ To maximize throughput, applications should:
52
+
53
+ - Process inputs in parallel, making full use of the device's capabilities.
54
+ - Decompose data flow into concurrent inference requests, scheduled for parallel execution.
55
+ - Utilize the Async API with callbacks to maintain efficiency and avoid device starvation (see the sketch after this list).
56
+
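+ The asynchronous pattern described above can be expressed with OpenVINO's `AsyncInferQueue`. The snippet below is a minimal sketch that assumes a placeholder IR path and random dummy frames; a real application would feed decoded video frames instead.
+
+ ```python
+ import numpy as np
+ import openvino as ov
+
+ core = ov.Core()
+ compiled_model = core.compile_model("model.xml", "CPU")  # placeholder IR path
+
+ results = []
+
+ def on_done(request, frame_id):
+     # Collect each finished request's output instead of blocking per frame
+     results.append((frame_id, request.get_output_tensor(0).data.copy()))
+
+ # With no explicit size, OpenVINO chooses an optimal number of parallel requests
+ infer_queue = ov.AsyncInferQueue(compiled_model)
+ infer_queue.set_callback(on_done)
+
+ frames = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(8)]
+ for i, frame in enumerate(frames):
+     infer_queue.start_async({0: frame}, userdata=i)
+
+ infer_queue.wait_all()
+ ```
+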
57
+ ### Multi-Device Execution:
58
+
59
+ OpenVINO's multi-device mode simplifies scaling throughput by automatically balancing inference requests across devices without requiring application-level device management.
60
+
61
+ ## Conclusion
62
+
63
+ Optimizing Ultralytics YOLO models for latency and throughput with OpenVINO can significantly enhance your application's performance. By carefully applying the strategies outlined in this guide, developers can ensure their models run efficiently, meeting the demands of various deployment scenarios. Remember, the choice between optimizing for latency or throughput depends on your specific application needs and the characteristics of the deployment environment.
64
+
65
+ For more detailed technical information and the latest updates, refer to the [OpenVINO documentation](https://docs.openvino.ai/latest/index.html) and [Ultralytics YOLO repository](https://github.com/ultralytics/ultralytics). These resources provide in-depth guides, tutorials, and community support to help you get the most out of your deep learning models.
66
+
67
+ ---
68
+
69
+ Ensuring your models achieve optimal performance is not just about tweaking configurations; it's about understanding your application's needs and making informed decisions. Whether you're optimizing for real-time responses or maximizing throughput for large-scale processing, the combination of Ultralytics YOLO models and OpenVINO offers a powerful toolkit for developers to deploy high-performance AI solutions.
docs/en/guides/raspberry-pi.md ADDED
@@ -0,0 +1,196 @@
1
+ ---
2
+ comments: true
3
+ description: Quick start guide to setting up YOLO on a Raspberry Pi with a Pi Camera using the libcamera stack. Detailed comparison between Raspberry Pi 3, 4 and 5 models.
4
+ keywords: Ultralytics, YOLO, Raspberry Pi, Pi Camera, libcamera, quick start guide, Raspberry Pi 4 vs Raspberry Pi 5, YOLO on Raspberry Pi, hardware setup, machine learning, AI
5
+ ---
6
+
7
+ # Quick Start Guide: Raspberry Pi and Pi Camera with YOLOv5 and YOLOv8
8
+
9
+ This comprehensive guide aims to expedite your journey with YOLO object detection models on a [Raspberry Pi](https://www.raspberrypi.com/) using a [Pi Camera](https://www.raspberrypi.com/products/camera-module-v2/). Whether you're a student, hobbyist, or a professional, this guide is designed to get you up and running in less than 30 minutes. The instructions here are rigorously tested to minimize setup issues, allowing you to focus on utilizing YOLO for your specific projects.
10
+
11
+ <p align="center">
12
+ <br>
13
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/yul4gq_LrOI"
14
+ title="Introducing Raspberry Pi 5" frameborder="0"
15
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
16
+ allowfullscreen>
17
+ </iframe>
18
+ <br>
19
+ <strong>Watch:</strong> Raspberry Pi 5 updates and improvements.
20
+ </p>
21
+
22
+ ## Prerequisites
23
+
24
+ - Raspberry Pi 3, 4 or 5
25
+ - Pi Camera
26
+ - 64-bit Raspberry Pi Operating System
27
+
28
+ Connect the Pi Camera to your Raspberry Pi via a CSI cable and install the 64-bit Raspberry Pi Operating System. Verify your camera with the following command:
29
+
30
+ ```bash
31
+ libcamera-hello
32
+ ```
33
+
34
+ You should see a video feed from your camera.
35
+
36
+ ## Choose Your YOLO Version: YOLOv5 or YOLOv8
37
+
38
+ This guide offers you the flexibility to start with either [YOLOv5](https://github.com/ultralytics/yolov5) or [YOLOv8](https://github.com/ultralytics/ultralytics). Both versions have their unique advantages and use-cases. The choice is yours, but remember, the guide's aim is not just quick setup but also a robust foundation for your future work in object detection.
39
+
40
+ ## Hardware Specifics: At a Glance
41
+
42
+ To assist you in making an informed hardware decision, we've summarized the key hardware specifics of Raspberry Pi 3, 4, and 5 in the table below:
43
+
44
+ | Feature | Raspberry Pi 3 | Raspberry Pi 4 | Raspberry Pi 5 |
45
+ |----------------------------|------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|----------------------------------------------------------------------|
46
+ | **CPU** | 1.2GHz Quad-Core ARM Cortex-A53 | 1.5GHz Quad-core 64-bit ARM Cortex-A72 | 2.4GHz Quad-core 64-bit Arm Cortex-A76 |
47
+ | **RAM** | 1GB LPDDR2 | 2GB, 4GB or 8GB LPDDR4 | *Details not yet available* |
48
+ | **USB Ports** | 4 x USB 2.0 | 2 x USB 2.0, 2 x USB 3.0 | 2 x USB 3.0, 2 x USB 2.0 |
49
+ | **Network** | Ethernet & Wi-Fi 802.11n | Gigabit Ethernet & Wi-Fi 802.11ac | Gigabit Ethernet with PoE+ support, Dual-band 802.11ac Wi-Fi® |
50
+ | **Performance** | Slower, may require lighter YOLO models | Faster, can run complex YOLO models | *Details not yet available* |
51
+ | **Power Requirement** | 2.5A power supply | 3.0A USB-C power supply | *Details not yet available* |
52
+ | **Official Documentation** | [Link](https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2837/README.md) | [Link](https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2711/README.md) | [Link](https://www.raspberrypi.com/news/introducing-raspberry-pi-5/) |
53
+
54
+ Please make sure to follow the instructions specific to your Raspberry Pi model to ensure a smooth setup process.
55
+
56
+ ## Quick Start with YOLOv5
57
+
58
+ This section outlines how to set up YOLOv5 on a Raspberry Pi with a Pi Camera. These steps are designed to be compatible with the libcamera camera stack introduced in Raspberry Pi OS Bullseye.
59
+
60
+ ### Install Necessary Packages
61
+
62
+ 1. Update the Raspberry Pi:
63
+
64
+ ```bash
65
+ sudo apt-get update
66
+ sudo apt-get upgrade -y
67
+ sudo apt-get autoremove -y
68
+ ```
69
+
70
+ 2. Clone the YOLOv5 repository:
71
+
72
+ ```bash
73
+ cd ~
74
+ git clone https://github.com/ultralytics/yolov5.git
75
+ ```
76
+
77
+ 3. Install the required dependencies:
78
+
79
+ ```bash
80
+ cd ~/yolov5
81
+ pip3 install -r requirements.txt
82
+ ```
83
+
84
+ 4. For Raspberry Pi 3, install compatible versions of PyTorch and Torchvision (skip for Raspberry Pi 4):
85
+
86
+ ```bash
87
+ pip3 uninstall torch torchvision
88
+ pip3 install torch==1.11.0 torchvision==0.12.0
89
+ ```
90
+
91
+ ### Modify `detect.py`
92
+
93
+ To enable TCP streams via SSH or the CLI, minor modifications are needed in `detect.py`.
94
+
95
+ 1. Open `detect.py`:
96
+
97
+ ```bash
98
+ sudo nano ~/yolov5/detect.py
99
+ ```
100
+
101
+ 2. Find and modify the `is_url` line to accept TCP streams:
102
+
103
+ ```python
104
+ is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://', 'tcp://'))
105
+ ```
106
+
107
+ 3. Comment out the `view_img` line:
108
+
109
+ ```python
110
+ # view_img = check_imshow(warn=True)
111
+ ```
112
+
113
+ 4. Save and exit:
114
+
115
+ ```bash
116
+ CTRL + O -> ENTER -> CTRL + X
117
+ ```
118
+
119
+ ### Initiate TCP Stream with Libcamera
120
+
121
+ 1. Start the TCP stream:
122
+
123
+ ```bash
124
+ libcamera-vid -n -t 0 --width 1280 --height 960 --framerate 1 --inline --listen -o tcp://127.0.0.1:8888
125
+ ```
126
+
127
+ Keep this terminal session running for the next steps.
128
+
129
+ ### Perform YOLOv5 Inference
130
+
131
+ 1. Run the YOLOv5 detection:
132
+
133
+ ```bash
134
+ cd ~/yolov5
135
+ python3 detect.py --source=tcp://127.0.0.1:8888
136
+ ```
137
+
138
+ ## Quick Start with YOLOv8
139
+
140
+ Follow this section if you are interested in setting up YOLOv8 instead. The steps are quite similar but are tailored for YOLOv8's specific needs.
141
+
142
+ ### Install Necessary Packages
143
+
144
+ 1. Update the Raspberry Pi:
145
+
146
+ ```bash
147
+ sudo apt-get update
148
+ sudo apt-get upgrade -y
149
+ sudo apt-get autoremove -y
150
+ ```
151
+
152
+ 2. Install the `ultralytics` Python package:
153
+
154
+ ```bash
155
+ pip3 install ultralytics
156
+ ```
157
+
158
+ 3. Reboot:
159
+
160
+ ```bash
161
+ sudo reboot
162
+ ```
163
+
164
+ ### Initiate TCP Stream with Libcamera
165
+
166
+ 1. Start the TCP stream:
167
+
168
+ ```bash
169
+ libcamera-vid -n -t 0 --width 1280 --height 960 --framerate 1 --inline --listen -o tcp://127.0.0.1:8888
170
+ ```
171
+
172
+ ### Perform YOLOv8 Inference
173
+
174
+ To perform inference with YOLOv8, you can use the following Python code snippet:
175
+
176
+ ```python
177
+ from ultralytics import YOLO
178
+
179
+ model = YOLO('yolov8n.pt')
180
+ results = model('tcp://127.0.0.1:8888', stream=True)
181
+
182
+ while True:
183
+ for result in results:
184
+ boxes = result.boxes
185
+ probs = result.probs
186
+ ```
187
+
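+ The snippet above only collects boxes and class probabilities. If you are working over SSH without a display, a simple way to confirm detections is to print them; the sketch below is a minimal, optional variant of the same loop.
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO('yolov8n.pt')
+
+ # stream=True returns a generator that yields one result per decoded frame
+ for result in model('tcp://127.0.0.1:8888', stream=True):
+     for box in result.boxes:
+         cls_id = int(box.cls[0])
+         conf = float(box.conf[0])
+         print(f'{result.names[cls_id]}: {conf:.2f}')
+ ```
+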
188
+ ## Next Steps
189
+
190
+ Congratulations on successfully setting up YOLO on your Raspberry Pi! For further learning and support, visit [Ultralytics](https://ultralytics.com/) and [Kashmir World Foundation](https://www.kashmirworldfoundation.org/).
191
+
192
+ ## Acknowledgements and Citations
193
+
194
+ This guide was initially created by Daan Eeltink for Kashmir World Foundation, an organization dedicated to the use of YOLO for the conservation of endangered species. We acknowledge their pioneering work and educational focus in the realm of object detection technologies.
195
+
196
+ For more information about Kashmir World Foundation's activities, you can visit their [website](https://www.kashmirworldfoundation.org/).
docs/en/guides/region-counting.md ADDED
@@ -0,0 +1,86 @@
1
+ ---
2
+ comments: true
3
+ description: Object Counting in Different Region using Ultralytics YOLOv8
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Object Counting, Object Tracking, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # Object Counting in Different Regions using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is Object Counting in Regions?
10
+
11
+ [Object counting](https://docs.ultralytics.com/guides/object-counting/) in regions with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) involves precisely determining the number of objects within specified areas using advanced computer vision. This approach is valuable for optimizing processes, enhancing security, and improving efficiency in various applications.
12
+
13
+ <p align="center">
14
+ <br>
15
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/okItf1iHlV8"
16
+ title="YouTube video player" frameborder="0"
17
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
18
+ allowfullscreen>
19
+ </iframe>
20
+ <br>
21
+ <strong>Watch:</strong> Ultralytics YOLOv8 Object Counting in Multiple & Movable Regions
22
+ </p>
23
+
24
+ ## Advantages of Object Counting in Regions?
25
+
26
+ - **Precision and Accuracy:** Object counting in regions with advanced computer vision ensures precise and accurate counts, minimizing errors often associated with manual counting.
27
+ - **Efficiency Improvement:** Automated object counting enhances operational efficiency, providing real-time results and streamlining processes across different applications.
28
+ - **Versatility and Application:** The versatility of object counting in regions makes it applicable across various domains, from manufacturing and surveillance to traffic monitoring, contributing to its widespread utility and effectiveness.
29
+
30
+ ## Real World Applications
31
+
32
+ | Retail | Market Streets |
33
+ |:------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
34
+ | ![People Counting in Different Region using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/5ab3bbd7-fd12-4849-928e-5f294d6c3fcf) | ![Crowd Counting in Different Region using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/e7c1aea7-474d-4d78-8d48-b50854ffe1ca) |
35
+ | People Counting in Different Region using Ultralytics YOLOv8 | Crowd Counting in Different Region using Ultralytics YOLOv8 |
36
+
37
+ ## Steps to Run
38
+
39
+ ### Step 1: Install Required Libraries
40
+
41
+ Begin by cloning the Ultralytics repository, installing dependencies, and navigating to the local directory using the provided commands in Step 2.
42
+
43
+ ```bash
44
+ # Clone Ultralytics repo
45
+ git clone https://github.com/ultralytics/ultralytics
46
+
47
+ # Navigate to the local directory
48
+ cd ultralytics/examples/YOLOv8-Region-Counter
49
+ ```
50
+
51
+ ### Step 2: Run Region Counting Using Ultralytics YOLOv8
52
+
53
+ Execute the following basic commands for inference.
54
+
55
+ ???+ tip "Region is Movable"
56
+
57
+ During video playback, you can interactively move the region within the video by clicking and dragging using the left mouse button.
58
+
59
+ ```bash
60
+ # Save results
61
+ python yolov8_region_counter.py --source "path/to/video.mp4" --save-img
62
+
63
+ # Run model on CPU
64
+ python yolov8_region_counter.py --source "path/to/video.mp4" --device cpu
65
+
66
+ # Change model file
67
+ python yolov8_region_counter.py --source "path/to/video.mp4" --weights "path/to/model.pt"
68
+
69
+ # Detect specific classes (e.g., first and third classes)
70
+ python yolov8_region_counter.py --source "path/to/video.mp4" --classes 0 2
71
+
72
+ # View results without saving
73
+ python yolov8_region_counter.py --source "path/to/video.mp4" --view-img
74
+ ```
75
+
76
+ ### Optional Arguments
77
+
78
+ | Name | Type | Default | Description |
79
+ |----------------------|--------|--------------|--------------------------------------------|
80
+ | `--source` | `str` | `None` | Path to video file, for webcam 0 |
81
+ | `--line_thickness` | `int` | `2` | Bounding Box thickness |
82
+ | `--save-img` | `bool` | `False` | Save the predicted video/image |
83
+ | `--weights` | `str` | `yolov8n.pt` | Weights file path |
84
+ | `--classes` | `list` | `None` | Detect specific classes i.e. --classes 0 2 |
85
+ | `--region-thickness` | `int` | `2` | Region Box thickness |
86
+ | `--track-thickness` | `int` | `2` | Tracking line thickness |
docs/en/guides/sahi-tiled-inference.md ADDED
@@ -0,0 +1,185 @@
1
+ ---
2
+ comments: true
3
+ description: A comprehensive guide on how to use YOLOv8 with SAHI for standard and sliced inference in object detection tasks.
4
+ keywords: YOLOv8, SAHI, Sliced Inference, Object Detection, Ultralytics, Large Scale Image Analysis, High-Resolution Imagery
5
+ ---
6
+
7
+ # Ultralytics Docs: Using YOLOv8 with SAHI for Sliced Inference
8
+
9
+ Welcome to the Ultralytics documentation on how to use YOLOv8 with [SAHI](https://github.com/obss/sahi) (Slicing Aided Hyper Inference). This comprehensive guide aims to furnish you with all the essential knowledge you'll need to implement SAHI alongside YOLOv8. We'll deep-dive into what SAHI is, why sliced inference is critical for large-scale applications, and how to integrate these functionalities with YOLOv8 for enhanced object detection performance.
10
+
11
+ <p align="center">
12
+ <img width="1024" src="https://raw.githubusercontent.com/obss/sahi/main/resources/sliced_inference.gif" alt="SAHI Sliced Inference Overview">
13
+ </p>
14
+
15
+ ## Introduction to SAHI
16
+
17
+ SAHI (Slicing Aided Hyper Inference) is an innovative library designed to optimize object detection algorithms for large-scale and high-resolution imagery. Its core functionality lies in partitioning images into manageable slices, running object detection on each slice, and then stitching the results back together. SAHI is compatible with a range of object detection models, including the YOLO series, thereby offering flexibility while ensuring optimized use of computational resources.
18
+
19
+ ### Key Features of SAHI
20
+
21
+ - **Seamless Integration**: SAHI integrates effortlessly with YOLO models, meaning you can start slicing and detecting without a lot of code modification.
22
+ - **Resource Efficiency**: By breaking down large images into smaller parts, SAHI optimizes the memory usage, allowing you to run high-quality detection on hardware with limited resources.
23
+ - **High Accuracy**: SAHI maintains the detection accuracy by employing smart algorithms to merge overlapping detection boxes during the stitching process.
24
+
25
+ ## What is Sliced Inference?
26
+
27
+ Sliced Inference refers to the practice of subdividing a large or high-resolution image into smaller segments (slices), conducting object detection on these slices, and then recompiling the slices to reconstruct the object locations on the original image. This technique is invaluable in scenarios where computational resources are limited or when working with extremely high-resolution images that could otherwise lead to memory issues.
28
+
29
+ ### Benefits of Sliced Inference
30
+
31
+ - **Reduced Computational Burden**: Smaller image slices are faster to process, and they consume less memory, enabling smoother operation on lower-end hardware.
32
+
33
+ - **Preserved Detection Quality**: Since each slice is treated independently, there is no reduction in the quality of object detection, provided the slices are large enough to capture the objects of interest.
34
+
35
+ - **Enhanced Scalability**: The technique allows for object detection to be more easily scaled across different sizes and resolutions of images, making it ideal for a wide range of applications from satellite imagery to medical diagnostics.
36
+
37
+ <table border="0">
38
+ <tr>
39
+ <th>YOLOv8 without SAHI</th>
40
+ <th>YOLOv8 with SAHI</th>
41
+ </tr>
42
+ <tr>
43
+ <td><img src="https://user-images.githubusercontent.com/26833433/266123241-260a9740-5998-4e9a-ad04-b39b7767e731.png" alt="YOLOv8 without SAHI" width="640"></td>
44
+ <td><img src="https://user-images.githubusercontent.com/26833433/266123245-55f696ad-ec74-4e71-9155-c211d693bb69.png" alt="YOLOv8 with SAHI" width="640"></td>
45
+ </tr>
46
+ </table>
47
+
48
+ ## Installation and Preparation
49
+
50
+ ### Installation
51
+
52
+ To get started, install the latest versions of SAHI and Ultralytics:
53
+
54
+ ```bash
55
+ pip install -U ultralytics sahi
56
+ ```
57
+
58
+ ### Import Modules and Download Resources
59
+
60
+ Here's how to import the necessary modules and download a YOLOv8 model and some test images:
61
+
62
+ ```python
63
+ from sahi.utils.yolov8 import download_yolov8s_model
64
+ from sahi import AutoDetectionModel
65
+ from sahi.utils.cv import read_image
66
+ from sahi.utils.file import download_from_url
67
+ from sahi.predict import get_prediction, get_sliced_prediction, predict
68
+ from pathlib import Path
69
+ from IPython.display import Image
70
+
71
+ # Download YOLOv8 model
72
+ yolov8_model_path = "models/yolov8s.pt"
73
+ download_yolov8s_model(yolov8_model_path)
74
+
75
+ # Download test images
76
+ download_from_url('https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/small-vehicles1.jpeg', 'demo_data/small-vehicles1.jpeg')
77
+ download_from_url('https://raw.githubusercontent.com/obss/sahi/main/demo/demo_data/terrain2.png', 'demo_data/terrain2.png')
78
+ ```
79
+
80
+ ## Standard Inference with YOLOv8
81
+
82
+ ### Instantiate the Model
83
+
84
+ You can instantiate a YOLOv8 model for object detection like this:
85
+
86
+ ```python
87
+ detection_model = AutoDetectionModel.from_pretrained(
88
+ model_type='yolov8',
89
+ model_path=yolov8_model_path,
90
+ confidence_threshold=0.3,
91
+ device="cpu", # or 'cuda:0'
92
+ )
93
+ ```
94
+
95
+ ### Perform Standard Prediction
96
+
97
+ Perform standard inference using an image path or a numpy image.
98
+
99
+ ```python
100
+ # With an image path
101
+ result = get_prediction("demo_data/small-vehicles1.jpeg", detection_model)
102
+
103
+ # With a numpy image
104
+ result = get_prediction(read_image("demo_data/small-vehicles1.jpeg"), detection_model)
105
+ ```
106
+
107
+ ### Visualize Results
108
+
109
+ Export and visualize the predicted bounding boxes and masks:
110
+
111
+ ```python
112
+ result.export_visuals(export_dir="demo_data/")
113
+ Image("demo_data/prediction_visual.png")
114
+ ```
115
+
116
+ ## Sliced Inference with YOLOv8
117
+
118
+ Perform sliced inference by specifying the slice dimensions and overlap ratios:
119
+
120
+ ```python
121
+ result = get_sliced_prediction(
122
+ "demo_data/small-vehicles1.jpeg",
123
+ detection_model,
124
+ slice_height=256,
125
+ slice_width=256,
126
+ overlap_height_ratio=0.2,
127
+ overlap_width_ratio=0.2
128
+ )
129
+ ```
130
+
131
+ ## Handling Prediction Results
132
+
133
+ SAHI provides a `PredictionResult` object, which can be converted into various annotation formats:
134
+
135
+ ```python
136
+ # Access the object prediction list
137
+ object_prediction_list = result.object_prediction_list
138
+
139
+ # Convert to COCO annotation, COCO prediction, imantics, and fiftyone formats
140
+ result.to_coco_annotations()[:3]
141
+ result.to_coco_predictions(image_id=1)[:3]
142
+ result.to_imantics_annotations()[:3]
143
+ result.to_fiftyone_detections()[:3]
144
+ ```
145
+
146
+ ## Batch Prediction
147
+
148
+ For batch prediction on a directory of images:
149
+
150
+ ```python
151
+ predict(
152
+ model_type="yolov8",
153
+ model_path="path/to/yolov8n.pt",
154
+ model_device="cpu", # or 'cuda:0'
155
+ model_confidence_threshold=0.4,
156
+ source="path/to/dir",
157
+ slice_height=256,
158
+ slice_width=256,
159
+ overlap_height_ratio=0.2,
160
+ overlap_width_ratio=0.2,
161
+ )
162
+ ```
163
+
164
+ That's it! Now you're equipped to use YOLOv8 with SAHI for both standard and sliced inference.
165
+
166
+ ## Citations and Acknowledgments
167
+
168
+ If you use SAHI in your research or development work, please cite the original SAHI paper and acknowledge the authors:
169
+
170
+ !!! Quote ""
171
+
172
+ === "BibTeX"
173
+
174
+ ```bibtex
175
+ @article{akyon2022sahi,
176
+ title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
177
+ author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
178
+ journal={2022 IEEE International Conference on Image Processing (ICIP)},
179
+ doi={10.1109/ICIP46576.2022.9897990},
180
+ pages={966-970},
181
+ year={2022}
182
+ }
183
+ ```
184
+
185
+ We extend our thanks to the SAHI research group for creating and maintaining this invaluable resource for the computer vision community. For more information about SAHI and its creators, visit the [SAHI GitHub repository](https://github.com/obss/sahi).
docs/en/guides/security-alarm-system.md ADDED
@@ -0,0 +1,166 @@
1
+ ---
2
+ comments: true
3
+ description: Security Alarm System Project Using Ultralytics YOLOv8. Learn how to implement a security alarm system using Ultralytics YOLOv8.
4
+ keywords: Object Detection, Security Alarm, Object Tracking, YOLOv8, Computer Vision Projects
5
+ ---
6
+
7
+ # Security Alarm System Project Using Ultralytics YOLOv8
8
+
9
+ <img src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/f4e4a613-fb25-4bd0-9ec5-78352ddb62bd" alt="Security Alarm System">
10
+
11
+ The Security Alarm System Project utilizing Ultralytics YOLOv8 integrates advanced computer vision capabilities to enhance security measures. YOLOv8, developed by Ultralytics, provides real-time object detection, allowing the system to identify and respond to potential security threats promptly. This project offers several advantages:
12
+
13
+ - **Real-time Detection:** YOLOv8's efficiency enables the Security Alarm System to detect and respond to security incidents in real-time, minimizing response time.
14
+ - **Accuracy:** YOLOv8 is known for its accuracy in object detection, reducing false positives and enhancing the reliability of the security alarm system.
15
+ - **Integration Capabilities:** The project can be seamlessly integrated with existing security infrastructure, providing an upgraded layer of intelligent surveillance.
16
+
17
+ <p align="center">
18
+ <br>
19
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/_1CmwUzoxY4"
20
+ title="YouTube video player" frameborder="0"
21
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
22
+ allowfullscreen>
23
+ </iframe>
24
+ <br>
25
+ <strong>Watch:</strong> Security Alarm System Project with Ultralytics YOLOv8 Object Detection
26
+ </p>
27
+
28
+ ### Code
29
+
30
+ #### Import Libraries
31
+
32
+ ```python
33
+ import torch
34
+ import numpy as np
35
+ import cv2
36
+ from time import time
37
+ from ultralytics import YOLO
38
+ from ultralytics.utils.plotting import Annotator, colors
39
+ import smtplib
40
+ from email.mime.multipart import MIMEMultipart
41
+ from email.mime.text import MIMEText
42
+ ```
43
+
44
+ #### Set up the parameters of the message
45
+
46
+ ???+ tip "Note"
47
+
48
+ App Password Generation is necessary
49
+
50
+ - Navigate to [App Password Generator](https://myaccount.google.com/apppasswords), designate an app name such as "security project," and obtain a 16-digit password. Copy this password and paste it into the designated password field as instructed.
51
+
52
+ ```python
53
+ password = ""
54
+ from_email = "" # must match the email used to generate the password
55
+ to_email = "" # receiver email
56
+ ```
57
+
58
+ #### Server creation and authentication
59
+
60
+ ```python
61
+ server = smtplib.SMTP('smtp.gmail.com', 587)
62
+ server.starttls()
63
+ server.login(from_email, password)
64
+ ```
65
+
66
+ #### Email Send Function
67
+
68
+ ```python
69
+ def send_email(to_email, from_email, object_detected=1):
70
+ message = MIMEMultipart()
71
+ message['From'] = from_email
72
+ message['To'] = to_email
73
+ message['Subject'] = "Security Alert"
74
+ # Add in the message body
75
+ message_body = f'ALERT - {object_detected} objects have been detected!!'
76
+
77
+ message.attach(MIMEText(message_body, 'plain'))
78
+ server.sendmail(from_email, to_email, message.as_string())
79
+ ```
80
+
81
+ #### Object Detection and Alert Sender
82
+
83
+ ```python
84
+ class ObjectDetection:
85
+ def __init__(self, capture_index):
86
+ # default parameters
87
+ self.capture_index = capture_index
88
+ self.email_sent = False
89
+
90
+ # model information
91
+ self.model = YOLO("yolov8n.pt")
92
+
93
+ # visual information
94
+ self.annotator = None
95
+ self.start_time = 0
96
+ self.end_time = 0
97
+
98
+ # device information
99
+ self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
100
+
101
+ def predict(self, im0):
102
+ results = self.model(im0)
103
+ return results
104
+
105
+ def display_fps(self, im0):
106
+ self.end_time = time()
107
+ fps = 1 / np.round(self.end_time - self.start_time, 2)
108
+ text = f'FPS: {int(fps)}'
109
+ text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 1.0, 2)[0]
110
+ gap = 10
111
+ cv2.rectangle(im0, (20 - gap, 70 - text_size[1] - gap), (20 + text_size[0] + gap, 70 + gap), (255, 255, 255), -1)
112
+ cv2.putText(im0, text, (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)
113
+
114
+ def plot_bboxes(self, results, im0):
115
+ class_ids = []
116
+ self.annotator = Annotator(im0, 3, results[0].names)
117
+ boxes = results[0].boxes.xyxy.cpu()
118
+ clss = results[0].boxes.cls.cpu().tolist()
119
+ names = results[0].names
120
+ for box, cls in zip(boxes, clss):
121
+ class_ids.append(cls)
122
+ self.annotator.box_label(box, label=names[int(cls)], color=colors(int(cls), True))
123
+ return im0, class_ids
124
+
125
+ def __call__(self):
126
+ cap = cv2.VideoCapture(self.capture_index)
127
+ assert cap.isOpened()
128
+ cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
129
+ cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
130
+ frame_count = 0
131
+ while True:
132
+ self.start_time = time()
133
+ ret, im0 = cap.read()
134
+ assert ret
135
+ results = self.predict(im0)
136
+ im0, class_ids = self.plot_bboxes(results, im0)
137
+
138
+ if len(class_ids) > 0: # Only send email If not sent before
139
+ if not self.email_sent:
140
+ send_email(to_email, from_email, len(class_ids))
141
+ self.email_sent = True
142
+ else:
143
+ self.email_sent = False
144
+
145
+ self.display_fps(im0)
146
+ cv2.imshow('YOLOv8 Detection', im0)
147
+ frame_count += 1
148
+ if cv2.waitKey(5) & 0xFF == 27:
149
+ break
150
+ cap.release()
151
+ cv2.destroyAllWindows()
152
+ server.quit()
153
+ ```
154
+
155
+ #### Call the Object Detection class and Run the Inference
156
+
157
+ ```python
158
+ detector = ObjectDetection(capture_index=0)
159
+ detector()
160
+ ```
161
+
162
+ That's it! When you execute the code, you'll receive a single notification to your email if any object is detected. The notification is sent immediately, not repeatedly. However, feel free to customize the code to suit your project requirements.
163
+
164
+ #### Email Received Sample
165
+
166
+ <img width="256" src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/db79ccc6-aabd-4566-a825-b34e679c90f9" alt="Email Received Sample">
docs/en/guides/speed-estimation.md ADDED
@@ -0,0 +1,110 @@
1
+ ---
2
+ comments: true
3
+ description: Speed Estimation Using Ultralytics YOLOv8
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Speed Estimation, Object Tracking, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # Speed Estimation using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is Speed Estimation?
10
+
11
+ Speed estimation is the process of calculating the rate of movement of an object within a given context, often employed in computer vision applications. Using [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) you can now calculate the speed of object using [object tracking](https://docs.ultralytics.com/modes/track/) alongside distance and time data, crucial for tasks like traffic and surveillance. The accuracy of speed estimation directly influences the efficiency and reliability of various applications, making it a key component in the advancement of intelligent systems and real-time decision-making processes.
12
+
13
+ <p align="center">
14
+ <br>
15
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/rCggzXRRSRo"
16
+ title="YouTube video player" frameborder="0"
17
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
18
+ allowfullscreen>
19
+ </iframe>
20
+ <br>
21
+ <strong>Watch:</strong> Speed Estimation using Ultralytics YOLOv8
22
+ </p>
23
+
24
+ ## Advantages of Speed Estimation?
25
+
26
+ - **Efficient Traffic Control:** Accurate speed estimation aids in managing traffic flow, enhancing safety, and reducing congestion on roadways.
27
+ - **Precise Autonomous Navigation:** In autonomous systems like self-driving cars, reliable speed estimation ensures safe and accurate vehicle navigation.
28
+ - **Enhanced Surveillance Security:** Speed estimation in surveillance analytics helps identify unusual behaviors or potential threats, improving the effectiveness of security measures.
29
+
30
+ ## Real World Applications
31
+
32
+ | Transportation | Transportation |
33
+ |:-------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------:|
34
+ | ![Speed Estimation on Road using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/c8a0fd4a-d394-436d-8de3-d5b754755fc7) | ![Speed Estimation on Bridge using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cee10e02-b268-4304-b73a-5b9cb42da669) |
35
+ | Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |
36
+
37
+ !!! Example "Speed Estimation using YOLOv8 Example"
38
+
39
+ === "Speed Estimation"
40
+
41
+ ```python
42
+ from ultralytics import YOLO
43
+ from ultralytics.solutions import speed_estimation
44
+ import cv2
45
+
46
+ model = YOLO("yolov8n.pt")
47
+ names = model.model.names
48
+
49
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
50
+ assert cap.isOpened(), "Error reading video file"
51
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
52
+
53
+ # Video writer
54
+ video_writer = cv2.VideoWriter("speed_estimation.avi",
55
+ cv2.VideoWriter_fourcc(*'mp4v'),
56
+ fps,
57
+ (w, h))
58
+
59
+ line_pts = [(0, 360), (1280, 360)]
60
+
61
+ # Init speed-estimation obj
62
+ speed_obj = speed_estimation.SpeedEstimator()
63
+ speed_obj.set_args(reg_pts=line_pts,
64
+ names=names,
65
+ view_img=True)
66
+
67
+ while cap.isOpened():
68
+
69
+     success, im0 = cap.read()
70
+     if not success:
71
+         print("Video frame is empty or video processing has been successfully completed.")
72
+         break
73
+
74
+     tracks = model.track(im0, persist=True, show=False)
75
+
76
+     im0 = speed_obj.estimate_speed(im0, tracks)
77
+     video_writer.write(im0)
78
+
79
+ cap.release()
80
+ video_writer.release()
81
+ cv2.destroyAllWindows()
82
+
83
+ ```
84
+
85
+ ???+ warning "Speed is an Estimate"
86
+
87
+ Speed will be an estimate and may not be completely accurate. Additionally, the estimation can vary depending on GPU speed.
88
+
89
+ ### Optional Arguments `set_args`
90
+
91
+ | Name | Type | Default | Description |
92
+ |--------------------|--------|----------------------------|---------------------------------------------------|
93
+ | `reg_pts` | `list` | `[(20, 400), (1260, 400)]` | Points defining the Region Area |
94
+ | `names` | `dict` | `None` | Classes names |
95
+ | `view_img` | `bool` | `False` | Display frames with counts |
96
+ | `line_thickness` | `int` | `2` | Increase bounding boxes thickness |
97
+ | `region_thickness` | `int` | `5` | Thickness for object counter region or line |
98
+ | `spdl_dist_thresh` | `int` | `10` | Euclidean Distance threshold for speed check line |
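+
+ For reference, here is a minimal sketch of how these optional arguments might be combined; the values are illustrative and should be tuned to your own camera view:
+
+ ```python
+ from ultralytics import YOLO
+ from ultralytics.solutions import speed_estimation
+
+ model = YOLO("yolov8n.pt")
+
+ # Illustrative configuration using the optional arguments listed above
+ speed_obj = speed_estimation.SpeedEstimator()
+ speed_obj.set_args(
+     reg_pts=[(0, 360), (1280, 360)],  # speed-check line endpoints
+     names=model.model.names,          # class names from the loaded model
+     view_img=False,
+     line_thickness=2,
+     region_thickness=5,
+     spdl_dist_thresh=10,
+ )
+ ```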
99
+
100
+ ### Arguments `model.track`
101
+
102
+ | Name | Type | Default | Description |
103
+ |-----------|---------|----------------|-------------------------------------------------------------|
104
+ | `source` | `im0` | `None` | source directory for images or videos |
105
+ | `persist` | `bool` | `False` | persisting tracks between frames |
106
+ | `tracker` | `str` | `botsort.yaml` | Tracking method 'bytetrack' or 'botsort' |
107
+ | `conf` | `float` | `0.3` | Confidence Threshold |
108
+ | `iou` | `float` | `0.5` | IOU Threshold |
109
+ | `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
110
+ | `verbose` | `bool` | `True` | Display the object tracking results |
docs/en/guides/triton-inference-server.md ADDED
@@ -0,0 +1,137 @@
1
+ ---
2
+ comments: true
3
+ description: A step-by-step guide on integrating Ultralytics YOLOv8 with Triton Inference Server for scalable and high-performance deep learning inference deployments.
4
+ keywords: YOLOv8, Triton Inference Server, ONNX, Deep Learning Deployment, Scalable Inference, Ultralytics, NVIDIA, Object Detection, Cloud Inference
5
+ ---
6
+
7
+ # Triton Inference Server with Ultralytics YOLOv8
8
+
9
+ The [Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) (formerly known as TensorRT Inference Server) is an open-source software solution developed by NVIDIA. It provides a cloud inference solution optimized for NVIDIA GPUs. Triton simplifies the deployment of AI models at scale in production. Integrating Ultralytics YOLOv8 with Triton Inference Server allows you to deploy scalable, high-performance deep learning inference workloads. This guide provides steps to set up and test the integration.
10
+
11
+ <p align="center">
12
+ <br>
13
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/NQDtfSi5QF4"
14
+ title="Getting Started with NVIDIA Triton Inference Server" frameborder="0"
15
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
16
+ allowfullscreen>
17
+ </iframe>
18
+ <br>
19
+ <strong>Watch:</strong> Getting Started with NVIDIA Triton Inference Server.
20
+ </p>
21
+
22
+ ## What is Triton Inference Server?
23
+
24
+ Triton Inference Server is designed to deploy a variety of AI models in production. It supports a wide range of deep learning and machine learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, and many others. Its primary use cases are:
25
+
26
+ - Serving multiple models from a single server instance.
27
+ - Dynamic model loading and unloading without server restart.
28
+ - Ensemble inference, allowing multiple models to be used together to achieve results.
29
+ - Model versioning for A/B testing and rolling updates.
30
+
31
+ ## Prerequisites
32
+
33
+ Ensure you have the following prerequisites before proceeding:
34
+
35
+ - Docker installed on your machine.
36
+ - Install `tritonclient`:
37
+ ```bash
38
+ pip install tritonclient[all]
39
+ ```
40
+
41
+ ## Exporting YOLOv8 to ONNX Format
42
+
43
+ Before deploying the model on Triton, it must be exported to the ONNX format. ONNX (Open Neural Network Exchange) is a format that allows models to be transferred between different deep learning frameworks. Use the `export` function from the `YOLO` class:
44
+
45
+ ```python
46
+ from ultralytics import YOLO
47
+
48
+ # Load a model
49
+ model = YOLO('yolov8n.pt') # load an official model
50
+
51
+ # Export the model
52
+ onnx_file = model.export(format='onnx', dynamic=True)
53
+ ```
54
+
55
+ ## Setting Up Triton Model Repository
56
+
57
+ The Triton Model Repository is a storage location where Triton can access and load models.
58
+
59
+ 1. Create the necessary directory structure:
60
+
61
+ ```python
62
+ from pathlib import Path
63
+
64
+ # Define paths
65
+ triton_repo_path = Path('tmp') / 'triton_repo'
66
+ triton_model_path = triton_repo_path / 'yolo'
67
+
68
+ # Create directories
69
+ (triton_model_path / '1').mkdir(parents=True, exist_ok=True)
70
+ ```
71
+
72
+ 2. Move the exported ONNX model to the Triton repository:
73
+
74
+ ```python
75
+ from pathlib import Path
76
+
77
+ # Move ONNX model to Triton Model path
78
+ Path(onnx_file).rename(triton_model_path / '1' / 'model.onnx')
79
+
80
+ # Create config file
81
+ (triton_model_path / 'config.pbtxt').touch()
82
+ ```
83
+
84
+ ## Running Triton Inference Server
85
+
86
+ Run the Triton Inference Server using Docker:
87
+
88
+ ```python
89
+ import contextlib
+ import subprocess
90
+ import time
91
+
92
+ from tritonclient.http import InferenceServerClient
93
+
94
+ # Define image https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
95
+ tag = 'nvcr.io/nvidia/tritonserver:23.09-py3' # 6.4 GB
96
+
97
+ # Pull the image
98
+ subprocess.call(f'docker pull {tag}', shell=True)
99
+
100
+ # Run the Triton server and capture the container ID
101
+ container_id = subprocess.check_output(
102
+ f'docker run -d --rm -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models',
103
+ shell=True).decode('utf-8').strip()
104
+
105
+ # Wait for the Triton server to start
106
+ triton_client = InferenceServerClient(url='localhost:8000', verbose=False, ssl=False)
107
+
108
+ # Wait until the model is ready (model_name matches the model's directory in the repository)
+ model_name = 'yolo'
109
+ for _ in range(10):
110
+     with contextlib.suppress(Exception):
111
+         assert triton_client.is_model_ready(model_name)
112
+         break
113
+     time.sleep(1)
114
+ ```
115
+
116
+ Then run inference using the Triton Server model:
117
+
118
+ ```python
119
+ from ultralytics import YOLO
120
+
121
+ # Load the Triton Server model
122
+ model = YOLO('http://localhost:8000/yolo', task='detect')
123
+
124
+ # Run inference on the server
125
+ results = model('path/to/image.jpg')
126
+ ```
127
+
128
+ Cleanup the container:
129
+
130
+ ```python
131
+ # Kill and remove the container at the end of the test
132
+ subprocess.call(f'docker kill {container_id}', shell=True)
133
+ ```
134
+
135
+ ---
136
+
137
+ By following the above steps, you can deploy and run Ultralytics YOLOv8 models efficiently on Triton Inference Server, providing a scalable and high-performance solution for deep learning inference tasks. If you face any issues or have further queries, refer to the [official Triton documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html) or reach out to the Ultralytics community for support.
docs/en/guides/view-results-in-terminal.md ADDED
@@ -0,0 +1,146 @@
1
+ ---
2
+ comments: true
3
+ description: Learn how to view image results inside a compatible VSCode terminal.
4
+ keywords: YOLOv8, VSCode, Terminal, Remote Development, Ultralytics, SSH, Object Detection, Inference, Results, Remote Tunnel, Images, Helpful, Productivity Hack
5
+ ---
6
+
7
+ # Viewing Inference Results in a Terminal
8
+
9
+ <p align="center">
10
+ <img width="800" src="https://raw.githubusercontent.com/saitoha/libsixel/data/data/sixel.gif" alt="Sixel example of image in Terminal">
11
+ </p>
12
+
13
+ Image from the [libsixel](https://saitoha.github.io/libsixel/) website.
14
+
15
+ ## Motivation
16
+
17
+ When connecting to a remote machine, normally visualizing image results is not possible or requires moving data to a local device with a GUI. The VSCode integrated terminal allows for directly rendering images. This is a short demonstration on how to use this in conjunction with `ultralytics` with [prediction results](../modes/predict.md).
18
+
19
+ !!! warning
20
+
21
+ Only compatible with Linux and macOS. Check the VSCode [repository](https://github.com/microsoft/vscode), [Issue status](https://github.com/microsoft/vscode/issues/198622), or [documentation](https://code.visualstudio.com/docs) for updates about Windows support for viewing images in the terminal with `sixel`.
22
+
23
+ The VSCode compatible protocols for viewing images using the integrated terminal are [`sixel`](https://en.wikipedia.org/wiki/Sixel) and [`iTerm`](https://iterm2.com/documentation-images.html). This guide will demonstrate use of the `sixel` protocol.
24
+
25
+ ## Process
26
+
27
+ 1. First, you must enable settings `terminal.integrated.enableImages` and `terminal.integrated.gpuAcceleration` in VSCode.
28
+
29
+ ```yaml
30
+ "terminal.integrated.gpuAcceleration": "auto" # "auto" is default, can also use "on"
31
+ "terminal.integrated.enableImages": false
32
+ ```
33
+
34
+ <p align="center">
35
+ <img width="800" src="https://github.com/ultralytics/ultralytics/assets/62214284/d158ab1c-893c-4397-a5de-2f9f74f81175" alt="VSCode enable terminal images setting">
36
+ </p>
37
+
38
+ 1. Install the `python-sixel` library in your virtual environment. This is a [fork](https://github.com/lubosz/python-sixel?tab=readme-ov-file) of the `PySixel` library, which is no longer maintained.
39
+
40
+ ```bash
41
+ pip install sixel
42
+ ```
43
+
44
+ 1. Import the relevant libraries
45
+
46
+ ```py
47
+ import io
48
+
49
+ import cv2 as cv
50
+
51
+ from ultralytics import YOLO
52
+ from sixel import SixelWriter
53
+ ```
54
+
55
+ 1. Load a model and execute inference, then plot the results and store in a variable. See more about inference arguments and working with results on the [predict mode](../modes/predict.md) page.
56
+
57
+ ```{ .py .annotate }
58
+ from ultralytics import YOLO
59
+
60
+ # Load a model
61
+ model = YOLO("yolov8n.pt")
62
+
63
+ # Run inference on an image
64
+ results = model.predict(source="ultralytics/assets/bus.jpg")
65
+
66
+ # Plot inference results
67
+ plot = results[0].plot() #(1)!
68
+ ```
69
+
70
+ 1. See [plot method parameters](../modes/predict.md#plot-method-parameters) to see possible arguments to use.
71
+
72
+ 1. Now, use OpenCV to convert the `numpy.ndarray` to `bytes` data. Then use `io.BytesIO` to make a "file-like" object.
73
+
74
+ ```{ .py .annotate }
75
+ # Results image as bytes
76
+ im_bytes = cv.imencode(
77
+ ".png", #(1)!
78
+ plot,
79
+ )[1].tobytes() #(2)!
80
+
81
+ # Image bytes as a file-like object
82
+ mem_file = io.BytesIO(im_bytes)
83
+ ```
84
+
85
+ 1. It's possible to use other image extensions as well.
86
+ 2. Only the object at index `1` that is returned is needed.
87
+
88
+ 1. Create a `SixelWriter` instance, and then use the `.draw()` method to draw the image in the terminal.
89
+
90
+ ```py
91
+ # Create sixel writer object
92
+ w = SixelWriter()
93
+
94
+ # Draw the sixel image in the terminal
95
+ w.draw(mem_file)
96
+ ```
97
+
98
+ ## Example Inference Results
99
+
100
+ <p align="center">
101
+ <img width="800" src="https://github.com/ultralytics/ultralytics/assets/62214284/6743ab64-300d-4429-bdce-e246455f7b68" alt="View Image in Terminal">
102
+ </p>
103
+
104
+ !!! danger
105
+
106
+ Using this example with videos or animated GIF frames has **not** been tested. Attempt at your own risk.
107
+
108
+ ## Full Code Example
109
+
110
+ ```{ .py .annotate }
111
+ import io
112
+
113
+ import cv2 as cv
114
+
115
+ from ultralytics import YOLO
116
+ from sixel import SixelWriter
117
+
118
+ # Load a model
119
+ model = YOLO("yolov8n.pt")
120
+
121
+ # Run inference on an image
122
+ results = model.predict(source="ultralytics/assets/bus.jpg")
123
+
124
+ # Plot inference results
125
+ plot = results[0].plot() #(3)!
126
+
127
+ # Results image as bytes
128
+ im_bytes = cv.imencode(
129
+ ".png", #(1)!
130
+ plot,
131
+ )[1].tobytes() #(2)!
132
+
133
+ mem_file = io.BytesIO(im_bytes)
134
+ w = SixelWriter()
135
+ w.draw(mem_file)
136
+ ```
137
+
138
+ 1. It's possible to use other image extensions as well.
139
+ 2. Only the object at index `1` that is returned is needed.
140
+ 3. See [plot method parameters](../modes/predict.md#plot-method-parameters) to see possible arguments to use.
141
+
142
+ ---
143
+
144
+ !!! tip
145
+
146
+ You may need to use `clear` to "erase" the view of the image in the terminal.
docs/en/guides/vision-eye.md ADDED
@@ -0,0 +1,177 @@
1
+ ---
2
+ comments: true
3
+ description: VisionEye View Object Mapping using Ultralytics YOLOv8
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Object Tracking, IDetection, VisionEye, Computer Vision, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # VisionEye View Object Mapping using Ultralytics YOLOv8 🚀
8
+
9
+ ## What is VisionEye Object Mapping?
10
+
11
+ [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) VisionEye offers the capability for computers to identify and pinpoint objects, simulating the observational precision of the human eye. This functionality enables computers to discern and focus on specific objects, much like the way the human eye observes details from a particular viewpoint.
12
+
13
+ ## Samples
14
+
15
+ | VisionEye View | VisionEye View With Object Tracking | VisionEye View With Distance Calculation |
16
+ |:------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
17
+ | ![VisionEye View Object Mapping using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/7d593acc-2e37-41b0-ad0e-92b4ffae6647) | ![VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8](https://github.com/RizwanMunawar/ultralytics/assets/62513924/fcd85952-390f-451e-8fb0-b82e943af89c) | ![VisionEye View with Distance Calculation using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/18c4dafe-a22e-4fa9-a7d4-2bb293562a95) |
18
+ | VisionEye View Object Mapping using Ultralytics YOLOv8 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8 | VisionEye View with Distance Calculation using Ultralytics YOLOv8 |
19
+
20
+ !!! Example "VisionEye Object Mapping using YOLOv8"
21
+
22
+ === "VisionEye Object Mapping"
23
+
24
+ ```python
25
+ import cv2
26
+ from ultralytics import YOLO
27
+ from ultralytics.utils.plotting import colors, Annotator
28
+
29
+ model = YOLO("yolov8n.pt")
30
+ names = model.model.names
31
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
32
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
33
+
34
+ out = cv2.VideoWriter('visioneye-pinpoint.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
35
+
36
+ center_point = (-10, h)
37
+
38
+ while True:
39
+     ret, im0 = cap.read()
40
+     if not ret:
41
+         print("Video frame is empty or video processing has been successfully completed.")
42
+         break
43
+
44
+     results = model.predict(im0)
45
+     boxes = results[0].boxes.xyxy.cpu()
46
+     clss = results[0].boxes.cls.cpu().tolist()
47
+
48
+     annotator = Annotator(im0, line_width=2)
49
+
50
+     for box, cls in zip(boxes, clss):
51
+         annotator.box_label(box, label=names[int(cls)], color=colors(int(cls)))
52
+         annotator.visioneye(box, center_point)
53
+
54
+     out.write(im0)
55
+     cv2.imshow("visioneye-pinpoint", im0)
56
+
57
+     if cv2.waitKey(1) & 0xFF == ord('q'):
58
+         break
59
+
60
+ out.release()
61
+ cap.release()
62
+ cv2.destroyAllWindows()
63
+ ```
64
+
65
+ === "VisionEye Object Mapping with Object Tracking"
66
+
67
+ ```python
68
+ import cv2
69
+ from ultralytics import YOLO
70
+ from ultralytics.utils.plotting import colors, Annotator
71
+
72
+ model = YOLO("yolov8n.pt")
73
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
74
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
75
+
76
+ out = cv2.VideoWriter('visioneye-pinpoint.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
77
+
78
+ center_point = (-10, h)
79
+
80
+ while True:
81
+     ret, im0 = cap.read()
82
+     if not ret:
83
+         print("Video frame is empty or video processing has been successfully completed.")
84
+         break
85
+
86
+     annotator = Annotator(im0, line_width=2)
87
+
88
+     results = model.track(im0, persist=True)
89
+     boxes = results[0].boxes.xyxy.cpu()
90
+
91
+     if results[0].boxes.id is not None:
92
+         track_ids = results[0].boxes.id.int().cpu().tolist()
93
+
94
+         for box, track_id in zip(boxes, track_ids):
95
+             annotator.box_label(box, label=str(track_id), color=colors(int(track_id)))
96
+             annotator.visioneye(box, center_point)
97
+
98
+     out.write(im0)
99
+     cv2.imshow("visioneye-pinpoint", im0)
100
+
101
+     if cv2.waitKey(1) & 0xFF == ord('q'):
102
+         break
103
+
104
+ out.release()
105
+ cap.release()
106
+ cv2.destroyAllWindows()
107
+ ```
108
+
109
+ === "VisionEye with Distance Calculation"
110
+
111
+ ```python
112
+ import cv2
113
+ import math
114
+ from ultralytics import YOLO
115
+ from ultralytics.utils.plotting import Annotator, colors
116
+
117
+ model = YOLO("yolov8s.pt")
118
+ cap = cv2.VideoCapture("Path/to/video/file.mp4")
119
+
120
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
121
+
122
+ out = cv2.VideoWriter('visioneye-distance-calculation.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
123
+
124
+ center_point = (0, h)
125
+ pixel_per_meter = 10
126
+
127
+ txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255))
128
+
129
+ while True:
130
+     ret, im0 = cap.read()
131
+     if not ret:
132
+         print("Video frame is empty or video processing has been successfully completed.")
133
+         break
134
+
135
+     annotator = Annotator(im0, line_width=2)
136
+
137
+     results = model.track(im0, persist=True)
138
+     boxes = results[0].boxes.xyxy.cpu()
139
+
140
+     if results[0].boxes.id is not None:
141
+         track_ids = results[0].boxes.id.int().cpu().tolist()
142
+
143
+         for box, track_id in zip(boxes, track_ids):
144
+             annotator.box_label(box, label=str(track_id), color=bbox_clr)
145
+             annotator.visioneye(box, center_point)
146
+
147
+             x1, y1 = int((box[0] + box[2]) // 2), int((box[1] + box[3]) // 2)  # Bounding box centroid
148
+
149
+             distance = math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2) / pixel_per_meter
150
+
151
+             text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 1.2, 3)
152
+             cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1)
153
+             cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3)
154
+
155
+     out.write(im0)
156
+     cv2.imshow("visioneye-distance-calculation", im0)
157
+
158
+     if cv2.waitKey(1) & 0xFF == ord('q'):
159
+         break
160
+
161
+ out.release()
162
+ cap.release()
163
+ cv2.destroyAllWindows()
164
+ ```
165
+
166
+ ### `visioneye` Arguments
167
+
168
+ | Name | Type | Default | Description |
169
+ |---------------|---------|------------------|--------------------------------------------------|
170
+ | `color` | `tuple` | `(235, 219, 11)` | Line and object centroid color |
171
+ | `pin_color` | `tuple` | `(255, 0, 255)` | VisionEye pinpoint color |
172
+ | `thickness` | `int` | `2` | pinpoint to object line thickness |
173
+ | `pins_radius` | `int` | `10` | Pinpoint and object centroid point circle radius |
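+
+ As a minimal sketch, the defaults above can be overridden when calling `visioneye`; the frame and box used here are hypothetical placeholders:
+
+ ```python
+ import numpy as np
+ from ultralytics.utils.plotting import Annotator
+
+ # Draw a VisionEye pinpoint with custom styling on a blank frame
+ frame = np.zeros((720, 1280, 3), dtype=np.uint8)
+ annotator = Annotator(frame, line_width=2)
+
+ box = [100, 100, 300, 400]  # hypothetical xyxy bounding box
+ annotator.visioneye(box, (0, 720), color=(235, 219, 11), pin_color=(255, 0, 255))
+ ```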
174
+
175
+ ## Note
176
+
177
+ For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below.
docs/en/guides/workouts-monitoring.md ADDED
@@ -0,0 +1,148 @@
1
+ ---
2
+ comments: true
3
+ description: Workouts Monitoring Using Ultralytics YOLOv8
4
+ keywords: Ultralytics, YOLOv8, Object Detection, Pose Estimation, PushUps, PullUps, Ab workouts, Notebook, IPython Kernel, CLI, Python SDK
5
+ ---
6
+
7
+ # Workouts Monitoring using Ultralytics YOLOv8 🚀
8
+
9
+ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) enhances exercise assessment by accurately tracking key body landmarks and joints in real-time. This technology provides instant feedback on exercise form, tracks workout routines, and measures performance metrics, optimizing training sessions for users and trainers alike.
10
+
11
+ ## Advantages of Workouts Monitoring
12
+
13
+ - **Optimized Performance:** Tailoring workouts based on monitoring data for better results.
14
+ - **Goal Achievement:** Track and adjust fitness goals for measurable progress.
15
+ - **Personalization:** Customized workout plans based on individual data for effectiveness.
16
+ - **Health Awareness:** Early detection of patterns indicating health issues or over-training.
17
+ - **Informed Decisions:** Data-driven decisions for adjusting routines and setting realistic goals.
18
+
19
+ ## Real World Applications
20
+
21
+ | Workouts Monitoring | Workouts Monitoring |
22
+ |:----------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------:|
23
+ | ![PushUps Counting](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cf016a41-589f-420f-8a8c-2cc8174a16de) | ![PullUps Counting](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cb20f316-fac2-4330-8445-dcf5ffebe329) |
24
+ | PushUps Counting | PullUps Counting |
25
+
26
+ !!! Example "Workouts Monitoring Example"
27
+
28
+ === "Workouts Monitoring"
29
+
30
+ ```python
31
+ from ultralytics import YOLO
32
+ from ultralytics.solutions import ai_gym
33
+ import cv2
34
+
35
+ model = YOLO("yolov8n-pose.pt")
36
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
37
+ assert cap.isOpened(), "Error reading video file"
38
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
39
+
40
+ gym_object = ai_gym.AIGym() # init AI GYM module
41
+ gym_object.set_args(line_thickness=2,
42
+ view_img=True,
43
+ pose_type="pushup",
44
+ kpts_to_check=[6, 8, 10])
45
+
46
+ frame_count = 0
47
+ while cap.isOpened():
48
+     success, im0 = cap.read()
49
+     if not success:
50
+         print("Video frame is empty or video processing has been successfully completed.")
51
+         break
52
+     frame_count += 1
53
+     results = model.track(im0, verbose=False)  # Tracking recommended
54
+     # results = model.predict(im0)  # Prediction also supported
55
+     im0 = gym_object.start_counting(im0, results, frame_count)
56
+
57
+ cv2.destroyAllWindows()
58
+ ```
59
+
60
+ === "Workouts Monitoring with Save Output"
61
+
62
+ ```python
63
+ from ultralytics import YOLO
64
+ from ultralytics.solutions import ai_gym
65
+ import cv2
66
+
67
+ model = YOLO("yolov8n-pose.pt")
68
+ cap = cv2.VideoCapture("path/to/video/file.mp4")
69
+ assert cap.isOpened(), "Error reading video file"
70
+ w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
71
+
72
+ video_writer = cv2.VideoWriter("workouts.avi",
73
+ cv2.VideoWriter_fourcc(*'mp4v'),
74
+ fps,
75
+ (w, h))
76
+
77
+ gym_object = ai_gym.AIGym() # init AI GYM module
78
+ gym_object.set_args(line_thickness=2,
79
+ view_img=True,
80
+ pose_type="pushup",
81
+ kpts_to_check=[6, 8, 10])
82
+
83
+ frame_count = 0
84
+ while cap.isOpened():
85
+     success, im0 = cap.read()
86
+     if not success:
87
+         print("Video frame is empty or video processing has been successfully completed.")
88
+         break
89
+     frame_count += 1
90
+     results = model.track(im0, verbose=False)  # Tracking recommended
91
+     # results = model.predict(im0)  # Prediction also supported
92
+     im0 = gym_object.start_counting(im0, results, frame_count)
93
+     video_writer.write(im0)
94
+
95
+ cv2.destroyAllWindows()
96
+ video_writer.release()
97
+ ```
98
+
99
+ ???+ tip "Support"
100
+
101
+ "pushup", "pullup" and "abworkout" supported
102
+
103
+ ### KeyPoints Map
104
+
105
+ ![keyPoints Order Ultralytics YOLOv8 Pose](https://github.com/ultralytics/ultralytics/assets/62513924/f45d8315-b59f-47b7-b9c8-c61af1ce865b)
106
+
107
+ ### Arguments `set_args`
108
+
109
+ | Name | Type | Default | Description |
110
+ |-------------------|--------|----------|----------------------------------------------------------------------------------------|
111
+ | `kpts_to_check` | `list` | `None` | List of three keypoint indexes used to count the specific workout, following the keypoint map |
112
+ | `view_img` | `bool` | `False` | Display the frame with counts |
113
+ | `line_thickness` | `int` | `2` | Thickness of the displayed count value |
114
+ | `pose_type` | `str` | `pushup` | Pose to be monitored; `pullup` and `abworkout` are also supported |
115
+ | `pose_up_angle` | `int` | `145` | Pose Up Angle value |
116
+ | `pose_down_angle` | `int` | `90` | Pose Down Angle value |
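+
+ A minimal sketch combining these arguments, for example to monitor pull-ups instead of push-ups; the keypoint indexes are a hypothetical choice based on the keypoint map above:
+
+ ```python
+ from ultralytics.solutions import ai_gym
+
+ gym_object = ai_gym.AIGym()
+ gym_object.set_args(
+     kpts_to_check=[6, 8, 10],  # keypoints to monitor (hypothetical choice)
+     view_img=True,
+     line_thickness=2,
+     pose_type="pullup",
+     pose_up_angle=145,
+     pose_down_angle=90,
+ )
+ ```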
117
+
118
+ ### Arguments `model.predict`
119
+
120
+ | Name | Type | Default | Description |
121
+ |-----------------|----------------|------------------------|----------------------------------------------------------------------------|
122
+ | `source` | `str` | `'ultralytics/assets'` | source directory for images or videos |
123
+ | `conf` | `float` | `0.25` | object confidence threshold for detection |
124
+ | `iou` | `float` | `0.7` | intersection over union (IoU) threshold for NMS |
125
+ | `imgsz` | `int or tuple` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
126
+ | `half` | `bool` | `False` | use half precision (FP16) |
127
+ | `device` | `None or str` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
128
+ | `max_det` | `int` | `300` | maximum number of detections per image |
129
+ | `vid_stride` | `bool` | `False` | video frame-rate stride |
130
+ | `stream_buffer` | `bool` | `False` | buffer all streaming frames (True) or return the most recent frame (False) |
131
+ | `visualize` | `bool` | `False` | visualize model features |
132
+ | `augment` | `bool` | `False` | apply image augmentation to prediction sources |
133
+ | `agnostic_nms` | `bool` | `False` | class-agnostic NMS |
134
+ | `classes` | `list[int]` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
135
+ | `retina_masks` | `bool` | `False` | use high-resolution segmentation masks |
136
+ | `embed` | `list[int]` | `None` | return feature vectors/embeddings from given layers |
137
+
138
+ ### Arguments `model.track`
139
+
140
+ | Name | Type | Default | Description |
141
+ |-----------|---------|----------------|-------------------------------------------------------------|
142
+ | `source` | `im0` | `None` | source directory for images or videos |
143
+ | `persist` | `bool` | `False` | persisting tracks between frames |
144
+ | `tracker` | `str` | `botsort.yaml` | Tracking method 'bytetrack' or 'botsort' |
145
+ | `conf` | `float` | `0.3` | Confidence Threshold |
146
+ | `iou` | `float` | `0.5` | IOU Threshold |
147
+ | `classes` | `list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
148
+ | `verbose` | `bool` | `True` | Display the object tracking results |
docs/en/guides/yolo-common-issues.md ADDED
@@ -0,0 +1,276 @@
1
+ ---
2
+ comments: true
3
+ description: A comprehensive guide to troubleshooting common issues encountered while working with YOLOv8 in the Ultralytics ecosystem.
4
+ keywords: Troubleshooting, Ultralytics, YOLOv8, Installation Errors, Training Data, Model Performance, Hyperparameter Tuning, Deployment
5
+ ---
6
+
7
+ # Troubleshooting Common YOLO Issues
8
+
9
+ <p align="center">
10
+ <img width="800" src="https://user-images.githubusercontent.com/26833433/273067258-7c1b9aee-b4e8-43b5-befd-588d4f0bd361.png" alt="YOLO Common Issues Image">
11
+ </p>
12
+
13
+ ## Introduction
14
+
15
+ This guide serves as a comprehensive aid for troubleshooting common issues encountered while working with YOLOv8 on your Ultralytics projects. Navigating through these issues can be a breeze with the right guidance, ensuring your projects remain on track without unnecessary delays.
16
+
17
+ ## Common Issues
18
+
19
+ ### Installation Errors
20
+
21
+ Installation errors can arise due to various reasons, such as incompatible versions, missing dependencies, or incorrect environment setups. First, check to make sure you are doing the following:
22
+
23
+ - You're using Python 3.8 or later as recommended.
24
+
25
+ - Ensure that you have the correct version of PyTorch (1.8 or later) installed.
26
+
27
+ - Consider using virtual environments to avoid conflicts.
28
+
29
+ - Follow the [official installation guide](../quickstart.md) step by step.
30
+
31
+ Additionally, here are some common installation issues users have encountered, along with their respective solutions:
32
+
33
+ - Import Errors or Dependency Issues - If you're getting errors during the import of YOLOv8, or you're having issues related to dependencies, consider the following troubleshooting steps:
34
+
35
+ - **Fresh Installation**: Sometimes, starting with a fresh installation can resolve unexpected issues. Especially with libraries like Ultralytics, where updates might introduce changes to the file tree structure or functionalities.
36
+
37
+ - **Update Regularly**: Ensure you're using the latest version of the library. Older versions might not be compatible with recent updates, leading to potential conflicts or issues.
38
+
39
+ - **Check Dependencies**: Verify that all required dependencies are correctly installed and are of the compatible versions.
40
+
41
+ - **Review Changes**: If you initially cloned or installed an older version, be aware that significant updates might affect the library's structure or functionalities. Always refer to the official documentation or changelogs to understand any major changes.
42
+
43
+ - Remember, keeping your libraries and dependencies up-to-date is crucial for a smooth and error-free experience.
44
+
45
+ - Running YOLOv8 on GPU - If you're having trouble running YOLOv8 on GPU, consider the following troubleshooting steps:
46
+
47
+ - **Verify CUDA Compatibility and Installation**: Ensure your GPU is CUDA compatible and that CUDA is correctly installed. Use the `nvidia-smi` command to check the status of your NVIDIA GPU and CUDA version.
48
+
49
+ - **Check PyTorch and CUDA Integration**: Ensure PyTorch can utilize CUDA by running `import torch; print(torch.cuda.is_available())` in a Python terminal. If it returns 'True', PyTorch is set up to use CUDA (see the quick check snippet after this list).
50
+
51
+ - **Environment Activation**: Ensure you're in the correct environment where all necessary packages are installed.
52
+
53
+ - **Update Your Packages**: Outdated packages might not be compatible with your GPU. Keep them updated.
54
+
55
+ - **Program Configuration**: Check if the program or code specifies GPU usage. In YOLOv8, this might be in the settings or configuration.
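+
+ As a quick sanity check before digging deeper, the snippet below (plain PyTorch calls) confirms whether PyTorch can see your GPU, as mentioned in the list above:
+
+ ```python
+ import torch
+
+ # Verify that PyTorch can access CUDA before training or inference
+ print(torch.cuda.is_available())  # True if a CUDA device is visible
+ print(torch.cuda.device_count())  # number of visible GPUs
+ if torch.cuda.is_available():
+     print(torch.cuda.get_device_name(0))  # name of the first GPU
+ ```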
56
+
57
+ ### Model Training Issues
58
+
59
+ This section will address common issues faced while training and their respective explanations and solutions.
60
+
61
+ #### Verification of Configuration Settings
62
+
63
+ **Issue**: You are unsure whether the configuration settings in the `.yaml` file are being applied correctly during model training.
64
+
65
+ **Solution**: The configuration settings in the `.yaml` file should be applied when using the `model.train()` function. To ensure that these settings are correctly applied, follow these steps:
66
+
67
+ - Confirm that the path to your `.yaml` configuration file is correct.
68
+ - Make sure you pass the path to your `.yaml` file as the `data` argument when calling `model.train()`, as shown below:
69
+
70
+ ```python
71
+ model.train(data='/path/to/your/data.yaml', batch=4)
72
+ ```
73
+
74
+ #### Accelerating Training with Multiple GPUs
75
+
76
+ **Issue**: Training is slow on a single GPU, and you want to speed up the process using multiple GPUs.
77
+
78
+ **Solution**: Increasing the batch size can accelerate training, but it's essential to consider GPU memory capacity. To speed up training with multiple GPUs, follow these steps:
79
+
80
+ - Ensure that you have multiple GPUs available.
81
+
82
+ - Specify the GPUs to train on with the `device` argument, e.g., `device=[0, 1, 2, 3]`, as shown in the command below.
83
+
84
+ - Increase the batch size accordingly to fully utilize the multiple GPUs without exceeding memory limits.
85
+
86
+ - Modify your training command to utilize multiple GPUs:
87
+
88
+ ```python
89
+ # Adjust the batch size and other settings as needed to optimize training speed
90
+ model.train(data='/path/to/your/data.yaml', batch=32, device=[0, 1, 2, 3])
91
+ ```
92
+
93
+ #### Continuous Monitoring Parameters
94
+
95
+ **Issue**: You want to know which parameters should be continuously monitored during training, apart from loss.
96
+
97
+ **Solution**: While loss is a crucial metric to monitor, it's also essential to track other metrics for model performance optimization. Some key metrics to monitor during training include:
98
+
99
+ - Precision
100
+ - Recall
101
+ - Mean Average Precision (mAP)
102
+
103
+ You can access these metrics from the training logs or by using tools like TensorBoard or wandb for visualization. Implementing early stopping based on these metrics can help you achieve better results.
104
+
105
+ #### Tools for Tracking Training Progress
106
+
107
+ **Issue**: You are looking for recommendations on tools to track training progress.
108
+
109
+ **Solution**: To track and visualize training progress, you can consider using the following tools:
110
+
111
+ - [TensorBoard](https://www.tensorflow.org/tensorboard): TensorBoard is a popular choice for visualizing training metrics, including loss, accuracy, and more. You can integrate it with your YOLOv8 training process.
112
+ - [Comet](https://bit.ly/yolov8-readme-comet): Comet provides an extensive toolkit for experiment tracking and comparison. It allows you to track metrics, hyperparameters, and even model weights. Integration with YOLO models is also straightforward, providing you with a complete overview of your experiment cycle.
113
+ - [Ultralytics HUB](https://hub.ultralytics.com): Ultralytics HUB offers a specialized environment for tracking YOLO models, giving you a one-stop platform to manage metrics, datasets, and even collaborate with your team. Given its tailored focus on YOLO, it offers more customized tracking options.
114
+
115
+ Each of these tools offers its own set of advantages, so you may want to consider the specific needs of your project when making a choice.
116
+
117
+ #### How to Check if Training is Happening on the GPU
118
+
119
+ **Issue**: The 'device' value in the training logs is 'null,' and you're unsure if training is happening on the GPU.
120
+
121
+ **Solution**: The 'device' value being 'null' typically means that the training process is set to automatically use an available GPU, which is the default behavior. To ensure training occurs on a specific GPU, you can manually set the 'device' value to the GPU index (e.g., '0' for the first GPU) in your .yaml configuration file:
122
+
123
+ ```yaml
124
+ device: 0
125
+ ```
126
+
127
+ This will explicitly assign the training process to the specified GPU. If you wish to train on the CPU, set 'device' to 'cpu'.
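+
+ The same can be done directly from Python by passing the `device` argument to `model.train()`; the paths below are illustrative:
+
+ ```python
+ from ultralytics import YOLO
+
+ # Pin training to GPU 0; use device='cpu' to train on the CPU instead
+ model = YOLO('yolov8n.pt')
+ model.train(data='/path/to/your/data.yaml', epochs=100, device=0)
+ ```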
128
+
129
+ Keep an eye on the 'runs' folder for logs and metrics to monitor training progress effectively.
130
+
131
+ #### Key Considerations for Effective Model Training
132
+
133
+ Here are some things to keep in mind, if you are facing issues related to model training.
134
+
135
+ **Dataset Format and Labels**
136
+
137
+ - Importance: The foundation of any machine learning model lies in the quality and format of the data it is trained on.
138
+
139
+ - Recommendation: Ensure that your custom dataset and its associated labels adhere to the expected format. It's crucial to verify that annotations are accurate and of high quality. Incorrect or subpar annotations can derail the model's learning process, leading to unpredictable outcomes.
140
+
141
+ **Model Convergence**
142
+
143
+ - Importance: Achieving model convergence ensures that the model has sufficiently learned from the training data.
144
+
145
+ - Recommendation: When training a model 'from scratch', it's vital to ensure that the model reaches a satisfactory level of convergence. This might necessitate a longer training duration, with more epochs, compared to when you're fine-tuning an existing model.
146
+
147
+ **Learning Rate and Batch Size**
148
+
149
+ - Importance: These hyperparameters play a pivotal role in determining how the model updates its weights during training.
150
+
151
+ - Recommendation: Regularly evaluate if the chosen learning rate and batch size are optimal for your specific dataset. Parameters that are not in harmony with the dataset's characteristics can hinder the model's performance.
152
+
153
+ **Class Distribution**
154
+
155
+ - Importance: The distribution of classes in your dataset can influence the model's prediction tendencies.
156
+
157
+ - Recommendation: Regularly assess the distribution of classes within your dataset. If there's a class imbalance, there's a risk that the model will develop a bias towards the more prevalent class. This bias can be evident in the confusion matrix, where the model might predominantly predict the majority class.
158
+
159
+ **Cross-Check with Pretrained Weights**
160
+
161
+ - Importance: Leveraging pretrained weights can provide a solid starting point for model training, especially when data is limited.
162
+
163
+ - Recommendation: As a diagnostic step, consider training your model using the same data but initializing it with pretrained weights. If this approach yields a well-formed confusion matrix, it could suggest that the 'from scratch' model might require further training or adjustments.
164
+
165
+ ### Issues Related to Model Predictions
166
+
167
+ This section will address common issues faced during model prediction.
168
+
169
+ #### Getting Bounding Box Predictions With Your YOLOv8 Custom Model
170
+
171
+ **Issue**: When running predictions with a custom YOLOv8 model, there are challenges with the format and visualization of the bounding box coordinates.
172
+
173
+ **Solution**:
174
+
175
+ - Coordinate Format: YOLOv8 provides bounding box coordinates in absolute pixel values. To convert these to relative coordinates (ranging from 0 to 1), you need to divide by the image dimensions. For example, if your image size is 640x640, you would do the following (a fuller, self-contained sketch follows after this list):
176
+
177
+ ```python
178
+ # Convert absolute coordinates to relative coordinates
179
+ x1 = x1 / 640 # Divide x-coordinates by image width
180
+ x2 = x2 / 640
181
+ y1 = y1 / 640 # Divide y-coordinates by image height
182
+ y2 = y2 / 640
183
+ ```
184
+
185
+ - File Name: To obtain the file name of the image you're predicting on, access the image file path directly from the result object within your prediction loop.
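+
+ Here is a self-contained sketch covering both points above. Note that the result object also exposes normalized coordinates directly via `boxes.xyxyn`, which avoids the manual division:
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO('yolov8n.pt')
+ results = model.predict('https://ultralytics.com/images/bus.jpg')
+
+ for result in results:
+     print(result.path)  # file name / path of the source image
+     for box in result.boxes.xyxyn:  # bounding boxes normalized to the 0-1 range
+         x1, y1, x2, y2 = box.tolist()
+         print(f"Relative box: {x1:.3f}, {y1:.3f}, {x2:.3f}, {y2:.3f}")
+ ```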
186
+
187
+ #### Filtering Objects in YOLOv8 Predictions
188
+
189
+ **Issue**: Facing issues with how to filter and display only specific objects in the prediction results when running YOLOv8 using the Ultralytics library.
190
+
191
+ **Solution**: To detect specific classes use the classes argument to specify the classes you want to include in the output. For instance, to detect only cars (assuming 'cars' have class index 2):
192
+
193
+ ```shell
194
+ yolo task=segment mode=predict model=yolov8n-seg.pt source='path/to/car.mp4' show=True classes=2
195
+ ```
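+
+ The same filtering works from Python by passing the `classes` argument to `predict` (this sketch assumes the COCO class index 2 for 'car'):
+
+ ```python
+ from ultralytics import YOLO
+
+ # Keep only detections of class index 2 ('car' in COCO) in the results
+ model = YOLO('yolov8n.pt')
+ results = model.predict('path/to/car.mp4', classes=[2], show=True)
+ ```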
196
+
197
+ #### Understanding Precision Metrics in YOLOv8
198
+
199
+ **Issue**: Confusion regarding the difference between box precision, mask precision, and confusion matrix precision in YOLOv8.
200
+
201
+ **Solution**: Box precision measures the accuracy of predicted bounding boxes compared to the actual ground truth boxes using IoU (Intersection over Union) as the metric. Mask precision assesses the agreement between predicted segmentation masks and ground truth masks in pixel-wise object classification. Confusion matrix precision, on the other hand, focuses on overall classification accuracy across all classes and does not consider the geometric accuracy of predictions. It's important to note that a bounding box can be geometrically accurate (true positive) even if the class prediction is wrong, leading to differences between box precision and confusion matrix precision. These metrics evaluate distinct aspects of a model's performance, reflecting the need for different evaluation metrics in various tasks.
202
+
203
+ #### Extracting Object Dimensions in YOLOv8
204
+
205
+ **Issue**: Difficulty in retrieving the length and height of detected objects in YOLOv8, especially when multiple objects are detected in an image.
206
+
207
+ **Solution**: To retrieve the bounding box dimensions, first use the Ultralytics YOLOv8 model to predict objects in an image. Then, extract the width and height information of bounding boxes from the prediction results.
208
+
209
+ ```python
210
+ from ultralytics import YOLO
211
+
212
+ # Load a pre-trained YOLOv8 model
213
+ model = YOLO('yolov8n.pt')
214
+
215
+ # Specify the source image
216
+ source = 'https://ultralytics.com/images/bus.jpg'
217
+
218
+ # Make predictions
219
+ results = model.predict(source, save=True, imgsz=320, conf=0.5)
220
+
221
+ # Extract bounding box dimensions
222
+ boxes = results[0].boxes.xywh.cpu()
223
+ for box in boxes:
224
+ x, y, w, h = box
225
+ print(f"Width of Box: {w}, Height of Box: {h}")
226
+ ```
227
+
228
+ ### Deployment Challenges
229
+
230
+ #### GPU Deployment Issues
231
+
232
+ **Issue:** Deploying models in a multi-GPU environment can sometimes lead to unexpected behaviors like unexpected memory usage, inconsistent results across GPUs, etc.
233
+
234
+ **Solution:** Check for default GPU initialization. Some frameworks, like PyTorch, might initialize CUDA operations on a default GPU before transitioning to the designated GPUs. To bypass unexpected default initializations, specify the GPU directly during deployment and prediction. Then, use tools to monitor GPU utilization and memory usage to identify any anomalies in real-time. Also, ensure you're using the latest version of the framework or library.
235
+
236
+ #### Model Conversion/Exporting Issues
237
+
238
+ **Issue:** During the process of converting or exporting machine learning models to different formats or platforms, users might encounter errors or unexpected behaviors.
239
+
240
+ **Solution:**
241
+
242
+ - Compatibility Check: Ensure that you are using versions of libraries and frameworks that are compatible with each other. Mismatched versions can lead to unexpected errors during conversion.
243
+
244
+ - Environment Reset: If you're using an interactive environment like Jupyter or Colab, consider restarting your environment after making significant changes or installations. A fresh start can sometimes resolve underlying issues.
245
+
246
+ - Official Documentation: Always refer to the official documentation of the tool or library you are using for conversion. It often contains specific guidelines and best practices for model exporting.
247
+
248
+ - Community Support: Check the library or framework's official repository for similar issues reported by other users. The maintainers or community might have provided solutions or workarounds in discussion threads.
249
+
250
+ - Update Regularly: Ensure that you are using the latest version of the tool or library. Developers frequently release updates that fix known bugs or improve functionality.
251
+
252
+ - Test Incrementally: Before performing a full conversion, test the process with a smaller model or dataset to identify potential issues early on.
253
+
254
+ ## Community and Support
255
+
256
+ Engaging with a community of like-minded individuals can significantly enhance your experience and success in working with YOLOv8. Below are some channels and resources you may find helpful.
257
+
258
+ ### Forums and Channels for Getting Help
259
+
260
+ **GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it’s a great place to get help with specific problems.
261
+
262
+ **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://ultralytics.com/discord/) where you can interact with other users and the developers.
263
+
264
+ ### Official Documentation and Resources
265
+
266
+ **Ultralytics YOLOv8 Docs**: The [official documentation](../index.md) provides a comprehensive overview of YOLOv8, along with guides on installation, usage, and troubleshooting.
267
+
268
+ These resources should provide a solid foundation for troubleshooting and improving your YOLOv8 projects, as well as connecting with others in the YOLOv8 community.
269
+
270
+ ## Conclusion
271
+
272
+ Troubleshooting is an integral part of any development process, and being equipped with the right knowledge can significantly reduce the time and effort spent in resolving issues. This guide aimed to address the most common challenges faced by users of the YOLOv8 model within the Ultralytics ecosystem. By understanding and addressing these common issues, you can ensure smoother project progress and achieve better results with your computer vision tasks.
273
+
274
+ Remember, the Ultralytics community is a valuable resource. Engaging with fellow developers and experts can provide additional insights and solutions that might not be covered in standard documentation. Always keep learning, experimenting, and sharing your experiences to contribute to the collective knowledge of the community.
275
+
276
+ Happy troubleshooting!
docs/en/guides/yolo-performance-metrics.md ADDED
@@ -0,0 +1,176 @@
1
+ ---
2
+ comments: true
3
+ description: A comprehensive guide on various performance metrics related to YOLOv8, their significance, and how to interpret them.
4
+ keywords: YOLOv8, Performance metrics, Object detection, Intersection over Union (IoU), Average Precision (AP), Mean Average Precision (mAP), Precision, Recall, Validation mode, Ultralytics
5
+ ---
6
+
7
+ # Performance Metrics Deep Dive
8
+
9
+ ## Introduction
10
+
11
+ Performance metrics are key tools to evaluate the accuracy and efficiency of object detection models. They shed light on how effectively a model can identify and localize objects within images. Additionally, they help in understanding the model's handling of false positives and false negatives. These insights are crucial for evaluating and enhancing the model's performance. In this guide, we will explore various performance metrics associated with YOLOv8, their significance, and how to interpret them.
12
+
13
+ <p align="center">
14
+ <br>
15
+ <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/q7LwPoM7tSQ"
16
+ title="YouTube video player" frameborder="0"
17
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
18
+ allowfullscreen>
19
+ </iframe>
20
+ <br>
21
+ <strong>Watch:</strong> Ultralytics YOLOv8 Performance Metrics | MAP, F1 Score, Precision, IoU & Accuracy
22
+ </p>
23
+
24
+ ## Object Detection Metrics
25
+
26
+ Let’s start by discussing some metrics that are not only important to YOLOv8 but are broadly applicable across different object detection models.
27
+
28
+ - **Intersection over Union (IoU):** IoU is a measure that quantifies the overlap between a predicted bounding box and a ground truth bounding box. It plays a fundamental role in evaluating the accuracy of object localization.
29
+
30
+ - **Average Precision (AP):** AP computes the area under the precision-recall curve, providing a single value that encapsulates the model's precision and recall performance.
31
+
32
+ - **Mean Average Precision (mAP):** mAP extends the concept of AP by calculating the average AP values across multiple object classes. This is useful in multi-class object detection scenarios to provide a comprehensive evaluation of the model's performance.
33
+
34
+ - **Precision and Recall:** Precision quantifies the proportion of true positives among all positive predictions, assessing the model's capability to avoid false positives. On the other hand, Recall calculates the proportion of true positives among all actual positives, measuring the model's ability to detect all instances of a class.
35
+
36
+ - **F1 Score:** The F1 Score is the harmonic mean of precision and recall, providing a balanced assessment of a model's performance while considering both false positives and false negatives.
37
+
38
+ ## How to Calculate Metrics for YOLOv8 Model
39
+
40
+ Now, we can explore [YOLOv8's Validation mode](../modes/val.md) that can be used to compute the above discussed evaluation metrics.
41
+
42
+ Using the validation mode is simple. Once you have a trained model, you can invoke the `model.val()` function. This function will then process the validation dataset and return a variety of performance metrics. But what do these metrics mean? And how should you interpret them?
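+
+ A minimal sketch of this workflow is shown below; the dataset YAML is illustrative, and the returned metrics object exposes the aggregated values programmatically:
+
+ ```python
+ from ultralytics import YOLO
+
+ # Validate a trained model and inspect the returned metrics
+ model = YOLO('yolov8n.pt')
+ metrics = model.val(data='coco128.yaml')
+
+ print(metrics.box.map)    # mAP50-95
+ print(metrics.box.map50)  # mAP50
+ print(metrics.box.mp)     # mean precision
+ print(metrics.box.mr)     # mean recall
+ ```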
43
+
44
+ ### Interpreting the Output
45
+
46
+ Let's break down the output of the `model.val()` function and understand each segment of the output.
47
+
48
+ #### Class-wise Metrics
49
+
50
+ One of the sections of the output is the class-wise breakdown of performance metrics. This granular information is useful when you are trying to understand how well the model is doing for each specific class, especially in datasets with a diverse range of object categories. For each class in the dataset the following is provided:
51
+
52
+ - **Class**: This denotes the name of the object class, such as "person", "car", or "dog".
53
+
54
+ - **Images**: This metric tells you the number of images in the validation set that contain the object class.
55
+
56
+ - **Instances**: This provides the count of how many times the class appears across all images in the validation set.
57
+
58
+ - **Box(P, R, mAP50, mAP50-95)**: This metric provides insights into the model's performance in detecting objects:
59
+
60
+ - **P (Precision)**: The accuracy of the detected objects, indicating how many detections were correct.
61
+
62
+ - **R (Recall)**: The ability of the model to identify all instances of objects in the images.
63
+
64
+ - **mAP50**: Mean average precision calculated at an intersection over union (IoU) threshold of 0.50. It's a measure of the model's accuracy considering only the "easy" detections.
65
+
66
+ - **mAP50-95**: The average of the mean average precision calculated at varying IoU thresholds, ranging from 0.50 to 0.95. It gives a comprehensive view of the model's performance across different levels of detection difficulty.
67
+
68
+ #### Speed Metrics
69
+
70
+ The speed of inference can be as critical as accuracy, especially in real-time object detection scenarios. This section breaks down the time taken for various stages of the validation process, from preprocessing to post-processing.
71
+
72
+ #### COCO Metrics Evaluation
73
+
74
+ For users validating on the COCO dataset, additional metrics are calculated using the COCO evaluation script. These metrics give insights into precision and recall at different IoU thresholds and for objects of different sizes.
75
+
76
+ #### Visual Outputs
77
+
78
+ The `model.val()` function, apart from producing numeric metrics, also yields visual outputs that can provide a more intuitive understanding of the model's performance. Here's a breakdown of the visual outputs you can expect:
79
+
80
+ - **F1 Score Curve (`F1_curve.png`)**: This curve represents the F1 score across various thresholds. Interpreting this curve can offer insights into the model's balance between false positives and false negatives over different thresholds.
81
+
82
+ - **Precision-Recall Curve (`PR_curve.png`)**: An integral visualization for any classification problem, this curve showcases the trade-offs between precision and recall at varied thresholds. It becomes especially significant when dealing with imbalanced classes.
83
+
84
+ - **Precision Curve (`P_curve.png`)**: A graphical representation of precision values at different thresholds. This curve helps in understanding how precision varies as the threshold changes.
85
+
86
+ - **Recall Curve (`R_curve.png`)**: Correspondingly, this graph illustrates how the recall values change across different thresholds.
87
+
88
+ - **Confusion Matrix (`confusion_matrix.png`)**: The confusion matrix provides a detailed view of the outcomes, showcasing the counts of true positives, true negatives, false positives, and false negatives for each class.
89
+
90
+ - **Normalized Confusion Matrix (`confusion_matrix_normalized.png`)**: This visualization is a normalized version of the confusion matrix. It represents the data in proportions rather than raw counts. This format makes it simpler to compare the performance across classes.
91
+
92
+ - **Validation Batch Labels (`val_batchX_labels.jpg`)**: These images depict the ground truth labels for distinct batches from the validation dataset. They provide a clear picture of what the objects are and their respective locations as per the dataset.
93
+
94
+ - **Validation Batch Predictions (`val_batchX_pred.jpg`)**: In contrast to the label images, these visuals display the predictions made by the YOLOv8 model for the respective batches. By comparing these to the label images, you can easily assess how well the model detects and classifies objects visually.
95
+
96
+ #### Results Storage
97
+
98
+ For future reference, the results are saved to a directory, typically named `runs/detect/val`.
99
+
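+ If you need the exact location programmatically, the results object exposes it. The `save_dir` attribute is an assumption based on recent versions of the API:
+
+ ```python
+ print(metrics.save_dir)  # e.g. runs/detect/val, runs/detect/val2, ...
+ ```
+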
100
+ ## Choosing the Right Metrics
101
+
102
+ Choosing the right metrics to evaluate often depends on the specific application.
103
+
104
+ - **mAP:** Suitable for a broad assessment of model performance.
105
+
106
+ - **IoU:** Essential when precise object location is crucial.
107
+
108
+ - **Precision:** Important when minimizing false detections is a priority.
109
+
110
+ - **Recall:** Vital when it's important to detect every instance of an object.
111
+
112
+ - **F1 Score:** Useful when a balance between precision and recall is needed.
113
+
114
+ For real-time applications, speed metrics like FPS (Frames Per Second) and latency are crucial to ensure timely results.
115
+
116
+ ## Interpretation of Results
117
+
118
+ It’s important to understand the metrics. Here's what some of the commonly observed lower scores might suggest:
119
+
120
+ - **Low mAP:** Indicates the model may need general refinements.
121
+
122
+ - **Low IoU:** The model might be struggling to pinpoint objects accurately. Different bounding box methods could help.
123
+
124
+ - **Low Precision:** The model may be detecting too many non-existent objects. Adjusting confidence thresholds might reduce this (see the sketch after this list).
125
+
126
+ - **Low Recall:** The model could be missing real objects. Improving feature extraction or using more data might help.
127
+
128
+ - **Imbalanced F1 Score:** There's a disparity between precision and recall.
129
+
130
+ - **Class-specific AP:** Low scores here can highlight classes the model struggles with.
131
+
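+ As a small illustration of the confidence-threshold adjustment mentioned above (the image path and threshold values are placeholders; `conf` defaults to 0.25 for prediction in the current API):
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")
+ relaxed = model.predict("image.jpg")  # default confidence threshold
+ strict = model.predict("image.jpg", conf=0.5)  # stricter threshold, fewer false positives
+ print(len(relaxed[0].boxes), len(strict[0].boxes))
+ ```
+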
132
+ ## Case Studies
133
+
134
+ Real-world examples can help clarify how these metrics work in practice.
135
+
136
+ ### Case 1
137
+
138
+ - **Situation:** mAP and F1 Score are suboptimal; Recall is good, but Precision isn't.
139
+
140
+ - **Interpretation & Action:** There might be too many incorrect detections. Tightening confidence thresholds could reduce these, though it might also slightly decrease recall.
141
+
142
+ ### Case 2
143
+
144
+ - **Situation:** mAP and Recall are acceptable, but IoU is lacking.
145
+
146
+ - **Interpretation & Action:** The model detects objects well but might not be localizing them precisely. Refining bounding box predictions might help.
147
+
148
+ ### Case 3
149
+
150
+ - **Situation:** Some classes have a much lower AP than others, even with a decent overall mAP.
151
+
152
+ - **Interpretation & Action:** These classes might be more challenging for the model. Using more data for these classes or adjusting class weights during training could be beneficial.
153
+
154
+ ## Connect and Collaborate
155
+
156
+ Tapping into a community of enthusiasts and experts can amplify your journey with YOLOv8. Here are some avenues that can facilitate learning, troubleshooting, and networking.
157
+
158
+ ### Engage with the Broader Community
159
+
160
+ - **GitHub Issues:** The YOLOv8 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it’s a great place to get help with specific problems.
161
+
162
+ - **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://ultralytics.com/discord/) where you can interact with other users and the developers.
163
+
164
+ ### Official Documentation and Resources
165
+
166
+ - **Ultralytics YOLOv8 Docs:** The [official documentation](../index.md) provides a comprehensive overview of YOLOv8, along with guides on installation, usage, and troubleshooting.
167
+
168
+ Using these resources will not only guide you through any challenges but also keep you updated with the latest trends and best practices in the YOLOv8 community.
169
+
170
+ ## Conclusion
171
+
172
+ In this guide, we've taken a close look at the essential performance metrics for YOLOv8. These metrics are key to understanding how well a model is performing and are vital for anyone aiming to fine-tune their models. They offer the necessary insights for improvements and to make sure the model works effectively in real-life situations.
173
+
174
+ Remember, the YOLOv8 and Ultralytics community is an invaluable asset. Engaging with fellow developers and experts can open doors to insights and solutions not found in standard documentation. As you journey through object detection, keep the spirit of learning alive, experiment with new strategies, and share your findings. By doing so, you contribute to the community's collective wisdom and ensure its growth.
175
+
176
+ Happy object detecting!
docs/en/guides/yolo-thread-safe-inference.md ADDED
@@ -0,0 +1,108 @@
1
+ ---
2
+ comments: true
3
+ description: This guide provides best practices for performing thread-safe inference with YOLO models, ensuring reliable and concurrent predictions in multi-threaded applications.
4
+ keywords: thread-safe, YOLO inference, multi-threading, concurrent predictions, YOLO models, Ultralytics, Python threading, safe YOLO usage, AI concurrency
5
+ ---
6
+
7
+ # Thread-Safe Inference with YOLO Models
8
+
9
+ Running YOLO models in a multi-threaded environment requires careful consideration to ensure thread safety. Python's `threading` module allows you to run several threads concurrently, but when it comes to using YOLO models across these threads, there are important safety issues to be aware of. This page will guide you through creating thread-safe YOLO model inference.
10
+
11
+ ## Understanding Python Threading
12
+
13
+ Python threads are a form of parallelism that allows your program to run multiple operations at once. However, Python's Global Interpreter Lock (GIL) means that only one thread can execute Python bytecode at a time.
14
+
15
+ <p align="center">
16
+ <img width="800" src="https://user-images.githubusercontent.com/26833433/281418476-7f478570-fd77-4a40-bf3d-74b4db4d668c.png" alt="Single vs Multi-Thread Examples">
17
+ </p>
18
+
19
+ While this sounds like a limitation, threads can still provide concurrency, especially for I/O-bound operations or when using operations that release the GIL, like those performed by YOLO's underlying C libraries.
20
+
21
+ ## The Danger of Shared Model Instances
22
+
23
+ Instantiating a YOLO model outside your threads and sharing this instance across multiple threads can lead to race conditions, where the internal state of the model is inconsistently modified due to concurrent accesses. This is particularly problematic when the model or its components hold state that is not designed to be thread-safe.
24
+
25
+ ### Non-Thread-Safe Example: Single Model Instance
26
+
27
+ When using threads in Python, it's important to recognize patterns that can lead to concurrency issues. Here is what you should avoid: sharing a single YOLO model instance across multiple threads.
28
+
29
+ ```python
30
+ # Unsafe: Sharing a single model instance across threads
31
+ from ultralytics import YOLO
32
+ from threading import Thread
33
+
34
+ # Instantiate the model outside the thread
35
+ shared_model = YOLO("yolov8n.pt")
36
+
37
+
38
+ def predict(image_path):
39
+     results = shared_model.predict(image_path)
40
+     # Process results
41
+
42
+
43
+ # Starting threads that share the same model instance
44
+ Thread(target=predict, args=("image1.jpg",)).start()
45
+ Thread(target=predict, args=("image2.jpg",)).start()
46
+ ```
47
+
48
+ In the example above, the `shared_model` is used by multiple threads, which can lead to unpredictable results because `predict` could be executed simultaneously by multiple threads.
49
+
50
+ ### Non-Thread-Safe Example: Multiple Model Instances
51
+
52
+ Similarly, here is an unsafe pattern with multiple YOLO model instances:
53
+
54
+ ```python
55
+ # Unsafe: Sharing multiple model instances across threads can still lead to issues
56
+ from ultralytics import YOLO
57
+ from threading import Thread
58
+
59
+ # Instantiate multiple models outside the thread
60
+ shared_model_1 = YOLO("yolov8n_1.pt")
61
+ shared_model_2 = YOLO("yolov8n_2.pt")
62
+
63
+
64
+ def predict(model, image_path):
65
+     results = model.predict(image_path)
66
+     # Process results
67
+
68
+
69
+ # Starting threads with individual model instances
70
+ Thread(target=predict, args=(shared_model_1, "image1.jpg")).start()
71
+ Thread(target=predict, args=(shared_model_2, "image2.jpg")).start()
72
+ ```
73
+
74
+ Even though there are two separate model instances, the risk of concurrency issues still exists. If the internal implementation of `YOLO` is not thread-safe, using separate instances might not prevent race conditions, especially if these instances share any underlying resources or states that are not thread-local.
75
+
76
+ ## Thread-Safe Inference
77
+
78
+ To perform thread-safe inference, you should instantiate a separate YOLO model within each thread. This ensures that each thread has its own isolated model instance, eliminating the risk of race conditions.
79
+
80
+ ### Thread-Safe Example
81
+
82
+ Here's how to instantiate a YOLO model inside each thread for safe parallel inference:
83
+
84
+ ```python
85
+ # Safe: Instantiating a single model inside each thread
86
+ from ultralytics import YOLO
87
+ from threading import Thread
88
+
89
+
90
+ def thread_safe_predict(image_path):
91
+     # Instantiate a new model inside the thread
91
+     local_model = YOLO("yolov8n.pt")
92
+     results = local_model.predict(image_path)
93
+     # Process results
95
+
96
+
97
+ # Starting threads that each have their own model instance
98
+ Thread(target=thread_safe_predict, args=("image1.jpg",)).start()
99
+ Thread(target=thread_safe_predict, args=("image2.jpg",)).start()
100
+ ```
101
+
102
+ In this example, each thread creates its own `YOLO` instance. This prevents any thread from interfering with the model state of another, thus ensuring that each thread performs inference safely and without unexpected interactions with the other threads.
103
+
104
+ ## Conclusion
105
+
106
+ When using YOLO models with Python's `threading`, always instantiate your models within the thread that will use them to ensure thread safety. This practice avoids race conditions and makes sure that your inference tasks run reliably.
107
+
108
+ For more advanced scenarios and to further optimize your multi-threaded inference performance, consider using process-based parallelism with `multiprocessing` or leveraging a task queue with dedicated worker processes.
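+
+ As a rough sketch of the `multiprocessing` approach, each worker process loads its own model, so nothing is shared between them. Loading the model inside the function keeps the example short; a real application would typically initialize one model per worker instead:
+
+ ```python
+ # Safe: each process gets its own interpreter, GIL, and model instance
+ from multiprocessing import Pool
+
+
+ def count_detections(image_path):
+     from ultralytics import YOLO
+
+     model = YOLO("yolov8n.pt")  # created inside the worker process
+     results = model.predict(image_path)
+     return len(results[0].boxes)  # return something picklable, not the raw results
+
+
+ if __name__ == "__main__":
+     with Pool(processes=2) as pool:
+         print(pool.map(count_detections, ["image1.jpg", "image2.jpg"]))
+ ```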
docs/en/help/CI.md ADDED
@@ -0,0 +1,61 @@
1
+ ---
2
+ comments: true
3
+ description: Learn how Ultralytics leverages Continuous Integration (CI) for maintaining high-quality code. Explore our CI tests and the status of these tests for our repositories.
4
+ keywords: continuous integration, software development, CI tests, Ultralytics repositories, high-quality code, Docker Deployment, Broken Links, CodeQL, PyPi Publishing
5
+ ---
6
+
7
+ # Continuous Integration (CI)
8
+
9
+ Continuous Integration (CI) is an essential aspect of software development which involves integrating changes and testing them automatically. CI allows us to maintain high-quality code by catching issues early and often in the development process. At Ultralytics, we use various CI tests to ensure the quality and integrity of our codebase.
10
+
11
+ ## CI Actions
12
+
13
+ Here's a brief description of our CI actions:
14
+
15
+ - **[CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml):** This is our primary CI test that involves running unit tests, linting checks, and sometimes more comprehensive tests depending on the repository.
16
+ - **[Docker Deployment](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml):** This test checks the deployment of the project using Docker to ensure the Dockerfile and related scripts are working correctly.
17
+ - **[Broken Links](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml):** This test scans the codebase for any broken or dead links in our markdown or HTML files.
18
+ - **[CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml):** CodeQL is a tool from GitHub that performs semantic analysis on our code, helping to find potential security vulnerabilities and maintain high-quality code.
19
+ - **[PyPi Publishing](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml):** This test checks if the project can be packaged and published to PyPi without any errors.
20
+
21
+ ### CI Results
22
+
23
+ Below is the table showing the status of these CI tests for our main repositories:
24
+
25
+ | Repository | CI | Docker Deployment | Broken Links | CodeQL | PyPi and Docs Publishing |
26
+ |-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
27
+ | [yolov3](https://github.com/ultralytics/yolov3) | [![YOLOv3 CI](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov3/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov3/actions/workflows/codeql-analysis.yml) | |
28
+ | [yolov5](https://github.com/ultralytics/yolov5) | [![YOLOv5 CI](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml) | [![Publish Docker Images](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/docker.yml) | [![Check Broken links](https://github.com/ultralytics/yolov5/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ultralytics/yolov5/actions/workflows/codeql-analysis.yml) | |
29
+ | [ultralytics](https://github.com/ultralytics/ultralytics) | [![ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml) | [![Publish Docker Images](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/docker.yaml) | [![Check Broken links](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/links.yml) | [![CodeQL](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/codeql.yaml) | [![Publish to PyPI and Deploy Docs](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/publish.yml) |
30
+ | [hub](https://github.com/ultralytics/hub) | [![HUB CI](https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg)](https://github.com/ultralytics/hub/actions/workflows/ci.yaml) | | [![Check Broken links](https://github.com/ultralytics/hub/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/hub/actions/workflows/links.yml) | | |
31
+ | [docs](https://github.com/ultralytics/docs) | | | [![Check Broken links](https://github.com/ultralytics/docs/actions/workflows/links.yml/badge.svg)](https://github.com/ultralytics/docs/actions/workflows/links.yml)[![Check Domains](https://github.com/ultralytics/docs/actions/workflows/check_domains.yml/badge.svg)](https://github.com/ultralytics/docs/actions/workflows/check_domains.yml) | | [![pages-build-deployment](https://github.com/ultralytics/docs/actions/workflows/pages/pages-build-deployment/badge.svg)](https://github.com/ultralytics/docs/actions/workflows/pages/pages-build-deployment) |
32
+
33
+ Each badge shows the status of the last run of the corresponding CI test on the `main` branch of the respective repository. If a test fails, the badge will display a "failing" status, and if it passes, it will display a "passing" status.
34
+
35
+ If you notice a test failing, it would be a great help if you could report it through a GitHub issue in the respective repository.
36
+
37
+ Remember, a successful CI test does not mean that everything is perfect. It is always recommended to manually review the code before deployment or merging changes.
38
+
39
+ ## Code Coverage
40
+
41
+ Code coverage is a metric that represents the percentage of your codebase that is executed when your tests run. It provides insight into how well your tests exercise your code and can be crucial in identifying untested parts of your application. A high code coverage percentage is often associated with a lower likelihood of bugs. However, it's essential to understand that code coverage does not guarantee the absence of defects. It merely indicates which parts of the code have been executed by the tests.
42
+
43
+ ### Integration with [codecov.io](https://codecov.io/)
44
+
45
+ At Ultralytics, we have integrated our repositories with [codecov.io](https://codecov.io/), a popular online platform for measuring and visualizing code coverage. Codecov provides detailed insights, coverage comparisons between commits, and visual overlays directly on your code, indicating which lines were covered.
46
+
47
+ By integrating with Codecov, we aim to maintain and improve the quality of our code by focusing on areas that might be prone to errors or need further testing.
48
+
49
+ ### Coverage Results
50
+
51
+ To quickly get a glimpse of the code coverage status of the `ultralytics` python package, we have included a badge and sunburst visual of the `ultralytics` coverage results. These images show the percentage of code covered by our tests, offering an at-a-glance metric of our testing efforts. For full details please see https://codecov.io/github/ultralytics/ultralytics.
52
+
53
+ | Repository | Code Coverage |
54
+ |-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
55
+ | [ultralytics](https://github.com/ultralytics/ultralytics) | [![codecov](https://codecov.io/gh/ultralytics/ultralytics/branch/main/graph/badge.svg?token=HHW7IIVFVY)](https://codecov.io/gh/ultralytics/ultralytics) |
56
+
57
+ In the sunburst graphic below, the innermost circle represents the entire project; moving away from the center, the slices represent folders and, finally, individual files. The size and color of each slice represent the number of statements and the coverage, respectively.
58
+
59
+ <a href="https://codecov.io/github/ultralytics/ultralytics">
60
+ <img src="https://codecov.io/gh/ultralytics/ultralytics/branch/main/graphs/sunburst.svg?token=HHW7IIVFVY" alt="Ultralytics Codecov Image">
61
+ </a>
docs/en/help/CLA.md ADDED
@@ -0,0 +1,28 @@
1
+ ---
2
+ description: Understand terms governing contributions to Ultralytics projects including source code, bug fixes, documentation and more. Read our Contributor License Agreement.
3
+ keywords: Ultralytics, Contributor License Agreement, Open Source Software, Contributions, Copyright License, Patent License, Moral Rights
4
+ ---
5
+
6
+ # Ultralytics Individual Contributor License Agreement
7
+
8
+ Thank you for your interest in contributing to open source software projects (“Projects”) made available by Ultralytics Inc. (“Ultralytics”). This Individual Contributor License Agreement (“Agreement”) sets out the terms governing any source code, object code, bug fixes, configuration changes, tools, specifications, documentation, data, materials, feedback, information or other works of authorship that you submit or have submitted, in any form and in any manner, to Ultralytics in respect of any Projects (collectively “Contributions”). If you have any questions respecting this Agreement, please contact hello@ultralytics.com.
9
+
10
+ You agree that the following terms apply to all of your past, present and future Contributions. Except for the licenses granted in this Agreement, you retain all of your right, title and interest in and to your Contributions.
11
+
12
+ **Copyright License.** You hereby grant, and agree to grant, to Ultralytics a non-exclusive, perpetual, irrevocable, worldwide, fully-paid, royalty-free, transferable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute your Contributions and such derivative works, with the right to sublicense the foregoing rights through multiple tiers of sublicensees.
13
+
14
+ **Patent License.** You hereby grant, and agree to grant, to Ultralytics a non-exclusive, perpetual, irrevocable, worldwide, fully-paid, royalty-free, transferable patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer your Contributions, where such license applies only to those patent claims licensable by you that are necessarily infringed by your Contributions alone or by combination of your Contributions with the Project to which such Contributions were submitted, with the right to sublicense the foregoing rights through multiple tiers of sublicensees.
15
+
16
+ **Moral Rights.** To the fullest extent permitted under applicable law, you hereby waive, and agree not to assert, all of your “moral rights” in or relating to your Contributions for the benefit of Ultralytics, its assigns, and their respective direct and indirect sublicensees.
17
+
18
+ **Third Party Content/Rights.** If your Contribution includes or is based on any source code, object code, bug fixes, configuration changes, tools, specifications, documentation, data, materials, feedback, information or other works of authorship that were not authored by you (“Third Party Content”) or if you are aware of any third party intellectual property or proprietary rights associated with your Contribution (“Third Party Rights”), then you agree to include with the submission of your Contribution full details respecting such Third Party Content and Third Party Rights, including, without limitation, identification of which aspects of your Contribution contain Third Party Content or are associated with Third Party Rights, the owner/author of the Third Party Content and Third Party Rights, where you obtained the Third Party Content, and any applicable third party license terms or restrictions respecting the Third Party Content and Third Party Rights. For greater certainty, the foregoing obligations respecting the identification of Third Party Content and Third Party Rights do not apply to any portion of a Project that is incorporated into your Contribution to that same Project.
19
+
20
+ **Representations.** You represent that, other than the Third Party Content and Third Party Rights identified by you in accordance with this Agreement, you are the sole author of your Contributions and are legally entitled to grant the foregoing licenses and waivers in respect of your Contributions. If your Contributions were created in the course of your employment with your past or present employer(s), you represent that such employer(s) has authorized you to make your Contributions on behalf of such employer(s) or such employer(s) has waived all of their right, title or interest in or to your Contributions.
21
+
22
+ **Disclaimer.** To the fullest extent permitted under applicable law, your Contributions are provided on an "as is" basis, without any warranties or conditions, express or implied, including, without limitation, any implied warranties or conditions of non-infringement, merchantability or fitness for a particular purpose. You are not required to provide support for your Contributions, except to the extent you desire to provide support.
23
+
24
+ **No Obligation.** You acknowledge that Ultralytics is under no obligation to use or incorporate your Contributions into any of the Projects. The decision to use or incorporate your Contributions into any of the Projects will be made at the sole discretion of Ultralytics or its authorized delegates.
25
+
26
+ **Disputes.** This Agreement shall be governed by and construed in accordance with the laws of the State of New York, United States of America, without giving effect to its principles or rules regarding conflicts of laws, other than such principles directing application of New York law. The parties hereby submit to venue in, and jurisdiction of the courts located in New York, New York for purposes relating to this Agreement. In the event that any of the provisions of this Agreement shall be held by a court or other tribunal of competent jurisdiction to be unenforceable, the remaining portions hereof shall remain in full force and effect.
27
+
28
+ **Assignment.** You agree that Ultralytics may assign this Agreement, and all of its rights, obligations and licenses hereunder.
docs/en/help/FAQ.md ADDED
@@ -0,0 +1,39 @@
1
+ ---
2
+ comments: true
3
+ description: Find solutions to your common Ultralytics YOLO related queries. Learn about hardware requirements, fine-tuning YOLO models, conversion to ONNX/TensorFlow, and more.
4
+ keywords: Ultralytics, YOLO, FAQ, hardware requirements, ONNX, TensorFlow, real-time detection, YOLO accuracy
5
+ ---
6
+
7
+ # Ultralytics YOLO Frequently Asked Questions (FAQ)
8
+
9
+ This FAQ section addresses some common questions and issues users might encounter while working with Ultralytics YOLO repositories.
10
+
11
+ ## 1. What are the hardware requirements for running Ultralytics YOLO?
12
+
13
+ Ultralytics YOLO can be run on a variety of hardware configurations, including CPUs, GPUs, and even some edge devices. However, for optimal performance and faster training and inference, we recommend using a GPU with a minimum of 8GB of memory. NVIDIA GPUs with CUDA support are ideal for this purpose.
14
+
15
+ ## 2. How do I fine-tune a pre-trained YOLO model on my custom dataset?
16
+
17
+ To fine-tune a pre-trained YOLO model on your custom dataset, you'll need to create a dataset configuration file (YAML) that defines the dataset's properties, such as the path to the images, the number of classes, and class names. Next, you'll need to modify the model configuration file to match the number of classes in your dataset. Finally, use the `train.py` script to start the training process with your custom dataset and the pre-trained model. You can find a detailed guide on fine-tuning YOLO in the Ultralytics documentation.
18
+
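+ As a minimal sketch using the `ultralytics` Python package, fine-tuning from pretrained weights looks like this. The dataset file `custom_data.yaml` is a placeholder that would list your train/val image paths and class names, and the epoch and image-size values are illustrative only:
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")  # start from pretrained weights
+ model.train(data="custom_data.yaml", epochs=100, imgsz=640)
+ ```
+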
19
+ ## 3. How do I convert a YOLO model to ONNX or TensorFlow format?
20
+
21
+ Ultralytics provides built-in support for converting YOLO models to ONNX format. You can use the `export.py` script to convert a saved model to ONNX format. If you need to convert the model to TensorFlow format, you can use the ONNX model as an intermediary and then use the ONNX-TensorFlow converter to convert the ONNX model to TensorFlow format.
22
+
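+ With the `ultralytics` Python package, a minimal export sketch looks like this; the resulting ONNX file can then be converted further with the ONNX-TensorFlow tooling if needed:
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")
+ onnx_path = model.export(format="onnx")  # writes e.g. yolov8n.onnx and returns its path
+ ```
+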
23
+ ## 4. Can I use Ultralytics YOLO for real-time object detection?
24
+
25
+ Yes, Ultralytics YOLO is designed to be efficient and fast, making it suitable for real-time object detection tasks. The actual performance will depend on your hardware configuration and the complexity of the model. Using a GPU and optimizing the model for your specific use case can help achieve real-time performance.
26
+
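+ As a small sketch of streaming inference, `stream=True` returns a generator so frames are processed one at a time; here `source=0` is assumed to be the default webcam:
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")
+ for result in model.predict(source=0, stream=True):
+     print(f"{len(result.boxes)} objects in frame")
+ ```
+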
27
+ ## 5. How can I improve the accuracy of my YOLO model?
28
+
29
+ Improving the accuracy of a YOLO model may involve several strategies (a brief training sketch follows the list), such as:
30
+
31
+ - Fine-tuning the model on more annotated data
32
+ - Data augmentation to increase the variety of training samples
33
+ - Using a larger or more complex model architecture
34
+ - Adjusting the learning rate, batch size, and other hyperparameters
35
+ - Using techniques like transfer learning or knowledge distillation
36
+
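+ As a brief sketch of the hyperparameter-adjustment point above (the dataset file and all values are placeholders; suitable settings depend entirely on your data):
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")
+ model.train(data="custom_data.yaml", epochs=200, imgsz=640, batch=32, lr0=0.005)
+ ```
+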
37
+ Remember that there's often a trade-off between accuracy and inference speed, so finding the right balance is crucial for your specific application.
38
+
39
+ If you have any more questions or need assistance, don't hesitate to consult the Ultralytics documentation or reach out to the community through GitHub Issues or the official discussion forum.
docs/en/help/code_of_conduct.md ADDED
@@ -0,0 +1,85 @@
1
+ ---
2
+ comments: true
3
+ description: Explore Ultralytics community’s Code of Conduct, ensuring a supportive, inclusive environment for contributors & members at all levels. Find our guidelines on acceptable behavior & enforcement.
4
+ keywords: Ultralytics, code of conduct, community, contribution, behavior guidelines, enforcement, open source contributions
5
+ ---
6
+
7
+ # Ultralytics Contributor Covenant Code of Conduct
8
+
9
+ ## Our Pledge
10
+
11
+ We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socioeconomic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
12
+
13
+ We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
14
+
15
+ ## Our Standards
16
+
17
+ Examples of behavior that contributes to a positive environment for our community include:
18
+
19
+ - Demonstrating empathy and kindness toward other people
20
+ - Being respectful of differing opinions, viewpoints, and experiences
21
+ - Giving and gracefully accepting constructive feedback
22
+ - Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
23
+ - Focusing on what is best not just for us as individuals, but for the overall community
24
+
25
+ Examples of unacceptable behavior include:
26
+
27
+ - The use of sexualized language or imagery, and sexual attention or advances of any kind
28
+ - Trolling, insulting or derogatory comments, and personal or political attacks
29
+ - Public or private harassment
30
+ - Publishing others' private information, such as a physical or email address, without their explicit permission
31
+ - Other conduct which could reasonably be considered inappropriate in a professional setting
32
+
33
+ ## Enforcement Responsibilities
34
+
35
+ Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
36
+
37
+ Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
38
+
39
+ ## Scope
40
+
41
+ This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
42
+
43
+ ## Enforcement
44
+
45
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at hello@ultralytics.com. All complaints will be reviewed and investigated promptly and fairly.
46
+
47
+ All community leaders are obligated to respect the privacy and security of the reporter of any incident.
48
+
49
+ ## Enforcement Guidelines
50
+
51
+ Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
52
+
53
+ ### 1. Correction
54
+
55
+ **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
56
+
57
+ **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
58
+
59
+ ### 2. Warning
60
+
61
+ **Community Impact**: A violation through a single incident or series of actions.
62
+
63
+ **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
64
+
65
+ ### 3. Temporary Ban
66
+
67
+ **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
68
+
69
+ **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
70
+
71
+ ### 4. Permanent Ban
72
+
73
+ **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
74
+
75
+ **Consequence**: A permanent ban from any sort of public interaction within the community.
76
+
77
+ ## Attribution
78
+
79
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
80
+
81
+ Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
82
+
83
+ For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
84
+
85
+ [homepage]: https://www.contributor-covenant.org
docs/en/help/contributing.md ADDED
@@ -0,0 +1,131 @@
1
+ ---
2
+ comments: true
3
+ description: Learn how to contribute to Ultralytics YOLO projects – guidelines for pull requests, reporting bugs, code conduct and CLA signing.
4
+ keywords: Ultralytics, YOLO, open-source, contribute, pull request, bug report, coding guidelines, CLA, code of conduct, GitHub
5
+ ---
6
+
7
+ # Contributing to Ultralytics Open-Source YOLO Repositories
8
+
9
+ First of all, thank you for your interest in contributing to Ultralytics open-source YOLO repositories! Your contributions will help improve the project and benefit the community. This document provides guidelines and best practices to get you started.
10
+
11
+ ## Table of Contents
12
+
13
+ 1. [Code of Conduct](#code-of-conduct)
14
+ 2. [Contributing via Pull Requests](#contributing-via-pull-requests)
15
+ - [CLA Signing](#cla-signing)
16
+ - [Google-Style Docstrings](#google-style-docstrings)
17
+ - [GitHub Actions CI Tests](#github-actions-ci-tests)
18
+ 3. [Reporting Bugs](#reporting-bugs)
19
+ 4. [License](#license)
20
+ 5. [Conclusion](#conclusion)
21
+
22
+ ## Code of Conduct
23
+
24
+ All contributors are expected to adhere to the [Code of Conduct](code_of_conduct.md) to ensure a welcoming and inclusive environment for everyone.
25
+
26
+ ## Contributing via Pull Requests
27
+
28
+ We welcome contributions in the form of pull requests. To make the review process smoother, please follow these guidelines:
29
+
30
+ 1. **[Fork the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo)**: Fork the Ultralytics YOLO repository to your own GitHub account.
31
+
32
+ 2. **[Create a branch](https://docs.github.com/en/desktop/making-changes-in-a-branch/managing-branches-in-github-desktop)**: Create a new branch in your forked repository with a descriptive name for your changes.
33
+
34
+ 3. **Make your changes**: Make the changes you want to contribute. Ensure that your changes follow the coding style of the project and do not introduce new errors or warnings.
35
+
36
+ 4. **[Test your changes](https://github.com/ultralytics/ultralytics/tree/main/tests)**: Test your changes locally to ensure that they work as expected and do not introduce new issues.
37
+
38
+ 5. **[Commit your changes](https://docs.github.com/en/desktop/making-changes-in-a-branch/committing-and-reviewing-changes-to-your-project-in-github-desktop)**: Commit your changes with a descriptive commit message. Make sure to include any relevant issue numbers in your commit message.
39
+
40
+ 6. **[Create a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)**: Create a pull request from your forked repository to the main Ultralytics YOLO repository. In the pull request description, provide a clear explanation of your changes and how they improve the project.
41
+
42
+ ### CLA Signing
43
+
44
+ Before we can accept your pull request, you need to sign a [Contributor License Agreement (CLA)](CLA.md). This is a legal document stating that you agree to the terms of contributing to the Ultralytics YOLO repositories. The CLA ensures that your contributions are properly licensed and that the project can continue to be distributed under the AGPL-3.0 license.
45
+
46
+ To sign the CLA, follow the instructions provided by the CLA bot after you submit your PR and add a comment in your PR saying:
47
+
48
+ ```
49
+ I have read the CLA Document and I sign the CLA
50
+ ```
51
+
52
+ ### Google-Style Docstrings
53
+
54
+ When adding new functions or classes, please include a [Google-style docstring](https://google.github.io/styleguide/pyguide.html) to provide clear and concise documentation for other developers. This will help ensure that your contributions are easy to understand and maintain.
55
+
56
+ !!! Example "Example Docstrings"
57
+
58
+ === "Google-style"
59
+
60
+ This example shows a Google-style docstring. Note that both input and output `types` must always be enclosed by parentheses, i.e. `(bool)`.
61
+ ```python
62
+ def example_function(arg1, arg2=4):
63
+ """
64
+ Example function that demonstrates Google-style docstrings.
65
+
66
+ Args:
67
+ arg1 (int): The first argument.
68
+ arg2 (int): The second argument. Default value is 4.
69
+
70
+ Returns:
71
+ (bool): True if successful, False otherwise.
72
+
73
+ Examples:
74
+ >>> result = example_function(1, 2) # returns False
75
+ """
76
+ if arg1 == arg2:
77
+ return True
78
+ return False
79
+ ```
80
+
81
+ === "Google-style with type hints"
82
+
83
+ This example shows a Google-style docstring together with argument and return type hints; neither is required, and one can be used without the other.
84
+ ```python
85
+ def example_function(arg1: int, arg2: int = 4) -> bool:
86
+ """
87
+ Example function that demonstrates Google-style docstrings.
88
+
89
+ Args:
90
+ arg1: The first argument.
91
+ arg2: The second argument. Default value is 4.
92
+
93
+ Returns:
94
+ True if successful, False otherwise.
95
+
96
+ Examples:
97
+ >>> result = example_function(1, 2) # returns False
98
+ """
99
+ if arg1 == arg2:
100
+ return True
101
+ return False
102
+ ```
103
+
104
+ === "Single-line"
105
+
106
+ Smaller or simpler functions can utilize a single-line docstring. Note the docstring must use 3 double-quotes, and be a complete sentence starting with a capital letter and ending with a period.
107
+ ```python
108
+ def example_small_function(arg1: int, arg2: int = 4) -> bool:
109
+ """Example function that demonstrates a single-line docstring."""
110
+ return arg1 == arg2
111
+ ```
112
+
113
+ ### GitHub Actions CI Tests
114
+
115
+ Before your pull request can be merged, all GitHub Actions [Continuous Integration](CI.md) (CI) tests must pass. These tests include linting, unit tests, and other checks to ensure that your changes meet the quality standards of the project. Make sure to review the output of the GitHub Actions and fix any issues.
116
+
117
+ ## Reporting Bugs
118
+
119
+ We appreciate bug reports as they play a crucial role in maintaining the project's quality. When reporting bugs it is important to provide a [Minimum Reproducible Example](minimum_reproducible_example.md): a clear, concise code example that replicates the issue. This helps in quick identification and resolution of the bug.
120
+
121
+ ## License
122
+
123
+ Ultralytics embraces the [GNU Affero General Public License v3.0 (AGPL-3.0)](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) for its repositories, promoting openness, transparency, and collaborative enhancement in software development. This strong copyleft license ensures that all users and developers retain the freedom to use, modify, and share the software. It fosters community collaboration, ensuring that any improvements remain accessible to all.
124
+
125
+ Users and developers are encouraged to familiarize themselves with the terms of AGPL-3.0 to contribute effectively and ethically to the Ultralytics open-source community.
126
+
127
+ ## Conclusion
128
+
129
+ Thank you for your interest in contributing to [Ultralytics open-source](https://github.com/ultralytics) YOLO projects. Your participation is crucial in shaping the future of our software and fostering a community of innovation and collaboration. Whether you're improving code, reporting bugs, or suggesting features, your contributions make a significant impact.
130
+
131
+ We're eager to see your ideas in action and appreciate your commitment to advancing object detection technology. Let's continue to grow and innovate together in this exciting open-source journey. Happy coding! 🚀🌟