---

license: apache-2.0
---

Cookiecutter-MLOps
==============================

A cookiecutter template that bakes in MLOps best practices, so you can focus on building machine learning products.

Instructions
------------
1. Clone the repo.
2. Run `make dirs` to create the missing parts of the directory structure described below.
3. *Optional:* Run `make virtualenv` to create a Python virtual environment. Skip this step if you use conda or another environment manager.
    1. Run `source env/bin/activate` to activate the virtualenv.
4. Run `make requirements` to install the required Python packages.
5. Put the raw data in `data/raw`.
6. To save the raw data to the DVC cache, run `dvc add data/raw`.
7. Edit the code files to your heart's desire.
8. Process your data, then train and evaluate your model with `dvc repro` or `make reproduce`.
9. To install the pre-commit hooks, run `make pre-commit-install`.
10. To set up the data validation tests, run `make setup-data-validation`.
11. To **run** the data validation tests, run `make run-data-validation`.
12. When you're happy with the result, commit the files (including the `.dvc` files) to Git. A consolidated command sketch follows this list.
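
As a quick reference, the steps above boil down to something like the following shell session. This is only a sketch: it assumes a Unix-like shell, the default `make` targets shipped with the template, and an illustrative path for your raw data.

    # environment setup
    make dirs                    # create the missing directories
    make virtualenv              # optional: create a Python virtual environment
    source env/bin/activate      # activate it (skip if you use conda or similar)
    make requirements            # install the required Python packages

    # data versioning and pipeline runs
    cp -r /path/to/your/data/* data/raw/   # illustrative: put the raw data in place
    dvc add data/raw             # save the raw data to the DVC cache
    dvc repro                    # process data, train, and evaluate (or: make reproduce)

    # quality checks
    make pre-commit-install      # install the pre-commit hooks
    make setup-data-validation   # set up the data validation tests
    make run-data-validation     # run the data validation tests

    # commit the results, including the DVC pointer files
    git add .
    git commit -m "Run the pipeline"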

Project Organization
------------

    β”œβ”€β”€ LICENSE
    β”œβ”€β”€ Makefile           <- Makefile with commands like `make dirs` or `make clean`
    β”œβ”€β”€ README.md          <- The top-level README for developers using this project.
    β”œβ”€β”€ data
    β”‚Β Β  β”œβ”€β”€ processed      <- The final, canonical data sets for modeling.
    β”‚Β Β  └── raw            <- The original, immutable data dump
    β”‚
    β”œβ”€β”€ models             <- Trained and serialized models, model predictions, or model summaries
    β”‚
    β”œβ”€β”€ notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
    β”‚                         the creator's initials, and a short `-` delimited description, e.g.
    β”‚                         `1.0-jqp-initial-data-exploration`.
    β”œβ”€β”€ references         <- Data dictionaries, manuals, and all other explanatory materials.
    β”œβ”€β”€ reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
    β”‚   β”œβ”€β”€ figures        <- Generated graphics and figures to be used in reporting
    β”‚   β”œβ”€β”€ metrics.txt    <- Relevant metrics after evaluating the model.
    β”‚   └── training_metrics.txt <- Relevant metrics from training the model.
    β”‚
    β”œβ”€β”€ requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
    β”‚                         generated with `pip freeze > requirements.txt`
    β”‚
    β”œβ”€β”€ setup.py           <- makes project pip installable (pip install -e .) so src can be imported
    β”œβ”€β”€ src                <- Source code for use in this project.
    β”‚Β Β  β”œβ”€β”€ __init__.py    <- Makes src a Python module
    β”‚   β”‚
    β”‚Β Β  β”œβ”€β”€ data           <- Scripts to download or generate data
    β”‚Β Β  β”‚Β Β  β”œβ”€β”€ great_expectations  <- Folder containing data integrity check files
    β”‚Β Β  β”‚Β Β  β”œβ”€β”€ make_dataset.py
    β”‚Β Β  β”‚Β Β  └── data_validation.py  <- Script to run data integrity checks
    β”‚   β”‚
    β”‚Β Β  β”œβ”€β”€ models         <- Scripts to train models and then use trained models to make
    β”‚   β”‚   β”‚                 predictions
    β”‚Β Β  β”‚Β Β  β”œβ”€β”€ predict_model.py
    β”‚Β Β  β”‚Β Β  └── train_model.py
    β”‚   β”‚
    β”‚Β Β  └── visualization  <- Scripts to create exploratory and results oriented visualizations
    β”‚Β Β      └── visualize.py
    β”‚
    β”œβ”€β”€ .pre-commit-config.yaml  <- pre-commit configuration with the selected hooks for the project.
    β”œβ”€β”€ dvc.lock           <- Lock file recording the exact data, code, and output versions of each pipeline stage.
    └── dvc.yaml           <- Defines the ML pipeline stages (process the data, train and evaluate the model).
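
Once the raw data has been added, `dvc.yaml` drives the day-to-day workflow. The commands below are a minimal sketch of how the pipeline might be inspected and rerun; they assume the template's default stage definitions and that the metrics files under `reports/` are registered as DVC metrics (check the generated `dvc.yaml` to confirm).

    dvc dag             # show the pipeline stages defined in dvc.yaml
    dvc repro           # rerun only the stages whose inputs have changed
    dvc status          # compare the workspace against dvc.lock
    dvc metrics show    # print the tracked metrics, e.g. reports/metrics.txt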


--------

<p><small>Project based on the <a target="_blank" href="https://drivendata.github.io/cookiecutter-data-science/">cookiecutter data science project template</a>. #cookiecutterdatascience</small></p>


---

To create a project like this, just go to https://dagshub.com/repo/create and select the **Cookiecutter DVC** project template.

Made with 🐢 by [DAGsHub](https://dagshub.com/).