---

license: apache-2.0
---


Set up Cookiecutter-MLOps in Hugging Face
==============================================

 1. Create a Model repository in Hugging Face (e.g. `myHFrepo`); a CLI alternative is sketched after the code block below.
 2. In your local directory, run:

```bash
# Change into the directory where the repository should live
cd /path/to/parent-directory-of-project-folder

git clone git@hf.co:USERNAME/myHFrepo
```
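
If you prefer working from the command line, the repository from step 1 can also be created with the `huggingface_hub` CLI. This is a minimal sketch, assuming you have a Hugging Face access token with write permission; `myHFrepo` is the same example name used above:

```bash
# Install the Hugging Face Hub client and authenticate with an access token
pip install -U huggingface_hub
huggingface-cli login

# Create the model repository used in the clone command above (example name)
huggingface-cli repo create myHFrepo --type model
```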

	

For details on connecting to Hugging Face over SSH, see the Hugging Face documentation on SSH keys; a minimal key-setup sketch follows.
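
A rough sketch of the key setup, assuming an ed25519 key; the email is a placeholder, and the public key must be added manually in your Hugging Face account settings:

```bash
# Generate a key pair (replace the placeholder email)
ssh-keygen -t ed25519 -C "you@example.com"

# Print the public key, then paste it into your Hugging Face account settings (SSH & GPG Keys)
cat ~/.ssh/id_ed25519.pub

# Verify that the SSH connection works
ssh -T git@hf.co
```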

 3. Create and activate a virtual environment inside the repository:

```bash
cd myHFrepo
python -m venv jointvenv
source jointvenv/bin/activate
```



 4. Follow the instructions from the DagsHub/Cookiecutter-MLOps repository, merging its `.gitattributes` and `README.md` into yours:

```bash
git clone https://dagshub.com/DagsHub/Cookiecutter-MLOps.git

# Drop the template's git history
rm -r /path/to/myHFrepo/Cookiecutter-MLOps/.git

# Append the template's .gitattributes and README.md to yours, then remove the originals
cat /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes >> /path/to/myHFrepo/.gitattributes
rm /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes

cat /path/to/myHFrepo/Cookiecutter-MLOps/README.md >> /path/to/myHFrepo/README.md
rm /path/to/myHFrepo/Cookiecutter-MLOps/README.md

git add README.md
git commit -m "Paste README info from DagsHub/Cookiecutter-MLOps"

git add .gitattributes
git commit -m "Paste .gitattributes info from DagsHub/Cookiecutter-MLOps"
```

 5. Move the template contents into the repository root, remove the now-empty folder, and commit:

```bash
cd /path/to/myHFrepo/Cookiecutter-MLOps
mv * .[^.]* ..
cd /path/to/myHFrepo
rm -r /path/to/myHFrepo/Cookiecutter-MLOps

# Keep the virtual environment out of version control
echo '' >> .gitignore
echo '# Virtual Environment' >> .gitignore
echo 'jointvenv/' >> .gitignore

git add .
git commit -m "add remaining DagsHub/Cookiecutter-MLOps repo content"
```

 6. Create the directory structure, install the requirements, and push:

```bash
make dirs
make requirements

# Keep the template's external requirements separate from the frozen environment
mv requirements.txt requirementsCookiecutter-MLOps.txt
git add requirementsCookiecutter-MLOps.txt
git commit -m "external requirements from Cookiecutter-MLOps"

pip freeze > requirements.txt
git add requirements.txt
git commit -m "First report venv requirements"

git push origin main
```


 7. Create a Model repository in your Hugging Face organization (e.g. `mywslHFrepo`) and merge it with your individual repository:

```bash
git remote add dcc git@hf.co:MYORG/mywslHFrepo
git pull dcc main --allow-unrelated-histories

# Resolve any conflicts in .gitattributes and README.md (see the sketch below), then:
git add .
git commit -m "Merge HuggingFace individual and organization repos"
git push dcc main
```
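
If the pull stops on merge conflicts, one way to finish is sketched below; keeping this repository's versions of both files is only an example choice, so adapt it to what you actually want to keep:

```bash
# Example only: keep the local (individual-repo) versions of the conflicting files
git checkout --ours .gitattributes README.md
git add .gitattributes README.md
# then continue with the commit and push shown above
```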





Cookiecutter-MLOps
==============================

A cookiecutter template that applies MLOps best practices, so you can focus on building machine learning products.

Instructions
------------
1. Clone the repo.
2. Run `make dirs` to create the missing parts of the directory structure described below.
3. *Optional:* Run `make virtualenv` to create a python virtual environment. Skip if using conda or some other env manager.
    1. Run `source env/bin/activate` to activate the virtualenv.
4. Run `make requirements` to install required python packages.
5. Put the raw data in `data/raw`.
6. To save the raw data to the DVC cache, run `dvc add data/raw` (see the DVC workflow sketch after this list).
7. Edit the code files to your heart's desire.
8. Process your data, train and evaluate your model using `dvc repro` or `make reproduce`
9. To run the pre-commit hooks, run `make pre-commit-install`
10. For setting up the data validation tests, run `make setup-data-validation`
11. For **running** the data validation tests, run `make run-data-validation`
12. When you're happy with the result, commit files (including .dvc files) to git.
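
A minimal sketch of the DVC part of this workflow (steps 6, 8, and 12), assuming the stage definitions that ship with the template's `dvc.yaml`:

```bash
# Track the raw data with DVC and commit the resulting pointer file to git
dvc add data/raw
git add data/raw.dvc data/.gitignore
git commit -m "Track raw data with DVC"

# Re-run the pipeline stages defined in dvc.yaml (same as `make reproduce`)
dvc repro

# Commit the updated lock file so the run is reproducible
git add dvc.lock
git commit -m "Reproduce pipeline"
```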

Project Organization
------------

    ├── LICENSE
    ├── Makefile           <- Makefile with commands like `make dirs` or `make clean`
    ├── README.md          <- The top-level README for developers using this project.
    ├── data
    │   ├── processed      <- The final, canonical data sets for modeling.
    │   └── raw            <- The original, immutable data dump
    │
    ├── models             <- Trained and serialized models, model predictions, or model summaries
    │
    ├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
    │                         the creator's initials, and a short `-` delimited description, e.g.
    │                         `1.0-jqp-initial-data-exploration`.
    │
    ├── references         <- Data dictionaries, manuals, and all other explanatory materials.
    │
    ├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
    │   ├── figures        <- Generated graphics and figures to be used in reporting
    │   ├── metrics.txt    <- Relevant metrics after evaluating the model.
    │   └── training_metrics.txt    <- Relevant metrics from training the model.
    │
    ├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
    │                         generated with `pip freeze > requirements.txt`
    │
    ├── setup.py           <- Makes the project pip installable (pip install -e .) so src can be imported
    ├── src                <- Source code for use in this project.
    │   ├── __init__.py    <- Makes src a Python module
    │   │
    │   ├── data           <- Scripts to download or generate data
    │   │   ├── great_expectations  <- Folder containing data integrity check files
    │   │   ├── make_dataset.py
    │   │   └── data_validation.py  <- Script to run data integrity checks
    │   │
    │   ├── models         <- Scripts to train models and then use trained models to make
    │   │   │                 predictions
    │   │   ├── predict_model.py
    │   │   └── train_model.py
    │   │
    │   └── visualization  <- Scripts to create exploratory and results oriented visualizations
    │       └── visualize.py
    │
    ├── .pre-commit-config.yaml  <- pre-commit hooks file with selected hooks for the project.
    ├── dvc.lock           <- Records the state of the ML pipeline stages for reproducibility.
    └── dvc.yaml           <- Defines the ML pipeline stages, e.g. training a model on the processed data.
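
For illustration only, a stage like the ones `dvc.yaml` defines could be registered with `dvc stage add`; the stage name, dependencies, and outputs below are hypothetical and may differ from the template's actual pipeline:

```bash
# Hypothetical training stage; adapt the name, dependencies, and outputs to the real dvc.yaml
dvc stage add -n train \
    -d src/models/train_model.py -d data/processed \
    -o models/model.pkl \
    -M reports/training_metrics.txt \
    python src/models/train_model.py
```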



--------

<p><small>Project based on the <a target="_blank" href="https://drivendata.github.io/cookiecutter-data-science/">cookiecutter data science project template</a>. #cookiecutterdatascience</small></p>


---

To create a project like this, just go to https://dagshub.com/repo/create and select the **Cookiecutter DVC** project template.

Made with 🐢 by [DAGsHub](https://dagshub.com/).