diff --git "a/how-to-guides.txt" "b/how-to-guides.txt"
--- "a/how-to-guides.txt"
+++ "b/how-to-guides.txt"
@@ -2,18 +2,18 @@
# ZenML Documentation Summary

-**ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines. It separates infrastructure from code, enhancing collaboration among developers.
+**ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines. It separates infrastructure from code, facilitating collaboration among developers.

## Key Features

### For MLOps Platform Engineers
-- **ZenML Pro**: Offers a managed instance with features like CI/CD, Model Control Plane, and RBAC.
-- **Self-hosted Deployment**: Deploy ZenML on any cloud provider using Terraform.
+- **ZenML Pro**: Offers a managed control plane with features like CI/CD, Model Control Plane, and RBAC.
+- **Self-hosted Deployment**: Deploy ZenML on any cloud provider using Terraform utilities.
```bash
zenml stack register --provider aws
zenml stack deploy --provider gcp
```
-- **Standardization**: Register staging and production environments as ZenML stacks for consistent ML workflows.
+- **Standardization**: Register environments as ZenML stacks for consistent MLOps tooling.
```bash
zenml orchestrator register kfp_orchestrator -f kubeflow
zenml stack register production --orchestrator kubeflow ...
@@ -31,9 +31,9 @@
```bash
python run.py # Local development
zenml stack set production
-python run.py # Production run
+python run.py # Run on production
```
-- **Pythonic SDK**: Use decorators to convert Python functions into ZenML pipelines.
+- **Pythonic SDK**: Use decorators to create pipelines.
```python
from zenml import pipeline, step
@@ -43,7 +43,7 @@
@step
def step_2(input_one: str, input_two: str) -> None:
-    print(input_one + ' ' + input_two)
+    print(f"{input_one} {input_two}")

@pipeline
def my_pipeline():
@@ -51,60 +51,66 @@
my_pipeline()
```
-- **Automatic Metadata Tracking**: ZenML tracks metadata and versions datasets and models.
+- **Automatic Metadata Tracking**: Tracks metadata of runs and versions datasets/models.

### For ML Engineers
-- **ML Lifecycle Management**: Manage ML workflows and environments easily.
+- **ML Lifecycle Management**: Manage ML workflows and environments efficiently.
```bash
zenml stack set staging
python run.py # Test on staging
zenml stack set production
python run.py # Run in production
```
-- **Reproducibility**: Automatically track and version all components for easy result reproduction.
-- **Automated Deployments**: Define workflows as ZenML pipelines for automatic deployment to services like Seldon.
+- **Reproducibility**: Automatically track and version stacks, pipelines, and artifacts.
+- **Automated Deployments**: Define workflows as ZenML pipelines for easy deployment.
```python
from zenml.integrations.seldon.steps import seldon_model_deployer_step

@pipeline
def my_pipeline():
-    model = model_trainer_step(data_loader_step())
+    data = data_loader_step()
+    model = model_trainer_step(data)
    seldon_model_deployer_step(model)
```

## Additional Resources
-- **For MLOps Engineers**: [ZenML Pro](getting-started/zenml-pro/README.md), [Cloud Orchestration Guide](user-guide/production-guide/cloud-orchestration.md)
-- **For Data Scientists**: [Core Concepts](getting-started/core-concepts.md), [Starter Guide](user-guide/starter-guide/)
-- **For ML Engineers**: [How To](./how-to/pipeline-development/build-pipelines/README.md), [Examples](https://github.com/zenml-io/zenml-projects)
+- **Learn More**: Explore guides on production setup, core concepts, and examples through the ZenML documentation.

-Explore more at [ZenML Live Demo](https://www.zenml.io/live-demo).
+ZenML integrates with popular tools like Weights & Biases, MLflow, and Neptune for enhanced experiment tracking and reproducibility.

==================================================

=== File: docs/book/user-guide/starter-guide/track-ml-models.md ===

-### Summary of ZenML Model Control Plane Documentation
+### ZenML Model Control Plane Overview

-#### Overview of ZenML Model
-- A **ZenML Model** is an entity that groups pipelines, artifacts, metadata, and business data, representing the business logic of an ML product.
-- Models are central to ZenML and can be managed via the ZenML API, client, or ZenML Pro dashboard.
+**ZenML Model Definition**:
+- A `Model` in ZenML is an entity that groups pipelines, artifacts, metadata, and business data, encapsulating the business logic of an ML product. It includes technical models (model files with weights and parameters), training data, and predictions.

-#### Key Features
-- **Model Versions**: Each model can have multiple versions, allowing for tracking of iterations.
-- **Artifacts**: Associated artifacts include technical models, training data, and predictions.
+**Model Management**:
+- Models are first-class citizens in ZenML, accessible via the ZenML API and the ZenML Pro dashboard.
+- **CLI Commands**:
+  - List models: `zenml model list`
+  - List model versions: `zenml model version list <MODEL_NAME>`
+  - List associated pipeline runs and artifacts:
+    - `zenml model version runs <MODEL_NAME> <MODEL_VERSION_NAME>`
+    - `zenml model version data_artifacts <MODEL_NAME> <MODEL_VERSION_NAME>`
+    - `zenml model version model_artifacts <MODEL_NAME> <MODEL_VERSION_NAME>`
+    - `zenml model version deployment_artifacts <MODEL_NAME> <MODEL_VERSION_NAME>`

-#### Viewing Models
-- **CLI**: Use `zenml model list` to list all models.
-- **Dashboard**: The ZenML Pro dashboard provides visualization capabilities for models.
+### Configuring a Model in a Pipeline

-#### Configuring Models in Pipelines
-- Models can be linked to pipelines, ensuring all generated artifacts are associated with the specified model.
-
-**Example Code:**
+- To link artifacts generated during a pipeline run to a model, pass a `Model` object in the pipeline or step configuration. This provides lineage tracking.
+
+**Example Code**:
```python
-from zenml import pipeline, Model
+from zenml import pipeline, step, Model

-model = Model(name="iris_classifier", version=None)
+model = Model(name="iris_classifier", version=None, license="Apache 2.0", description="A classification model for the iris dataset.")
+
+@step(model=model)
+def svc_trainer(...):
+    ...
@pipeline(model=model)
def training_pipeline(gamma: float = 0.002):
@@ -115,61 +121,58 @@ if __name__ == "__main__":
    training_pipeline()
```

-#### Fetching Models in Pipelines
-- Models can be accessed via `get_step_context()` or `get_pipeline_context()`.
+### Fetching the Model in a Pipeline

-**Example Code:**
+- Models can be accessed via `StepContext` or `PipelineContext`.
+
+**Example Code**:
```python
-from zenml import get_step_context, step, pipeline
+from zenml import get_step_context, get_pipeline_context, step, pipeline

@step
-def svc_trainer(X_train, y_train):
+def svc_trainer(X_train, y_train, gamma=0.001):
    model = get_step_context().model
-    ...

@pipeline(model=Model(name="iris_classifier", version="production"))
-def training_pipeline():
+def training_pipeline(gamma=0.002):
    model = get_pipeline_context().model
```

-#### Logging Metadata
-- Metadata can be logged to models using `log_model_metadata`.
+### Logging Metadata to the Model

-**Example Code:**
+- Metadata can be logged to a model using `log_model_metadata`.
+
+**Example Code**:
```python
-from zenml import get_step_context, step, log_model_metadata
+from zenml import get_step_context, step, log_model_metadata

@step
-def svc_trainer(X_train, y_train):
+def svc_trainer(X_train, y_train, gamma=0.001):
    model = get_step_context().model
    log_model_metadata(model_name="iris_classifier", metadata={"accuracy": float(accuracy)})
```

-#### Retrieving Metadata
-- Metadata can be retrieved using the ZenML client.
+### Model Stages

-**Example Code:**
-```python
-from zenml.client import Client
-model = Client().get_model_version("iris_classifier")
-print(model.run_metadata["accuracy"].value)
-```
+- Models can exist in various stages: `staging`, `production`, `latest`, and `archived`.

-#### Model Stages
-- Models can exist in stages: `staging`, `production`, `latest`, and `archived`.
-
-**Example Code for Stage Management:**
+**Example Code**:
```python
+from zenml import Model
+
model = Model(name="iris_classifier", version="latest")
model.set_stage(stage="production", force=True)
```

-#### CLI Commands for Model Stages
-- List staging models: `zenml model version list --stage staging`
-- Update to production: `zenml model version update -s production`
+**CLI Commands**:
+```shell
+zenml model version list <MODEL_NAME> --stage staging
+zenml model version update <MODEL_NAME> <VERSION> -s production
+```

-#### Conclusion
-ZenML's Model Control Plane allows for effective management of ML models, their versions, and associated metadata, enhancing traceability and reproducibility in ML workflows. For more details, refer to the dedicated Model Management guide.
+### Conclusion
+
+ZenML's Model Control Plane provides robust features for managing ML models, including configuration, metadata logging, and versioning. For detailed exploration, refer to the [Model Management guide](../../how-to/model-management-metrics/model-control-plane/README.md).

==================================================

@@ -177,147 +180,120 @@ ZenML's Model Control Plane allows for effective management of ML models, their

### ZenML Artifact Management Overview

-ZenML automates the versioning and management of artifacts—data, models, and evaluations—within machine learning workflows, ensuring reproducibility and traceability.
+ZenML automates the versioning and management of artifacts in machine learning workflows, ensuring reproducibility and traceability.
This documentation covers key aspects of managing artifacts produced by ZenML pipelines, including naming, versioning, metadata, and consuming artifacts.

#### Managing Artifacts

-- **Artifact Naming**: Use the `Annotated` object to assign human-readable names to outputs for better discoverability.
-
-  ```python
-  from typing_extensions import Annotated
-  import pandas as pd
-  from sklearn.datasets import load_iris
-  from zenml import pipeline, step
+1. **Artifact Naming**:
+   - Use the `Annotated` object to assign human-readable names to outputs.
+   - Default naming pattern: `{pipeline_name}::{step_name}::output`.

-  @step
-  def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]:
-      iris = load_iris(as_frame=True)
-      return iris.get("frame")
+   ```python
+   from typing_extensions import Annotated
+   import pandas as pd
+   from sklearn.datasets import load_iris
+   from zenml import pipeline, step

-  @pipeline
-  def feature_engineering_pipeline():
-      training_data_loader()
-  ```
+   @step
+   def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]:
+       iris = load_iris(as_frame=True)
+       return iris.get("frame")

-- **Default Naming**: Unnamed outputs default to `{pipeline_name}::{step_name}::output`.
+   @pipeline
+   def feature_engineering_pipeline():
+       training_data_loader()
+   ```

-- **Versioning**: ZenML auto-increments artifact versions. Custom versions can be specified using `ArtifactConfig`.
+2. **Versioning Artifacts**:
+   - Artifacts are automatically versioned (e.g., `iris_dataset` will have versions "1", "2", etc.).
+   - Custom versions can be defined using `ArtifactConfig`.

-  ```python
-  from zenml import step, ArtifactConfig
+   ```python
+   from zenml import step, ArtifactConfig

-  @step
-  def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]:
-      ...
-  ```
+   @step
+   def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]:
+       ...
+   ```

-- **Metadata and Tags**: Extend artifacts with metadata and tags using `ArtifactConfig` or `get_step_context()`.
+3. **Adding Metadata and Tags**:
+   - Metadata and tags can be added to artifacts for better organization.

-  ```python
-  @step
-  def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"key": "value"}, tags=["tag"])]:
-      return "string"
-  ```
+   ```python
+   from zenml import step, get_step_context, ArtifactConfig
+   from typing_extensions import Annotated

-#### Comparing Metadata (Pro Feature)
+   @step
+   def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"metadata_key": "metadata_value"}, tags=["tag_name"])]:
+       return "string"
+   ```
+
+#### Comparing Metadata Across Runs (Pro)

-ZenML Pro offers an Experiment Comparison tool to visualize metadata across runs in two views: **Table View** (structured comparison) and **Parallel Coordinates View** (relationship identification).
+- The ZenML Pro dashboard includes an Experiment Comparison tool for analyzing metadata across pipeline runs.
+- Two views available: **Table View** (structured comparison) and **Parallel Coordinates View** (relationship identification).

#### Artifact Types

-Specify artifact types to enhance dashboard visibility and filtering:
+- Specify artifact types for better filtering and visualization in the dashboard.
-```python
-from zenml import ArtifactConfig, step
-from zenml.enums import ArtifactType
+  ```python
+  from zenml import ArtifactConfig, step
+  from zenml.enums import ArtifactType

-@step
-def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactType.MODEL)]:
-    return MyCustomModel(...)
-```
+  @step
+  def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactType.MODEL)]:
+      return MyCustomModel(...)
+  ```

#### Consuming External Artifacts

-Use `ExternalArtifact` to integrate data not produced by ZenML, such as from external sources:
-
-```python
-from zenml import ExternalArtifact, pipeline, step
-
-@step
-def print_data(data: np.ndarray):
-    print(data)
-
-@pipeline
-def printing_pipeline():
-    data = ExternalArtifact(value=np.array([0]))
-    print_data(data=data)
-```
-
-#### Fetching Artifacts from Other Pipelines
+- Use `ExternalArtifact` to initialize artifacts from external sources.

-Utilize the `Client` to fetch artifacts by ID, name, or version within a pipeline:
-
-```python
-from zenml.client import Client
-
-@step
-def trainer(dataset: pd.DataFrame):
-    ...
-
-@pipeline
-def training_pipeline():
-    client = Client()
-    dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
-    trainer(dataset=dataset_artifact)
-```
+  ```python
+  import numpy as np
+  from zenml import ExternalArtifact, pipeline, step

-#### Managing External Artifacts
+  @step
+  def print_data(data: np.ndarray):
+      print(data)

-You can save predictions or other artifacts created outside ZenML:
+  @pipeline
+  def printing_pipeline():
+      data = ExternalArtifact(value=np.array([0]))
+      print_data(data=data)
+  ```

-```python
-from zenml.client import Client, save_artifact
-
-model = ...
-prediction = model.predict([[1, 1, 1, 1]])
-save_artifact(prediction, name="iris_predictions")
-```
+#### Managing Artifacts Not Produced by ZenML

-#### Linking Existing Data
+- Artifacts can be created externally and registered in ZenML.

-Link external data as ZenML artifacts:
+  ```python
+  from zenml import save_artifact

-```python
-from zenml.client import Client, register_artifact
-from pytorch_lightning import Trainer
+  model = ...
+  prediction = model.predict([[1, 1, 1, 1]])
+  save_artifact(prediction, name="iris_predictions")
+  ```

-prefix = Client().active_stack.artifact_store.path
-default_root_dir = os.path.join(prefix, uuid4().hex)
-
-trainer = Trainer(default_root_dir=default_root_dir)
-trainer.fit(model)
-
-register_artifact(default_root_dir, name="all_my_model_checkpoints")
-```
-
-#### Logging Metadata
+#### Logging Metadata for Artifacts

-Log metadata for artifacts using `log_artifact_metadata`:
+- Associate metadata with artifacts for better understanding and tracking.
-```python
-from zenml import step, log_artifact_metadata
+  ```python
+  from zenml import step, log_artifact_metadata

-@step
-def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]:
-    model.fit(dataset[0], dataset[1])
-    accuracy = model.score(dataset[0], dataset[1])
-    log_artifact_metadata(metadata={"accuracy": float(accuracy)})
-    return model
-```
+  @step
+  def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]:
+      model.fit(dataset[0], dataset[1])
+      accuracy = model.score(dataset[0], dataset[1])
+      log_artifact_metadata(metadata={"accuracy": float(accuracy)})
+      return model
+  ```

-### Code Example
+### Example Code

-A complete example demonstrating the above concepts:
+A complete example demonstrating artifact management:

```python
from typing import Optional, Tuple
@@ -329,27 +305,27 @@ from sklearn.svm import SVC
from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata, save_artifact, load_artifact
from zenml.client import Client

@step
-def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset")]:
+def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset", tags=["digits"])]:
    digits = load_digits()
    return (digits.images.reshape((len(digits.images), -1)), digits.target)

@step
-def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]:
+def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]:
    model.fit(dataset[0], dataset[1])
    accuracy = model.score(dataset[0], dataset[1])
    log_artifact_metadata(metadata={"accuracy": float(accuracy)})
    return model

@pipeline
-def model_finetuning_pipeline(dataset_version: Optional[str] = None):
+def model_finetuning_pipeline(dataset_version: Optional[str] = None, model_version: Optional[str] = None):
    client = Client()
    dataset = client.get_artifact_version(name_id_or_prefix="my_dataset", version=dataset_version) if dataset_version else versioned_data_loader_step()
-    model = client.get_artifact_version(name_id_or_prefix="my_model")
+    model = client.get_artifact_version(name_id_or_prefix="my_model", version=model_version)
    model_finetuner_step(model=model, dataset=dataset)

def main():
    untrained_model = SVC(gamma=0.001)
-    save_artifact(untrained_model, name="my_model", version="1")
+    save_artifact(untrained_model, name="my_model", version="1", tags=["SVC", "untrained"])
    model_finetuning_pipeline()
    model_finetuning_pipeline(dataset_version="1")
    latest_trained_model = load_artifact("my_model")
@@ -360,130 +336,184 @@ if __name__ == "__main__":
    main()
```

-This code demonstrates the creation and management of datasets and models, including versioning and metadata logging.
+This example illustrates the creation and management of datasets and models, including versioning and metadata logging. For more details, refer to the [ZenML documentation](https://docs.zenml.io).
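
The metadata logged in this example can later be read back through the ZenML `Client`. A minimal sketch, assuming the `my_model` artifact from the example above exists:

```python
from zenml.client import Client

# Fetch the latest version of the "my_model" artifact and read its logged metadata.
artifact = Client().get_artifact_version("my_model")
print(artifact.run_metadata["accuracy"].value)
```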
==================================================

=== File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md ===

-### Summary of ZenML Documentation on Creating ML Pipelines
+# ZenML Documentation Summary

-**Overview:**
-ZenML simplifies the creation of production-ready ML pipelines by decoupling stages such as data ingestion, preprocessing, and model evaluation into modular **Steps** that can be integrated into an end-to-end **Pipeline**. This structure enhances manageability, reusability, and scalability.
+## Overview
+ZenML facilitates the creation and management of modular, scalable machine learning (ML) pipelines by decoupling stages like data ingestion, preprocessing, and model evaluation. Each stage is represented as a **Step**, which can be integrated into an end-to-end **Pipeline**.

-**Installation:**
-To get started with ZenML, install it using:
+## Installation
+To get started, install ZenML:
```shell
pip install "zenml[server]"
zenml login --local  # Launches the dashboard locally
```

-**Simple ML Pipeline Example:**
-A basic pipeline can be created with the following components:
+## Simple ML Pipeline Example
+A basic ML pipeline can be set up using ZenML. Below is an example that demonstrates loading data and training a model.

-1. **Load Data Step:**
-   ```python
-   @step
-   def load_data() -> dict:
-       training_data = [[1, 2], [3, 4], [5, 6]]
-       labels = [0, 1, 0]
-       return {'features': training_data, 'labels': labels}
-   ```
+### Code Example
+```python
+from zenml import pipeline, step

-2. **Train Model Step:**
-   ```python
-   @step
-   def train_model(data: dict) -> None:
-       total_features = sum(map(sum, data['features']))
-       total_labels = sum(data['labels'])
-       print(f"Trained model using {len(data['features'])} data points. "
-             f"Feature sum is {total_features}, label sum is {total_labels}")
-   ```
+@step
+def load_data() -> dict:
+    training_data = [[1, 2], [3, 4], [5, 6]]
+    labels = [0, 1, 0]
+    return {'features': training_data, 'labels': labels}

-3. **Pipeline Definition:**
-   ```python
-   @pipeline
-   def simple_ml_pipeline():
-       dataset = load_data()
-       train_model(dataset)
-   ```
+@step
+def train_model(data: dict) -> None:
+    total_features = sum(map(sum, data['features']))
+    total_labels = sum(data['labels'])
+    print(f"Trained model using {len(data['features'])} data points. "
+          f"Feature sum is {total_features}, label sum is {total_labels}")

-4. **Execution:**
-   ```python
-   if __name__ == "__main__":
-       run = simple_ml_pipeline()
-   ```
+@pipeline
+def simple_ml_pipeline():
+    dataset = load_data()
+    train_model(dataset)

-**Dashboard Exploration:**
-After running the pipeline, use `zenml login --local` to access the ZenML Dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/). Log in with the username **"default"** to view execution history and artifacts.
+if __name__ == "__main__":
+    run = simple_ml_pipeline()
+```

-**Understanding Steps and Artifacts:**
-Each function executed in the pipeline is represented as a `step` in a Directed Acyclic Graph (DAG). Artifacts are the outputs from these steps, which ZenML automatically tracks and versions.
+### Running the Pipeline
+Execute the script with:
+```bash
+$ python run.py
+```
+This will initiate the pipeline and display execution details in the terminal.
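
Once the run finishes, it can also be inspected programmatically. A minimal sketch using the ZenML `Client`, assuming `simple_ml_pipeline` has been run at least once:

```python
from zenml.client import Client

# Fetch the most recent run of the pipeline defined above.
run = Client().get_pipeline("simple_ml_pipeline").last_run
print(run.status)

# Load the dictionary artifact produced by the load_data step.
dataset = run.steps["load_data"].output.load()
print(dataset["labels"])
```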
-**Expanding to a Full ML Workflow:**
-To create a more complex workflow using the Iris dataset and a Support Vector Classifier (SVC):
+## Dashboard
+After execution, view results in the ZenML Dashboard by running:
+```bash
+zenml login --local
+```
+Access the dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/) and log in with the username **"default"**.

-1. **Imports:**
-   ```python
-   from typing_extensions import Annotated, Tuple
-   import pandas as pd
-   from sklearn.datasets import load_iris
-   from sklearn.model_selection import train_test_split
-   from sklearn.svm import SVC
-   from zenml import pipeline, step
-   ```
+## Steps and Artifacts
+Each function in the pipeline is a `step`, and they are connected by `artifacts`, which are the outputs of one step used as inputs to another. ZenML automatically tracks these artifacts and their configurations for reproducibility.

-2. **Data Loader Step:**
-   ```python
-   @step
-   def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], ...]:
-       iris = load_iris(as_frame=True)
-       return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
-   ```
+## Full ML Workflow Example
+To expand to a complete ML workflow, use the Iris dataset and train a Support Vector Classifier (SVC).

-3. **Training Step:**
-   ```python
-   @step
-   def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], ...]:
-       model = SVC(gamma=gamma)
-       model.fit(X_train.to_numpy(), y_train.to_numpy())
-       return model, model.score(X_train.to_numpy(), y_train.to_numpy())
-   ```
+### Requirements
+Install necessary packages:
+```bash
+pip install matplotlib
+zenml integration install sklearn -y
+```

-4. **Pipeline Definition:**
-   ```python
-   @pipeline
-   def training_pipeline(gamma: float = 0.002):
-       X_train, X_test, y_train, y_test = training_data_loader()
-       svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
-   ```
+### Data Loader with Multiple Outputs
+Define a data loader step:
+```python
+from typing_extensions import Annotated, Tuple
+import pandas as pd
+from sklearn.datasets import load_iris
+from sklearn.model_selection import train_test_split
+import logging
+from zenml import step
+
+@step
+def training_data_loader() -> Tuple[
+    Annotated[pd.DataFrame, "X_train"],
+    Annotated[pd.DataFrame, "X_test"],
+    Annotated[pd.Series, "y_train"],
+    Annotated[pd.Series, "y_test"],
+]:
+    logging.info("Loading iris...")
+    iris = load_iris(as_frame=True)
+    X_train, X_test, y_train, y_test = train_test_split(
+        iris.data, iris.target, test_size=0.2, random_state=42
+    )
+    return X_train, X_test, y_train, y_test
+```

-5. **Execution:**
-   ```python
-   if __name__ == "__main__":
-       training_pipeline()
-   ```
+### Parameterized Training Step
+Create a training step for the SVC:
+```python
+from sklearn.base import ClassifierMixin
+from sklearn.svm import SVC
+
+@step
+def svc_trainer(
+    X_train: pd.DataFrame,
+    y_train: pd.Series,
+    gamma: float = 0.001,
+) -> Tuple[
+    Annotated[ClassifierMixin, "trained_model"],
+    Annotated[float, "training_acc"],
+]:
+    model = SVC(gamma=gamma)
+    model.fit(X_train.to_numpy(), y_train.to_numpy())
+    train_acc = model.score(X_train.to_numpy(), y_train.to_numpy())
+    print(f"Train accuracy: {train_acc}")
+    return model, train_acc
+```
+
+### Pipeline Definition
+Combine steps into a pipeline:
+```python
+@pipeline
+def training_pipeline(gamma: float = 0.002):
+    X_train, X_test, y_train, y_test = training_data_loader()
+    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
+
+if __name__ == "__main__":
+    training_pipeline(gamma=0.0015)
+```

-**Configuration with YAML:**
-You can configure pipeline runs using a YAML file:
+### YAML Configuration
+Configure pipeline runs using a YAML file:
```python
-training_pipeline = training_pipeline.with_options(config_path='/local/path/to/config.yaml')
+training_pipeline = training_pipeline.with_options(
+    config_path='/local/path/to/config.yaml'
+)
training_pipeline()
```

-A simple YAML configuration might look like:
+Example YAML file:
```yaml
parameters:
  gamma: 0.01
```

-Generate a template config file with:
-```python
-training_pipeline.write_run_configuration_template(path='/local/path/to/config.yaml')
-```
+### Full Code Example
+The complete code for the workflow is as follows:
+```python
+from typing_extensions import Tuple, Annotated
+import pandas as pd
+from sklearn.datasets import load_iris
+from sklearn.model_selection import train_test_split
+from sklearn.base import ClassifierMixin
+from sklearn.svm import SVC
+from zenml import pipeline, step
+
+@step
+def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]:
+    iris = load_iris(as_frame=True)
+    return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
+
+@step
+def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]:
+    model = SVC(gamma=gamma)
+    model.fit(X_train.to_numpy(), y_train.to_numpy())
+    return model, model.score(X_train.to_numpy(), y_train.to_numpy())
+
+@pipeline
+def training_pipeline(gamma: float = 0.002):
+    X_train, X_test, y_train, y_test = training_data_loader()
+    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
+
+if __name__ == "__main__":
+    training_pipeline()
+```

-**Complete Code Example:**
-The full code for the Iris dataset SVC pipeline is provided in the documentation, combining all the steps and configurations discussed.
-
-This summary encapsulates the essential components and functionalities of ZenML for creating and managing ML pipelines while ensuring clarity and conciseness.
+This summary captures the essential technical details and steps for creating and managing ML pipelines using ZenML, ensuring clarity and conciseness.

==================================================

=== File: docs/book/user-guide/starter-guide/cache-previous-executions.md ===

### Summary of ZenML Caching Documentation

-**Overview:**
-ZenML facilitates rapid development of machine learning pipelines through step caching, which reuses outputs from previous runs when inputs, parameters, or code remain unchanged. Caching is enabled by default, allowing for efficient execution, especially when running pipelines without a schedule.
-
-**Key Points:**
-- **Caching Behavior:** ZenML automatically caches the outputs of steps unless there are changes in inputs, parameters, or code. This is beneficial for saving time and resources during remote executions.
-- **Manual Caching Control:** Users must manually disable caching for steps reliant on external inputs or file-system changes using `@step(enable_cache=False)`.
-
-**Configuring Caching:**
-1. **Pipeline Level:**
-   - Set caching behavior in the `@pipeline` decorator:
-   ```python
-   @pipeline(enable_cache=False)
-   def first_pipeline(...):
-       ...
-   ```
-   - This disables caching for all steps unless overridden at the step level.
+**Overview**: ZenML enhances machine learning pipeline development through caching, allowing for quicker iterations by reusing outputs from previous runs when inputs, parameters, or code remain unchanged.

-2. **Runtime Configuration:**
-   - Override caching settings at runtime:
-   ```python
-   first_pipeline = first_pipeline.with_options(enable_cache=False)
-   ```
+**Key Points**:
+- **Caching Behavior**:
+  - Caching is enabled by default in ZenML.
+  - Outputs are stored in the artifact store, allowing steps to be skipped if they haven't changed.
+  - If no changes occur, ZenML will use cached outputs, saving time and resources.
+  - To disable client-side caching, set the environment variable `ZENML_PREVENT_CLIENT_SIDE_CACHING=True`.
+
+- **Manual Caching Control**:
+  - Caching does not automatically detect external changes. Use `enable_cache=False` for steps dependent on external inputs:
+  ```python
+  @step(enable_cache=False)
+  def load_data_from_external_system(...):
+      # This step will always run
+  ```

-3. **Step Level:**
-   - Configure caching for individual steps:
-   ```python
-   @step(enable_cache=False)
-   def import_data_from_api(...):
-       ...
-   ```
-   - Use `with_options` for dynamic control:
-   ```python
-   import_data_from_api = import_data_from_api.with_options(enable_cache=False)
-   ```
+- **Configuring Caching**:
+  - **Pipeline Level**: Set caching in the `@pipeline` decorator:
+  ```python
+  @pipeline(enable_cache=False)
+  def first_pipeline(...):
+      """Pipeline with cache disabled"""
+  ```
+  - **Dynamic Configuration**: Override caching settings at runtime:
+  ```python
+  first_pipeline = first_pipeline.with_options(enable_cache=False)
+  ```
+  - **Step Level**: Control caching for individual steps:
+  ```python
+  @step(enable_cache=False)
+  def import_data_from_api(...):
+      """Import most up-to-date data from public API"""
+  ```

-**Example Code:**
-The following code demonstrates caching in a simple ZenML pipeline:
+**Code Example**: The following script demonstrates caching behavior in a ZenML pipeline:

```python
from typing_extensions import Tuple, Annotated
@@ -559,34 +588,35 @@ def training_pipeline(gamma: float = 0.002):
if __name__ == "__main__":
    training_pipeline()
-    logger.info("First step cached, second not due to parameter change")
+    logger.info("\n\nFirst step cached, second not due to parameter change")
    training_pipeline(gamma=0.0001)

    svc_trainer = svc_trainer.with_options(enable_cache=False)
-    logger.info("First step cached, second not due to settings")
+    logger.info("\n\nFirst step cached, second not due to settings")
    training_pipeline()

-    logger.info("Caching disabled for the entire pipeline")
+    logger.info("\n\nCaching disabled for the entire pipeline")
    training_pipeline.with_options(enable_cache=False)()
```

-This example illustrates how caching works in ZenML, including how to enable and disable it at various levels.
+This script illustrates how caching works in ZenML, including how to disable it at various levels.

==================================================

=== File: docs/book/user-guide/starter-guide/starter-project.md ===

-### Starter Project Overview
+### Summary of ZenML Starter Project Documentation

-This documentation outlines a simple starter project to apply foundational MLOps concepts, including pipelines, artifacts, and models.
+This documentation provides a guide for initiating a simple MLOps project using ZenML. Key components of an MLOps system covered include pipelines, artifacts, and models.

#### Getting Started

-1. **Set Up Environment**: Create a fresh virtual environment and install necessary dependencies:
+1. **Create a Virtual Environment**: Start with a fresh environment without dependencies.
+2. **Install Dependencies**:
   ```bash
   pip install "zenml[templates,server]" notebook
   zenml integration install sklearn -y
   ```

-2. **Initialize Project**: Use ZenML templates to set up the project:
+3. **Initialize Project with ZenML Templates**:
   ```bash
   mkdir zenml_starter
   cd zenml_starter
@@ -594,7 +624,7 @@ This documentation outlines a simple starter project to apply foundational MLOps
   pip install -r requirements.txt
   ```

-   **Alternative Setup**: If the above steps fail, clone the ZenML MLOps starter example:
+   **Alternative Method**: Clone the MLOps starter example if the above does not work:
   ```bash
   git clone --depth 1 git@github.com:zenml-io/zenml.git
   cd zenml/examples/mlops_starter
@@ -604,14 +634,14 @@ This documentation outlines a simple starter project to apply foundational MLOps

#### Learning Outcomes

-By following the project, you will execute three key pipelines:
+By following the guide or the accompanying Jupyter notebook, you will execute three pipelines:
- **Feature Engineering Pipeline**: Loads and prepares data for training.
- **Training Pipeline**: Trains a model using the preprocessed dataset.
-- **Batch Inference Pipeline**: Runs predictions on new data with the trained model.
+- **Batch Inference Pipeline**: Runs predictions on new data using the trained model.

-#### Next Steps
+#### Conclusion and Next Steps

-After completing the project, consider introducing the ZenML starter template to your team to leverage a standardized MLOps framework. For further learning, experiment with ZenML and proceed to the [production guide](../production-guide/) for advanced topics.
+This concludes the introductory chapter of your MLOps journey with ZenML. Experiment with ZenML to solidify your understanding, and when ready, proceed to the [production guide](../production-guide/) for advanced topics.

==================================================

=== File: docs/book/user-guide/starter-guide/README.md ===

# ZenML Starter Guide Summary

-The ZenML Starter Guide is designed for MLOps engineers and data scientists looking to build robust ML platforms. It provides foundational knowledge of the ZenML framework and tools for managing machine learning operations.
+The ZenML Starter Guide is designed for MLOps engineers and data scientists to build robust ML platforms using the ZenML framework. It provides foundational knowledge and tools for managing machine learning operations.

-### Key Topics Covered:
-- **Creating Your First ML Pipeline**: Learn to set up and execute a basic ML pipeline.
-- **Understanding Caching**: Explore how to cache results between pipeline steps for efficiency.
-- **Managing Data and Versioning**: Understand data management and version control.
-- **Tracking ML Models**: Learn to track and manage machine learning models effectively.
+## Key Topics Covered:
+- **Creating Your First ML Pipeline**: Instructions on building a basic ML pipeline.
+- **Understanding Caching Between Pipeline Steps**: Techniques for optimizing pipeline execution.
+- **Managing Data and Data Versioning**: Best practices for handling datasets and their versions.
+- **Tracking Your Machine Learning Models**: Methods for monitoring and managing ML models.

-### Prerequisites:
-- A Python environment.
-- `virtualenv` installed.
+## Prerequisites:
+- A Python environment set up.
+- `virtualenv` installed for project isolation.

-By the end of the guide, users will complete a starter project, marking their entry into MLOps with ZenML. Prepare your development environment and begin your journey!
+By the end of the guide, users will complete a starter project, marking the beginning of their MLOps journey with ZenML.
This guide serves as both an introduction to ZenML and a foundational resource for MLOps practices.

==================================================

=== File: docs/book/user-guide/production-guide/ci-cd.md ===

-### Managing the Lifecycle of a ZenML Pipeline with CI/CD
-
-#### Overview
-This documentation outlines the setup of Continuous Integration and Delivery (CI/CD) for ZenML pipelines, transitioning from local execution to a centralized workflow engine integrated with GitHub Actions. This enables automated testing and deployment of code changes after peer review.
+# Managing ZenML Pipeline Lifecycle with CI/CD

-#### Key Steps to Set Up CI/CD
+## Overview
+This documentation outlines how to manage the lifecycle of a ZenML pipeline using Continuous Integration (CI) and Continuous Delivery (CD) through GitHub Actions. It emphasizes the transition from local execution to a centralized workflow engine for automated testing and deployment.

-1. **Configure an API Key in ZenML**
-   - Create an API key for machine-to-machine connections:
-   ```bash
-   zenml service-account create github_action_api_key
-   ```
-   - Store the generated API key securely as it will not be displayed again.
+## Setting Up CI/CD
+To implement CI/CD, follow these steps:

-2. **Set Up Secrets in GitHub**
-   - Store the `ZENML_API_KEY` in GitHub secrets for use in GitHub Actions. Refer to [GitHub documentation](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) for details.
+1. **Create an API Key in ZenML**:
+   Use the command below to generate an API key for machine-to-machine connections:
+   ```bash
+   zenml service-account create github_action_api_key
+   ```
+   This will return an API key that must be stored securely.

-3. **(Optional) Configure Staging and Production Stacks**
-   - Use different stacks for staging and production if needed. This may involve different data sources or configuration files for various environments.
+2. **Configure GitHub Secrets**:
+   Store the generated `ZENML_API_KEY` in your GitHub repository secrets. This allows secure access to the API key during CI/CD operations.

-4. **Trigger a Pipeline on Pull Requests**
-   - Set up a GitHub Action workflow to run the pipeline automatically on code changes. Use the following configuration to trigger on pull requests:
-   ```yaml
-   on:
-     pull_request:
-       branches: [ staging, main ]
-   ```
+3. **(Optional) Set Up Staging and Production Stacks**:
+   You can configure different stacks for staging and production. This may involve using different data sources or configuration files for each environment.

-5. **Define Job Steps in the Workflow**
-   - Here’s a simplified version of the job configuration:
-   ```yaml
-   jobs:
-     run-staging-workflow:
-       runs-on: run-zenml-pipeline
-       env:
-         ZENML_STORE_URL: ${{ secrets.ZENML_HOST }}
-         ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }}
-         ZENML_STACK: stack_name
-         ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }}
-         ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }}
-   ```
+4. **Trigger Pipeline on Pull Requests**:
+   Set up a GitHub Action to run your pipeline automatically on pull requests. Use the following YAML configuration:
+   ```yaml
+   on:
+     pull_request:
+       branches: [ staging, main ]
+   ```
+
+5. **Define Job Steps**:
+   Here’s a simplified version of the job configuration:
+   ```yaml
+   jobs:
+     run-staging-workflow:
+       runs-on: run-zenml-pipeline
+       env:
+         ZENML_STORE_URL: ${{ secrets.ZENML_HOST }}
+         ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }}
+         ZENML_STACK: stack_name
+         ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }}
+         ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }}
+   ```

-6. **Install Requirements and Run the Pipeline**
-   - Include the following steps in your workflow:
-   ```yaml
-   steps:
-     - name: Check out repository code
-       uses: actions/checkout@v3
+6. **Install Requirements and Run Pipeline**:
+   Include steps to check out code, set up Python, install dependencies, connect to the ZenML server, set the active stack, and run the pipeline:
+   ```yaml
+   steps:
+     - name: Check out repository code
+       uses: actions/checkout@v3

-     - uses: actions/setup-python@v4
-       with:
-         python-version: '3.9'
+     - uses: actions/setup-python@v4
+       with:
+         python-version: '3.9'

-     - name: Install requirements
-       run: pip3 install -r requirements.txt
+     - name: Install requirements
+       run: pip3 install -r requirements.txt

-     - name: Confirm ZenML client is connected
-       run: zenml status
+     - name: Confirm ZenML client is connected
+       run: zenml status

-     - name: Set stack
-       run: zenml stack set ${{ env.ZENML_STACK }}
+     - name: Set stack
+       run: zenml stack set ${{ env.ZENML_STACK }}

-     - name: Run pipeline
-       run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }}
+     - name: Run pipeline
+       run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }}
+   ```

-7. **(Optional) Comment Metrics on the Pull Request**
-   - Configure the workflow to leave a report on the pull request based on the pipeline results. Refer to the template in the ZenML Gitflow repository for implementation.
+7. **(Optional) Comment Metrics on PR**:
+   Configure the workflow to leave a report based on the pipeline results on the pull request.

-This setup ensures that only validated code is deployed to production, enhancing the reliability of the CI/CD process for ZenML pipelines.
+## Additional Resources
+For a practical example, refer to the [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/), which provides a template for automating CI/CD with ZenML.

==================================================

=== File: docs/book/user-guide/production-guide/remote-storage.md ===

-### Summary: Transitioning to Remote Artifact Storage
+### Summary: Transitioning to Remote Artifact Storage in ZenML

#### Overview
-Transitioning to remote artifact storage enhances collaboration and scalability for production workloads by storing artifacts in the cloud. This allows access from anywhere with appropriate permissions.
+ZenML allows users to transition from local artifact storage to remote storage, enhancing collaboration and scalability. Remote storage enables artifact accessibility from anywhere, crucial for team environments and managing larger datasets.

#### Connecting Remote Storage
-When using remote storage, the only change is that artifacts are stored in a central location.
+When using remote storage, artifacts are stored centrally without changing the pipeline execution process.

-#### Provisioning and Registering Remote Artifact Stores
-ZenML supports various artifact store flavors. Here’s how to set up on major cloud providers:
+#### Provisioning Remote Artifact Stores
+ZenML supports various artifact store flavors. Below are instructions for major cloud providers:

- **AWS (S3)**
  1. Install AWS CLI.
@@ -729,7 +761,7 @@ ZenML supports various artifact store flavors. Here’s how to set up on major c
  ```shell
  zenml integration install s3 -y
  ```
-  3. Register the S3 Artifact Store:
+  3. Register S3 Artifact Store:
  ```shell
  zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name
  ```
@@ -740,7 +772,7 @@ ZenML supports various artifact store flavors. Here’s how to set up on major c
  ```shell
  zenml integration install gcp -y
  ```
-  3. Register the GCS Artifact Store:
+  3. Register GCS Artifact Store:
  ```shell
  zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name
  ```
@@ -751,16 +783,16 @@ ZenML supports various artifact store flavors. Here’s how to set up on major c
  ```shell
  zenml integration install azure -y
  ```
-  3. Register the Azure Artifact Store:
+  3. Register Azure Artifact Store:
  ```shell
  zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name
  ```

- **Other Providers**
-  Use cloud-agnostic storage like Minio or create a custom stack component.
+  Remote artifact stores can be created using cloud-agnostic solutions like Minio or by implementing custom stack components.

#### Configuring Permissions with Service Connectors
-Service connectors manage credentials for accessing cloud infrastructure. They broker temporary permissions for stack components.
+Service connectors manage credentials for accessing cloud infrastructure. They provide temporary permissions to stack components.

- **AWS Service Connector**
  ```shell
@@ -777,7 +809,7 @@ Service connectors manage credentials for accessing cloud infrastructure. They b
  zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>
  ```

-Connect the service connector to the artifact store:
+After creating a service connector, connect it to the artifact store:
```shell
zenml artifact-store connect cloud_artifact_store --connector cloud_connector
```
@@ -796,36 +828,31 @@ zenml artifact-store connect cloud_artifact_store --connector cloud_connector
python run.py --training-pipeline
```

-Artifacts will be stored in remote storage, making them accessible to team members. You can list artifact versions:
+Artifacts will be stored in the remote location, accessible for future runs and by team members. Users can list artifact versions:
```shell
zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')"
```

-#### Conclusion
-Using remote storage significantly enhances the collaborative and scalable aspects of MLOps workflows, allowing artifacts to be shared across the cloud-based ecosystem.
+### Conclusion
+Transitioning to remote storage in ZenML is a crucial step for building a collaborative MLOps workflow, allowing artifacts to be shared across teams and enhancing scalability.

==================================================

=== File: docs/book/user-guide/production-guide/understand-stacks.md ===

-# Summary of ZenML Stack Management Documentation
+# Summary of ZenML Documentation on Switching Infrastructure Backend

## Overview of Stacks
-- A **stack** is the configuration of tools and infrastructure for running ZenML pipelines. By default, pipelines run on the `default` stack.
-- ZenML acts as a translation layer between the code domain (user's Python code) and the infrastructure domain (stack).

-## Key Concepts
-- **Separation of Code and Configuration**: This allows easy switching of environments without altering code, enabling domain experts to work independently.
-- **Active Stack**: The stack currently in use for running pipelines can be checked with `zenml stack describe` and listed with `zenml stack list`.
+- **Stack**: A configuration of tools and infrastructure for running ZenML pipelines. By default, pipelines run on the `default` stack.
+- **Separation of Code and Infrastructure**: ZenML allows users to switch environments without modifying code, enabling domain experts to work independently on code or infrastructure.

-## Stack Components
-1. **Orchestrator**: Executes pipeline code (default is a local Python thread).
-   - List orchestrators with `zenml orchestrator list`.
-
-2. **Artifact Store**: Stores outputs of pipeline steps.
-   - List artifact stores with `zenml artifact-store list`.
+## Stack Management
+- **Active Stack**: The stack currently in use for running pipelines. Use `zenml stack describe` to view details of the active stack and `zenml stack list` to see all registered stacks.

-3. **Additional Components**: Other components include experiment trackers, model deployers, and container registries.
+### Stack Components
+1. **Orchestrator**: Executes pipeline code, often as a Python thread. View orchestrators with `zenml orchestrator list`.
+2. **Artifact Store**: Persists step outputs, which are not passed in memory. View artifact stores with `zenml artifact-store list`.
+3. **Additional Components**: Include experiment trackers, model deployers, and container registries.

## Registering a Stack

### Create an Artifact Store
```bash
zenml artifact-store register my_artifact_store --flavor=local
```
- **Command Breakdown**:
-  - `artifact-store`: Top-level group for stack components.
-  - `register`: Command to create a new component.
-  - `my_artifact_store`: Unique name for the artifact store.
+  - `artifact-store`: Top-level group for artifact stores.
+  - `register`: Register a new component.
+  - `my_artifact_store`: Unique name for the store.
  - `--flavor=local`: Specifies the implementation type.

-### Create a Local Stack
+### Create a New Stack
```bash
zenml stack register a_new_local_stack -o default -a my_artifact_store
```
- **Command Breakdown**:
  - `stack`: CLI group for stack interactions.
-  - `register`: Command to create a new stack.
-  - `-o`: Specifies the orchestrator.
-  - `-a`: Specifies the artifact store.
+  - `register`: Register a new stack.
+  - `a_new_local_stack`: Unique name for the stack.
+  - `-o` or `--orchestrator`: Specify orchestrator.
+  - `-a` or `--artifact-store`: Specify artifact store.

## Switching Stacks
- Use the ZenML VS Code extension to view and switch stacks easily.

## Running a Pipeline on the New Stack
1. Set the stack:
@@ -856,12 +884,17 @@ zenml stack register a_new_local_stack -o default -a my_artifact_store
   ```bash
   zenml stack set a_new_local_stack
   ```
-2. Execute the pipeline:
+2. Run the pipeline:
   ```bash
   python run.py --training-pipeline
   ```

-This documentation provides essential commands and concepts for managing stacks in ZenML, enabling users to configure and switch their machine learning workflows efficiently.
+## Important Commands
+- Export stack requirements: `zenml stack export-requirements <STACK_NAME>`
+- Describe a stack: `zenml stack describe <STACK_NAME>`
+- Describe an artifact store: `zenml artifact-store describe my_artifact_store`
+
+This summary captures the essential aspects of switching the infrastructure backend in ZenML, including stack management, component details, and commands for creating and using stacks.
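
The active stack can also be inspected from Python. A minimal sketch using the ZenML `Client` (component names follow the examples above):

```python
from zenml.client import Client

# Print the active stack and list its components grouped by type.
stack = Client().active_stack_model
print(f"Active stack: {stack.name}")
for component_type, components in stack.components.items():
    print(component_type, [c.name for c in components])
```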
==================================================

@@ -869,156 +902,156 @@

### Summary of ZenML Pipeline Configuration Documentation

-#### Overview
-This documentation explains how to configure a ZenML pipeline to add compute resources and manage dependencies through a YAML configuration file.
+This documentation outlines how to configure a ZenML pipeline to add compute resources and manage dependencies using a YAML configuration file.

-#### Configuring the Pipeline
-To configure the pipeline, the `run.py` script sets the configuration path and executes the training pipeline:
-
-```python
-pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml")
-training_pipeline_configured = training_pipeline.with_options(**pipeline_args)
-training_pipeline_configured()
-```
+#### Key Points:

-The YAML configuration file `training_rf.yaml` is essential for defining the pipeline's settings.
+1. **Pipeline Configuration**:
+   - The pipeline is configured using a YAML file (`training_rf.yaml`), which specifies settings for Docker and model parameters.
+   - The configuration is applied using the `with_options` method in the pipeline script.

-#### YAML Configuration Breakdown
-1. **Docker Settings**:
-   ```yaml
-   settings:
-     docker:
-       required_integrations:
-         - sklearn
-       requirements:
-         - pyarrow
-   ```
-   This section specifies required libraries for the Docker container, including `pyarrow` and `scikit-learn`.
+   ```python
+   pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml")
+   training_pipeline_configured = training_pipeline.with_options(**pipeline_args)
+   training_pipeline_configured()
+   ```

-2. **Model Association**:
-   ```yaml
-   model:
-     name: breast_cancer_classifier
-     version: rf
-     license: Apache 2.0
-     description: A breast cancer classifier
-     tags: ["breast_cancer", "classifier"]
-   ```
-   This section associates a ZenML model with the pipeline.
+2. **YAML Configuration Breakdown**:
+   - **Docker Settings**:
+   ```yaml
+   settings:
+     docker:
+       required_integrations:
+         - sklearn
+       requirements:
+         - pyarrow
+   ```
+   This section specifies required libraries for the Docker image.
+
+   - **Model Association**:
+   ```yaml
+   model:
+     name: breast_cancer_classifier
+     version: rf
+     license: Apache 2.0
+     description: A breast cancer classifier
+     tags: ["breast_cancer", "classifier"]
+   ```
+   Defines the model's metadata.
+
+   - **Parameters**:
+   ```yaml
+   parameters:
+     model_type: "rf"  # Choose between rf/sgd
+   ```
+   Specifies parameters expected by the pipeline.

-3. **Parameters**:
-   ```yaml
-   parameters:
-     model_type: "rf"  # Choose between rf/sgd
-   ```
-   This defines parameters expected by the pipeline, such as `model_type`.
+3. **Scaling Compute Resources**:
+   - To scale resources, add settings for memory and CPU in the YAML file:
+   ```yaml
+   settings:
+     orchestrator:
+       memory: 32  # in GB
+   steps:
+     model_trainer:
+       settings:
+         orchestrator:
+           cpus: 8
+   ```
+   - For Microsoft Azure users using Kubernetes, the configuration differs slightly:
+   ```yaml
+   settings:
+     resources:
+       memory: "32GB"
+   steps:
+     model_trainer:
+       settings:
+         resources:
+           memory: "8GB"
+   ```

-#### Scaling Compute Resources
-To adjust resource requirements, add the following to `training_rf.yaml`:
-
-```yaml
-settings:
-  orchestrator:
-    memory: 32  # in GB
-
-steps:
-  model_trainer:
-    settings:
-      orchestrator:
-        cpus: 8
-```
-This configures the entire pipeline with 32 GB of memory and 8 CPU cores for the model trainer step.
-
-##### Azure Users
-For Azure with Kubernetes, the configuration should be:
-
-```yaml
-settings:
-  resources:
-    memory: "32GB"
-
-steps:
-  model_trainer:
-    settings:
-      resources:
-        memory: "8GB"
-```
-
-#### Running the Pipeline
-To execute the pipeline with the new configuration, run:
-
-```python
-python run.py --training-pipeline
-```
+4. **Running the Pipeline**:
+   - Execute the pipeline with:
+   ```bash
+   python run.py --training-pipeline
+   ```

-#### Important Notes
-- Not all orchestrators support `ResourceSettings`.
-- For further details on settings and GPU attachment, refer to the ZenML documentation on runtime configuration and GPU training.
+5. **Documentation Links**:
+   - Additional resources and settings can be found in the ZenML documentation, including details on `ResourceSettings` and GPU attachment.

-This concise summary captures the essential technical details for configuring and scaling a ZenML pipeline while omitting redundant explanations.
+This concise overview captures the essential aspects of configuring a ZenML pipeline, focusing on YAML settings for Docker, model association, parameters, and scaling compute resources.

==================================================

=== File: docs/book/user-guide/production-guide/deploying-zenml.md ===

-### Deploying ZenML
+### Summary of Deploying ZenML Documentation

-Deploying ZenML is essential for moving from local development to production. Initially, ZenML operates locally with an SQLite database for storing metadata (pipelines, models, artifacts). For production, the server must be deployed centrally to facilitate collaboration and interaction among infrastructure components.
+**Overview**: Deploying ZenML is essential for transitioning from local development to a production environment, allowing team collaboration and centralized metadata management.

+#### Architecture
+- **Local Setup**: Initially, ZenML uses an SQLite database to store metadata (pipelines, models, artifacts).
+- **Production Setup**: Requires deploying a ZenML server externally for team collaboration.

#### Deployment Options

-1. **ZenML Pro Trial**:
-   - A managed SaaS solution offering one-click deployment.
-   - To connect to a trial instance, run:
+1. **ZenML Pro Trial**:
+   - Managed SaaS solution with one-click deployment.
+   - Connect using:
   ```bash
   zenml login --pro
   ```
-   - Additional features and a new dashboard are included. Self-hosting is an option post-trial.
-
-2. **Self-hosting on Cloud Provider**:
-   - ZenML is open-source and can be self-hosted in a Kubernetes cluster.
-   - For cluster creation, refer to documentation for [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html), [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli), and [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin).
+   - Free trial available [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link).
+ - Offers additional features and a dashboard. + +2. **Self-hosting**: + - Open-source option to deploy ZenML on a Kubernetes cluster. + - Create a cluster using cloud provider documentation: + - [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) + - [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) + - [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) + +#### Connecting to a Deployed ZenML +- Use the CLI to connect your local ZenML client to the server: + ```bash + zenml login + ``` +- This command initiates a browser-based validation process. +- Once connected, all metadata will be centrally tracked. +- To revert to local use, execute: + ```bash + zenml logout + ``` -#### Further Resources +#### Additional Resources +- For more deployment options and guides, visit: + - [Deploying ZenML](../../getting-started/deploying-zenml/README.md) + - Full how-to guides for various deployment methods (Docker, Hugging Face Spaces, Kubernetes). -- [Deploying ZenML](../../getting-started/deploying-zenml/README.md): Overview of deployment options and architecture. -- [Full how-to guides](../../getting-started/deploying-zenml/README.md): Instructions for deploying ZenML on various platforms (Docker, Hugging Face Spaces, Kubernetes). +This summary retains critical technical information and key points while ensuring clarity and conciseness. ================================================== === File: docs/book/user-guide/production-guide/connect-code-repository.md === -### Summary of ZenML Git Integration Documentation - -**Overview**: Connect a Git repository to ZenML to enhance collaboration and optimize Docker builds in MLOps projects. +### Summary of ZenML Git Repository Integration Documentation -#### Benefits of Connecting a Git Repository -- Reduces redundant Docker builds by reusing existing images based on Git commit hashes. -- Facilitates better code management and collaboration among team members. +**Overview**: Connecting a Git repository to ZenML enhances MLOps project collaboration, optimizes Docker builds, and improves code management. -#### Pipeline Execution Flow +**Pipeline Execution Flow**: 1. Trigger a pipeline run locally. -2. ZenML parses the `@pipeline` function for necessary steps. -3. Local client requests stack info from the ZenML server. -4. If using a Git repository, it checks for existing Docker images based on the current commit. +2. ZenML parses the `@pipeline` function. +3. Local client requests stack info from ZenML server. +4. If a Git repository is detected, it checks for existing Docker images based on the Git commit hash. 5. The orchestrator sets up the execution environment in the cloud. -6. Code is downloaded from the Git repository, and the existing Docker image is used. -7. Pipeline steps execute, storing artifacts in the cloud. -8. Execution status and metadata are reported back to the ZenML server. +6. Code is downloaded from the Git repository, using the existing Docker image. +7. Pipeline steps execute, storing artifacts in a cloud-based store. +8. Run status and metadata are reported back to the ZenML server. -#### Creating a GitHub Repository -1. Sign in to GitHub. +**Benefits**: Avoids redundant builds, enhances team collaboration, and ensures correct code versions are used for each run. + +### Creating a GitHub Repository +1. Sign in to [GitHub](https://github.com/). 2. Click "+" and select "New repository." 3. 
Name the repository, set visibility, and optionally add a README or .gitignore.
4. Click "Create repository."

@@ -1031,13 +1064,14 @@ git commit -m "Initial commit"
 git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git
 git push -u origin master
 ```
+*Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` accordingly.*

-#### Linking ZenML to GitHub
-1. Obtain a GitHub Personal Access Token (PAT):
+### Linking GitHub to ZenML
+1. **Get a GitHub Personal Access Token (PAT)**:
    - Go to GitHub settings > Developer settings > Personal access tokens.
    - Generate a new token with `contents` read-only access for the specific repository.
-
-2. Install GitHub integration and register the repository:
+
+2. **Install GitHub Integration and Register Repository**:
    ```sh
    zenml integration install github
    zenml code-repository register <REPO_NAME> --type=github \

@@ -1045,28 +1079,28 @@ zenml code-repository register <REPO_NAME> --type=github \
    --owner=YOUR_USERNAME --repository=YOUR_REPOSITORY_NAME \
    --token=YOUR_GITHUB_PERSONAL_ACCESS_TOKEN
    ```
+*Fill in `<REPO_NAME>`, `YOUR_USERNAME`, `YOUR_REPOSITORY_NAME`, and `YOUR_GITHUB_PERSONAL_ACCESS_TOKEN`.*

-#### Running the Pipeline
-- First run builds the Docker image:
+### Running the Training Pipeline
 ```bash
+# First run builds the Docker image
 python run.py --training-pipeline
-```
-- Subsequent runs skip Docker building:
-```python
+
+# Subsequent runs skip Docker building
 python run.py --training-pipeline
 ```

-For further details, refer to the ZenML Git Integration documentation.
+For more details, refer to the [ZenML Git Integration documentation](https://docs.zenml.io).

==================================================

=== File: docs/book/user-guide/production-guide/end-to-end.md ===

-### End-to-End MLOps Project with ZenML
+# End-to-End MLOps Project with ZenML

-This documentation outlines the steps to create an end-to-end MLOps project using ZenML, integrating advanced MLOps concepts.
+This documentation outlines the steps to create an end-to-end MLOps project using ZenML, integrating various advanced concepts.

-#### Key Concepts Covered:
+## Key Concepts Covered
 - Deploying ZenML
 - Abstracting infrastructure with stacks
 - Connecting remote storage

@@ -1074,14 +1108,14 @@ This documentation outlines the steps to create an end-to-end MLOps project usin
 - Configuring scalable pipelines
 - Connecting a Git repository

-#### Getting Started
-1. **Set Up Environment**: Create a fresh virtual environment and install dependencies:
+## Getting Started
+1. **Set up a virtual environment** and install dependencies:
    ```bash
    pip install "zenml[templates,server]" notebook
    zenml integration install sklearn -y
    ```

-2. **Initialize Project**: Use ZenML templates to set up the project:
+2. **Create a project using ZenML templates**:
    ```bash
    mkdir zenml_batch_e2e
    cd zenml_batch_e2e

@@ -1089,7 +1123,7 @@ This documentation outlines the steps to create an end-to-end MLOps project usin
    pip install -r requirements.txt
    ```

-   **Alternative Method**: Clone the ZenML example if the above doesn't work:
+   **Alternative**: Clone the e2e template from ZenML examples:
    ```bash
    git clone --depth 1 git@github.com:zenml-io/zenml.git
    cd zenml/examples/e2e

@@ -1097,41 +1131,40 @@ This documentation outlines the steps to create an end-to-end MLOps project usin
    zenml init
    ```

-#### Learning Outcomes
+## Learning Outcomes
The e2e project template demonstrates core ZenML concepts for supervised ML with batch predictions, building on the starter project.
Users are encouraged to run pipelines on a remote cloud stack and a tracked Git repository to reinforce learned concepts. -#### Conclusion -This guide prepares you to create an end-to-end MLOps project with ZenML, connected to cloud infrastructure. For further learning on advanced topics, refer to the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md). Good luck with your MLOps endeavors! +## Conclusion +This guide equips you with the knowledge to implement an end-to-end MLOps project using ZenML. For further learning, explore the advanced concepts in the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md). Good luck with your MLOps journey! ================================================== === File: docs/book/user-guide/production-guide/cloud-orchestration.md === -### Summary of Cloud Orchestration Documentation +# Orchestrate on the Cloud with ZenML -#### Overview -This documentation outlines how to transition MLOps pipelines from local execution to a cloud environment, utilizing cloud resources for scalability and robustness. Key components include: +This documentation covers transitioning MLOps pipelines from local execution to the cloud using ZenML, focusing on two key components: the orchestrator and the container registry. +## Key Components - **Orchestrator**: Manages workflow and execution of pipelines. - **Container Registry**: Stores Docker container images. -- **Remote Storage**: Completes the cloud stack for running pipelines. - -#### Cloud Stack Components -1. **Skypilot Orchestrator**: A simple option that provisions a VM on a public cloud to execute pipelines. -2. **Docker**: Used for packaging code into images that include all dependencies for pipeline execution. - -#### Pipeline Execution Sequence -1. User initiates a pipeline via `run.py`. -2. Client retrieves stack configuration from the server. -3. Client builds and pushes a Docker image to the container registry. -4. Client creates a run in the orchestrator, provisioning a VM. -5. Orchestrator pulls the Docker image from the registry. -6. Artifacts are stored in the artifact store (cloud storage). -7. Status updates are sent back to the ZenML server. -#### Setting Up Cloud Resources +These components, along with remote storage, form a basic cloud stack for running pipelines. + +## Basic Cloud Stack Setup +The recommended starting orchestrator is **Skypilot**, which provisions a VM on a public cloud. ZenML utilizes **Docker** to package code and dependencies into images that are pushed to the container registry. -##### AWS Setup +### Sequence of Events When Running a Pipeline +1. User runs a pipeline on the client machine, executing `run.py`. +2. The client retrieves stack info from the server. +3. The client builds and pushes an image to the container registry. +4. The client creates a run in the orchestrator, provisioning a VM. +5. The orchestrator pulls the image from the container registry. +6. Artifacts are stored in the artifact store (cloud storage). +7. The pipeline reports status back to the ZenML server. + +## Provisioning and Registering Components +### AWS Setup 1. Install integrations: ```shell zenml integration install aws skypilot_aws -y @@ -1151,14 +1184,14 @@ This documentation outlines how to transition MLOps pipelines from local executi zenml container-registry connect cloud_container_registry --connector cloud_connector ``` -##### GCP Setup +### GCP Setup 1. Install integrations: ```shell zenml integration install gcp skypilot_gcp -y ``` 2. 
Register service connector:
   ```shell
-  zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@<PATH_TO_SERVICE_ACCOUNT_JSON> --project_id=<PROJECT_ID> --generate_temporary_tokens=False
+  zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@<PATH_TO_SERVICE_ACCOUNT_JSON> --project_id=<PROJECT_ID>
   ```
3. Register orchestrator:
   ```shell
@@ -1171,7 +1204,8 @@ This documentation outlines how to transition MLOps pipelines from local executi
   zenml container-registry connect cloud_container_registry --connector cloud_connector
   ```

-##### Azure Setup
+### Azure Setup
+Due to compatibility issues, Azure users should use the Kubernetes orchestrator:
1. Install integrations:
   ```shell
   zenml integration install azure kubernetes -y
   ```
2. Register service connector:
   ```shell
@@ -1180,7 +1214,7 @@ This documentation outlines how to transition MLOps pipelines from local executi
   zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>
   ```
-3. Register Kubernetes orchestrator:
+3. Register orchestrator:
   ```shell
   zenml orchestrator register cloud_orchestrator --flavor kubernetes
   zenml orchestrator connect cloud_orchestrator --connector cloud_connector
@@ -1191,23 +1225,22 @@ This documentation outlines how to transition MLOps pipelines from local executi
   zenml container-registry connect cloud_container_registry --connector cloud_connector
   ```

-#### Running a Pipeline
-1. Register a new stack:
-   ```shell
-   zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry
-   ```
-2. Set the stack active:
-   ```shell
-   zenml stack set minimal_cloud_stack
-   ```
-3. Execute the training pipeline:
-   ```shell
-   python run.py --training-pipeline
-   ```
-
-Upon execution, the pipeline builds a Docker image, pushes it, and runs on a cloud VM, streaming logs back to the user.
+## Running a Pipeline
+After registering components, register a new stack:
+```shell
+zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry
+```
+Set the stack active:
+```shell
+zenml stack set minimal_cloud_stack
+```
+Run the training pipeline:
+```shell
+python run.py --training-pipeline
+```
+The pipeline will build a Docker image, push it, and execute on the cloud VM, streaming logs back to the client.

-For further exploration, refer to the [Component Guide](../../component-guide/README.md) for various stack components integrated with ZenML.
+For further exploration, refer to the [Component Guide](../../component-guide/README.md) for various integrated components.

==================================================

@@ -1215,21 +1248,23 @@ For further exploration, refer to the [Component Guide](../../component-guide/RE

# Production Guide Summary

-The ZenML production guide is designed for MLOps Engineers looking to implement MLOps in a workplace setting, building on the concepts from the Starter Guide. It focuses on transitioning from local pipeline execution to cloud-based production environments.
+The ZenML production guide is an advanced resource for MLOps Engineers, building on the Starter Guide. It is designed for ML practitioners looking to implement proofs of concept in their workplaces.

-## Key Topics Covered:
-- **Deploying ZenML**: Instructions for setting up ZenML in a production environment.
-- **Understanding Stacks**: Overview of the components and configurations of ZenML stacks.
-- **Connecting Remote Storage**: Guidance on integrating cloud storage solutions. -- **Orchestrating on the Cloud**: Techniques for managing workflows in cloud environments. -- **Configuring the Pipeline for Scalability**: Strategies to ensure pipelines can handle increased workloads. -- **Code Repository Configuration**: Steps to connect and manage code repositories effectively. +## Key Focus Areas: +- Transitioning from local pipeline execution to cloud production. +- Topics covered include: + - **Deploying ZenML**: Instructions for setting up ZenML in a production environment. + - **Understanding Stacks**: Overview of ZenML stacks and their components. + - **Connecting Remote Storage**: Guidelines for integrating cloud storage solutions. + - **Orchestrating on the Cloud**: Best practices for managing cloud-based orchestration. + - **Configuring the Pipeline for Scalability**: Techniques for scaling compute resources. + - **Code Repository Configuration**: Steps to connect a code repository for version control. ## Prerequisites: -- A prepared Python environment with `virtualenv` installed. -- Selection and setup of a cloud provider (AWS, GCP, Azure) with the necessary CLI tools authorized. +- A Python environment with `virtualenv` installed. +- A major cloud provider (AWS, GCP, Azure) selected, with respective CLIs installed and authorized. -By following this guide, users will complete an end-to-end MLOps project, serving as a practical reference for future implementations. +By following this guide, users will complete an end-to-end MLOps project, serving as a model for future implementations. ================================================== @@ -1237,35 +1272,24 @@ By following this guide, users will complete an end-to-end MLOps project, servin # ZenML LLMOps Guide Summary -The ZenML LLMOps Guide provides a comprehensive framework for integrating Large Language Models (LLMs) into MLOps workflows. It is intended for ML practitioners and MLOps engineers aiming to leverage LLMs while ensuring robust and scalable pipelines. - -## Key Topics Covered: -- **RAG with ZenML**: Introduction to Retrieval-Augmented Generation (RAG) and its implementation. -- **Data Handling**: - - Data ingestion and preprocessing. - - Generating embeddings and storing them in a vector database. -- **Inference and Evaluation**: - - Basic RAG inference pipeline. - - Evaluation metrics for retrieval and generation. - - Reranking techniques for improved retrieval. -- **Embeddings and Finetuning**: - - Finetuning embeddings and LLMs, including using Sentence Transformers. - - Synthetic data generation for training. - - Deployment of finetuned models. - -## Implementation Example: -The guide includes a practical application of a question-answering system using RAG, demonstrating the transition from a simple pipeline to a more complex setup involving finetuning and reranking. +The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps workflows, aimed at ML practitioners and MLOps engineers. Key topics include: -## Prerequisites: -- A Python environment with ZenML installed. -- Familiarity with concepts from the Starter and Production Guides. +- **RAG with ZenML**: Understanding and implementing Retrieval-Augmented Generation (RAG). +- **Data Handling**: Ingestion, preprocessing, and generating embeddings. +- **Vector Database**: Storing embeddings effectively. +- **Inference Pipeline**: Building a basic RAG inference pipeline. 
+- **Evaluation**: Metrics for retrieval and generation, including practical evaluation methods. +- **Reranking**: Techniques for improving retrieval results and evaluating reranking performance. +- **Finetuning**: Strategies for finetuning embeddings and LLMs, including synthetic data generation and using Sentence Transformers. +- **Deployment**: Steps for deploying finetuned models. -By the end of the guide, users will understand how to effectively utilize LLMs in MLOps workflows with ZenML, enabling the development of scalable LLM-powered applications. +The guide emphasizes a practical application—a question answering system for ZenML—demonstrating the transition from a simple RAG pipeline to advanced techniques like embedding finetuning and document reranking. -### Visuals: -The guide includes diagrams illustrating the simplified development and deployment of LLM-powered MLOps pipelines. +### Prerequisites +- Python environment with ZenML installed. +- Familiarity with the concepts in the Starter and Production Guides. -For detailed implementations and examples, refer to the specific sections linked within the guide. +By the end of the guide, users will understand how to effectively leverage LLMs in MLOps workflows, enabling the creation of scalable and maintainable applications. ================================================== @@ -1273,29 +1297,25 @@ For detailed implementations and examples, refer to the specific sections linked ### Summary: Finetuning Embeddings with Sentence Transformers -This documentation outlines the process for finetuning embeddings using the Sentence Transformers library. The pipeline involves loading a dataset, finetuning the model, evaluating the results, and visualizing them. +This documentation outlines the process of finetuning embeddings using the Sentence Transformers library within a ZenML pipeline. #### Key Steps in the Pipeline: - -1. **Data Loading**: +1. **Data Loading**: - Load data from Hugging Face or Argilla by using the `--argilla` flag: ```bash python run.py --embeddings --argilla ``` 2. **Finetuning Process**: - - **Model Loading**: Load the base model (`EMBEDDINGS_MODEL_ID_BASELINE`) using Sentence Transformers with SDPA for efficient training. - - **Loss Function**: Use a custom `MatryoshkaLoss`, which wraps `MultipleNegativesRankingLoss`, allowing simultaneous training across different embedding dimensions. - - **Dataset Preparation**: Load the training dataset from a specified path and save it as a temporary JSON file. - - **Evaluator**: Create an evaluator with `get_evaluator()` to assess model performance during training. + - **Model Loading**: Load the base model (`EMBEDDINGS_MODEL_ID_BASELINE`) using Sentence Transformers with efficient training via Flash Attention 2. + - **Loss Function**: Use `MatryoshkaLoss`, a wrapper around `MultipleNegativesRankingLoss`, allowing simultaneous training on different embedding dimensions. + - **Dataset Preparation**: Load the training dataset from a specified path using Hugging Face's `load_dataset` function. + - **Evaluator**: Create an evaluator with `get_evaluator` to assess model performance during training. - **Training Arguments**: Set hyperparameters (epochs, batch size, learning rate, etc.) using `SentenceTransformerTrainingArguments`. - **Trainer Initialization**: Initialize `SentenceTransformerTrainer` with the model, training arguments, dataset, and loss function, then call `trainer.train()` to start training. 
- - **Model Saving**: After training, save the finetuned model to the Hugging Face Hub: - ```python - trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) - ``` - - **Metadata Logging**: Log training parameters and hardware information. - - **Model Rehydration**: Save the model to a temporary file, reload it into a new instance to handle materialization errors. + - **Model Saving**: Save the finetuned model to Hugging Face Hub with `trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED)`. + - **Metadata Logging**: Log training metadata including parameters and hardware details. + - **Model Rehydration**: Save and reload the trained model to handle materialization errors. #### Simplified Code Snippet: ```python @@ -1315,7 +1335,9 @@ trainer.train() trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` -The finetuning process enhances model performance across various embedding sizes and ensures the model is versioned and tracked within ZenML for observability. The pipeline concludes with an evaluation of the base and finetuned embeddings, followed by result visualization. +The finetuning process enhances model performance across various embedding sizes and ensures the model is versioned and tracked within ZenML for observability. After training, the pipeline evaluates and visualizes the results of both base and finetuned embeddings. + +For further details, refer to the [latest ZenML documentation](https://docs.zenml.io). ================================================== @@ -1323,17 +1345,27 @@ The finetuning process enhances model performance across various embedding sizes ### Summary of Synthetic Data Generation with Distilabel -This documentation outlines the process of generating synthetic data using the `distilabel` library to fine-tune embeddings based on a pre-existing dataset of technical documentation. The dataset can be found [here](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0). +This documentation outlines the process of generating synthetic data using the `distilabel` library to fine-tune embeddings for a dataset of technical documentation. It leverages a previously created dataset from Hugging Face and employs LLMs to automate question generation for each content chunk. + +#### Key Components: + +1. **Dataset Overview**: + - The dataset consists of `page_content` and source URLs. + - The goal is to pair `page_content` with generated questions. -#### Pipeline Overview -1. Load the Hugging Face dataset. -2. Use `distilabel` to generate synthetic data. -3. Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. +2. **Pipeline Overview**: + - Load the Hugging Face dataset. + - Use `distilabel` to generate synthetic data. + - Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. -#### Synthetic Data Generation -`distilabel` allows for scalable knowledge distillation from LLMs, generating synthetic data or providing AI feedback. In this case, we will generate queries for documentation chunks using the `gpt-4o` model. +3. **Synthetic Data Generation**: + - `distilabel` allows scalable knowledge distillation from LLMs. + - The pipeline setup includes: + - Loading data from Hugging Face. + - Generating sentence pairs (queries) using `GenerateSentencePair`. + - The LLM used is `gpt-4o`, but other models can be utilized. 
-**Key Code Components:** +#### Code Snippet for Synthetic Query Generation: ```python import os from typing import Annotated, Tuple @@ -1344,10 +1376,12 @@ from distilabel.steps import LoadDataFromHub from distilabel.steps.tasks import GenerateSentencePair from zenml import step +synthetic_generation_context = "The text is a chunk from technical documentation of ZenML." + @step def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> Tuple[Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"]]: llm = OpenAILLM(model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY")) - + with distilabel.pipeline.Pipeline(name="generate_embedding_queries") as pipeline: load_dataset = LoadDataFromHub(output_mappings={"page_content": "anchor"}) generate_sentence_pair = GenerateSentencePair(triplet=True, action="query", llm=llm, input_batch_size=10, context=synthetic_generation_context) @@ -1358,15 +1392,14 @@ def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> return train_distiset["default"]["train"], test_distiset["default"]["train"] ``` -- The pipeline loads the dataset, maps `page_content` to `anchor`, and generates queries for each chunk, including both positive and negative queries. -#### Data Annotation with Argilla -After generating synthetic data, it is pushed to Argilla for inspection. Additional metadata is added for easier navigation: -- `parent_section`: Documentation section of the chunk. -- `token_count`: Number of tokens in the chunk. -- Similarity metrics between queries. +4. **Data Annotation with Argilla**: + - After generating synthetic data, it is pushed to Argilla for inspection. + - Additional metadata includes: + - `parent_section`, `token_count`, and cosine similarities between query types. + - The embeddings for the anchor column are generated using a specified model. -**Key Code for Formatting Data:** +#### Code Snippet for Formatting Data: ```python def format_data(batch): model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") @@ -1375,6 +1408,7 @@ def format_data(batch): return [vector.tolist() for vector in model.encode(batch_column)] batch["anchor-vector"] = get_embeddings(batch["anchor"]) + batch["question-vector"] = get_embeddings(batch["anchor"]) batch["positive-vector"] = get_embeddings(batch["positive"]) batch["negative-vector"] = get_embeddings(batch["negative"]) @@ -1386,10 +1420,11 @@ def format_data(batch): batch["similarity-anchor-negative"] = get_similarities(batch["anchor-vector"], batch["negative-vector"]) return batch ``` -- This function computes embeddings and similarity metrics for the generated queries. -#### Next Steps -After data inspection and potential cleaning in Argilla, the next phase involves fine-tuning the embeddings. The code can be executed without prior annotation, assuming the generated data quality is adequate. +5. **Next Steps**: + - After data inspection and potential cleaning, the focus will shift to fine-tuning the embeddings using the generated dataset, assuming quality is adequate. + +This summary encapsulates the essential steps and code snippets for generating synthetic data with `distilabel`, ensuring that critical information is retained for understanding and implementation. 
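The `format_data` snippet above calls a `get_similarities` helper that the summary never defines. A minimal sketch of what it plausibly computes — row-wise cosine similarity between two equally long lists of embedding vectors — is shown below (the function name and signature are assumptions inferred from the call sites, not code from the original guide):

```python
from typing import List

import numpy as np

def get_similarities(a_vectors: List[List[float]], b_vectors: List[List[float]]) -> List[float]:
    """Cosine similarity between corresponding pairs of embedding vectors."""
    similarities = []
    for a, b in zip(a_vectors, b_vectors):
        a_arr, b_arr = np.asarray(a), np.asarray(b)
        # Cosine similarity = dot product divided by the product of the norms
        similarities.append(float(a_arr @ b_arr / (np.linalg.norm(a_arr) * np.linalg.norm(b_arr))))
    return similarities
```

Computing these similarities up front makes it easy to sort and filter low-quality query/chunk pairs during inspection in Argilla.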
================================================== @@ -1397,80 +1432,88 @@ After data inspection and potential cleaning in Argilla, the next phase involves ### Summary of Documentation on Evaluating Finetuned Embeddings -This documentation outlines the process of evaluating finetuned embeddings and comparing them to base embeddings using the MatryoshkaLoss function. The evaluation steps are straightforward, as illustrated in the provided code. - -#### Key Code Snippet for Base Model Evaluation: -```python -from zenml import log_model_metadata, step - -def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: - evaluator = get_evaluator(dataset=dataset, model=model) - return evaluator(model) +This documentation outlines the process of evaluating finetuned embeddings and comparing them to original base embeddings using ZenML. The evaluation utilizes the same MatryoshkaLoss function and involves the following key steps: -@step -def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: - model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") - results = evaluate_model(dataset=dataset, model=model) +1. **Model Evaluation Function**: + - The `evaluate_model` function takes a dataset and a model, returning evaluation results as a dictionary of metrics. + - The `evaluate_base_model` function initializes the base model using `SentenceTransformer`, evaluates it on the dataset, and logs the results as model metadata. - base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} - log_model_metadata(metadata={"base_model_eval": base_model_eval}) + ```python + from zenml import log_model_metadata, step - return results -``` + def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: + evaluator = get_evaluator(dataset=dataset, model=model) + return evaluator(model) -#### Evaluation Results: -- Results are logged as model metadata in ZenML, allowing inspection via the Model Control Plane. -- The evaluation output is a dictionary of string keys and float values, versioned and tracked in the artifact store. + @step + def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: + model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") + results = evaluate_model(dataset=dataset, model=model) + base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} + log_model_metadata(metadata={"base_model_eval": base_model_eval}) + return results + ``` -#### Visualization: -- Results can be visualized using `PIL.Image` and `matplotlib` to compare base and finetuned model evaluations, represented as percentage values. +2. **Logging and Versioning**: + - Evaluation results are logged in ZenML and versioned for tracking. The results can be inspected in the Model Control Plane. -#### Insights: -- Finetuned embeddings improved recall across all dimensions, but further data refinement is needed for better performance. -- The finetuning used synthetic data from `distilabel` and `gpt-4o`, which may limit immediate improvements. +3. **Visualization**: + - Results can be visualized using `matplotlib`, allowing for easy comparison between base and finetuned model evaluations. 
The visualization shows improvements in recall across dimensions. -#### Model Control Plane: -- The Model Control Plane provides a unified interface to inspect artifacts, models, logged metadata, and associated pipeline runs. -- It allows users to compare evaluation values and inspect training parameters. +4. **Model Control Plane**: + - The Model Control Plane serves as a unified interface to inspect artifacts, models, metadata, and pipeline runs. It provides insights into the latest versions and evaluation metrics. -#### Next Steps: -- After evaluating embeddings, they can be integrated into the original RAG pipeline to regenerate embeddings and rerun evaluations. -- Future sections will cover LLM finetuning and deployment, with resources for starting LLM finetuning with ZenML. +5. **Next Steps**: + - After evaluating the embeddings, users can integrate them into the original RAG pipeline and perform further evaluations. The documentation also references upcoming sections on LLM finetuning and deployment, with links to relevant projects and guides. -For further exploration, refer to the provided links for detailed guides and project repositories. +This concise overview captures the essential technical details and processes involved in evaluating finetuned embeddings using ZenML, ensuring that critical information is retained for further exploration or implementation. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md === -**Summary: Finetuning Embeddings on Custom Synthetic Data** +### Summary of Documentation on Finetuning Embeddings -This documentation outlines the process of finetuning embeddings on synthetic data to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Initially, off-the-shelf embeddings are used, which serve as a baseline. To improve performance, embeddings should be finetuned on domain-specific data, particularly technical documentation. +**Objective**: Enhance retrieval performance by finetuning embeddings on custom synthetic data. -**Key Steps:** -1. **Generate Synthetic Data**: Utilize `distilabel` for synthetic data generation. -2. **Finetune Embeddings**: Use Sentence Transformers for embedding finetuning. -3. **Evaluate Embeddings**: Assess the finetuned embeddings and leverage ZenML's model control plane for systematic evaluation. +**Context**: This documentation is part of an older version of ZenML. For the latest version, refer to [ZenML documentation](https://docs.zenml.io). -**Libraries Used:** -- **Distilabel**: Generates synthetic data and provides AI feedback using LLMs. -- **Argilla**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration. +**Overview**: The guide focuses on optimizing embedding models using synthetic data generation and human feedback. While off-the-shelf embeddings provide a baseline, finetuning on domain-specific data can significantly improve performance in retrieval-augmented generation (RAG) pipelines. -Both libraries can be used independently but are more effective when combined. The entire process can be implemented via ZenML pipelines, and detailed instructions are available in the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). The finetuning process can be executed locally or on cloud compute. 
+**RAG Pipeline**: The process involves retrieving relevant documents from a vector database and generating responses using a language model. Finetuning embeddings on a dataset of technical documentation enhances the retrieval step and overall pipeline performance. -================================================== +**Steps Involved**: +1. **Generate Synthetic Data**: Use `distilabel` for synthetic data generation. +2. **Finetune Embeddings**: Utilize Sentence Transformers for embedding finetuning. +3. **Evaluate Finetuned Embeddings**: Leverage ZenML's model control plane for systematic evaluation. -=== File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === +**Libraries Used**: +- **`distilabel`**: Generates synthetic data and provides AI feedback, focusing on scalable knowledge distillation from LLMs. +- **`argilla`**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration. + +Both libraries can function independently but are more effective when used together within ZenML pipelines. + +**Code and Resources**: For practical implementation, refer to the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) for complete code examples. The finetuning process can be executed locally or on cloud compute. -### Summary: Finetuning LLMs +**Note**: This section is designed to provide a comprehensive understanding of the finetuning process while maintaining technical accuracy and clarity. -This documentation outlines the process of finetuning Large Language Models (LLMs) for specific tasks or to enhance performance and cost-effectiveness. Key points include: +================================================== + +=== File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === -- **Purpose of Finetuning**: While APIs like OpenAI and Anthropic are commonly used, finetuning an LLM on custom data can improve response generation, understanding of domain-specific terminology, prompt length, adherence to specific patterns, and latency optimization. +### Summary of LLM Finetuning Documentation -- **Benefits of Finetuning**: It can enhance the model's ability to generate structured responses, better comprehend specialized content, and reduce the context window needed for effective performance. +**Overview**: This documentation focuses on finetuning Large Language Models (LLMs) for specific tasks or to enhance performance and cost-effectiveness. It is part of the ZenML framework and discusses scenarios where finetuning is beneficial, especially in conjunction with Retrieval-Augmented Generation (RAG) systems. -- **Guide Structure**: The guide covers the following topics: +**Key Points**: +- **Purpose of Finetuning**: + - Improve response generation in specific formats. + - Enhance understanding of domain-specific terminology. + - Reduce prompt length for consistent outputs. + - Follow specific patterns or protocols efficiently. + - Optimize latency by minimizing context window size. 
+ +- **Guide Structure**: The guide includes the following sections: - [Finetuning in 100 lines of code](finetuning-100-loc.md) - [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) - [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) @@ -1479,252 +1522,241 @@ This documentation outlines the process of finetuning Large Language Models (LLM - [Deploying finetuned models](deploying-finetuned-models.md) - [Next steps](next-steps.md) -- **Implementation Guidance**: The steps for finetuning are straightforward, but understanding the need for finetuning, evaluating performance, and selecting appropriate data are crucial. +- **Finetuning Process**: The steps to finetune an LLM are straightforward. Understanding when to finetune, evaluating performance, and selecting appropriate data are crucial. -- **Example Repository**: For practical implementation, refer to the [llm-lora-finetuning repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning), which contains the complete code that can be executed locally (with a GPU) or on cloud platforms. +- **Example Repository**: For practical implementation, refer to the [`llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning), which contains the full code. This code can be executed locally (with a GPU) or on cloud platforms. -This guide emphasizes the importance of strategic decisions in the finetuning process rather than focusing on a specific use case. +**Note**: This documentation is an older version; for the latest updates, visit the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md === -### Summary: Fine-tuning an LLM in 100 Lines of Code +### Summary: Fine-tuning an LLM with ZenML -This documentation outlines a concise implementation of a fine-tuning pipeline for a language model (LLM) using the TinyLlama model (1.1B parameters). The example demonstrates loading the model, preparing a dataset, fine-tuning, and generating responses. +This documentation outlines a concise implementation of a fine-tuning pipeline for a language model (LLM) in 100 lines of code, specifically using the TinyLlama model. Key components include: #### Key Steps: +1. **Dataset Preparation**: A small instruction-tuning dataset is created with input-output pairs: + - Instructions and corresponding responses about "ZenML World" entities. -1. **Installation**: Required packages can be installed via: - ```bash - pip install datasets transformers torch accelerate>=0.26.0 - ``` +2. **Data Formatting and Tokenization**: + - Each example is formatted into a structured prompt: + ``` + ### Instruction: [user query] + ### Response: [desired response] + ``` + - Tokenization is performed with a maximum length of 128 tokens. -2. **Dataset Preparation**: A small instruction-tuning dataset is created with clear input-output pairs. - ```python - def prepare_dataset() -> Dataset: - data = [ - {"instruction": "Describe a Zenbot.", "response": "A Zenbot is a luminescent robotic entity..."}, - {"instruction": "What are Cosmic Butterflies?", "response": "Cosmic Butterflies are ethereal creatures..."}, - {"instruction": "Tell me about the Telepathic Treants.", "response": "Telepathic Treants are ancient, sentient trees..."} - ] - return Dataset.from_list(data) - ``` +3. 
**Model Selection**: + - The base model used is `TinyLlama/TinyLlama-1.1B-Chat-v1.0`, chosen for its small size and pre-training for chat tasks. -3. **Tokenization**: The dataset is formatted and tokenized for model training. - ```python - def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: - formatted_text = f"### Instruction: {example['instruction']}\n### Response: {example['response']}" - return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) - ``` +4. **Training Configuration**: + - Training parameters include: + - 3 epochs + - Batch size of 1 with gradient accumulation of 4 + - Learning rate of 2e-4 + - Mixed precision (bfloat16) + - Logging every 10 steps -4. **Model Fine-tuning**: The model is fine-tuned with specified training parameters. - ```python - def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0") -> Tuple[AutoModelForCausalLM, AutoTokenizer]: - tokenizer = AutoTokenizer.from_pretrained(base_model) - model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto") - dataset = prepare_dataset() - tokenized_dataset = dataset.map(lambda x: tokenize_data(x, tokenizer), remove_columns=dataset.column_names) - - training_args = TrainingArguments( - output_dir="./zenml-world-model", - num_train_epochs=3, - per_device_train_batch_size=1, - gradient_accumulation_steps=4, - learning_rate=2e-4, - bf16=True, - logging_steps=10, - save_total_limit=2, - ) +5. **Response Generation**: + - The fine-tuned model generates responses using a temperature of 0.7 and a maximum length of 128 tokens. - trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_dataset, data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)) - trainer.train() - return model, tokenizer - ``` +#### Code Snippet: +```python +import os +from typing import List, Dict +from datasets import Dataset +from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer, DataCollatorForLanguageModeling +import torch -5. **Response Generation**: The fine-tuned model generates responses based on new prompts. 
- ```python - def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer, max_length: int = 128) -> str: - inputs = tokenizer(f"### Instruction: {prompt}\n### Response:", return_tensors="pt").to(model.device) - outputs = model.generate(**inputs, max_length=max_length, temperature=0.7, num_return_sequences=1) - return tokenizer.decode(outputs[0], skip_special_tokens=True) - ``` +def prepare_dataset() -> Dataset: + data = [ + {"instruction": "Describe a Zenbot.", "response": "A Zenbot is a luminescent robotic entity..."}, + {"instruction": "What are Cosmic Butterflies?", "response": "Cosmic Butterflies are ethereal creatures..."}, + {"instruction": "Tell me about the Telepathic Treants.", "response": "Telepathic Treants are ancient, sentient trees..."} + ] + return Dataset.from_list(data) + +def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: + formatted_text = f"### Instruction: {example['instruction']}\n### Response: {example['response']}" + return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) + +def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"): + tokenizer = AutoTokenizer.from_pretrained(base_model) + tokenizer.pad_token = tokenizer.eos_token + model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto") + + dataset = prepare_dataset() + tokenized_dataset = dataset.map(lambda x: tokenize_data(x, tokenizer), remove_columns=dataset.column_names) + + training_args = TrainingArguments( + output_dir="./zenml-world-model", num_train_epochs=3, per_device_train_batch_size=1, + gradient_accumulation_steps=4, learning_rate=2e-4, bf16=True, logging_steps=10, save_total_limit=2 + ) + + trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_dataset, data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)) + trainer.train() + + return model, tokenizer + +def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer) -> str: + inputs = tokenizer(f"### Instruction: {prompt}\n### Response:", return_tensors="pt").to(model.device) + outputs = model.generate(**inputs, max_length=128, temperature=0.7) + return tokenizer.decode(outputs[0], skip_special_tokens=True) -6. **Testing the Model**: The model is tested with various prompts to demonstrate its response capabilities. +if __name__ == "__main__": + model, tokenizer = fine_tune_model() + test_prompts = ["What is a Zenbot?", "Describe the Cosmic Butterflies.", "Tell me about an unknown creature."] + for prompt in test_prompts: + print(f"\nPrompt: {prompt}\nResponse: {generate_response(prompt, model, tokenizer)}") +``` #### Limitations: -- The dataset is minimal; real tasks require larger datasets. -- Larger models may yield better results but need more resources. -- The training configuration is simplified for demonstration purposes. -- Evaluation metrics and validation data are necessary for production systems. +- The dataset is small, which may lead to poor response quality. +- Larger models could yield better results but require more resources. +- Minimal training epochs and simple learning rates are used for demonstration. 
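Because the guide is ZenML-centric, a natural extension — not shown in the summary — is to wrap the fine-tuning code in a ZenML step and pipeline so that each run is tracked. A minimal sketch, assuming `fine_tune_model` and `generate_response` from the snippet above are in scope:

```python
from zenml import pipeline, step

@step
def finetune_and_smoke_test() -> str:
    # Reuses the helpers from the snippet above; returning only the generated
    # text keeps the step output simple to materialize and version.
    model, tokenizer = fine_tune_model()
    return generate_response("What is a Zenbot?", model, tokenizer)

@pipeline
def tinyllama_finetuning_pipeline():
    finetune_and_smoke_test()

if __name__ == "__main__":
    tinyllama_finetuning_pipeline()
```

Running the pipeline this way records the run and its output artifact in ZenML, so later iterations can be compared against this baseline.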
#### Next Steps:
-The guide will cover more advanced topics, including:
-- Larger models and datasets
-- Evaluation metrics
-- Parameter-efficient fine-tuning techniques
-- Experiment tracking and model management
-- Deployment of fine-tuned models
-
-This implementation serves as a foundational example for understanding LLM fine-tuning.
+The documentation suggests exploring more robust fine-tuning techniques, including larger datasets, evaluation metrics, and model deployment strategies.

==================================================

=== File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune-llms.md ===

-### Summary: When to Finetune LLMs
-
-This guide provides a practical overview for finetuning large language models (LLMs) on custom data. Key points include:
+# Finetuning LLMs: When and Why

-- **Not a Universal Solution**: Finetuning may not solve every problem and can introduce technical debt. It should be the last resort after exploring other options.
-- **Diverse Use Cases**: LLMs can be applied beyond chatbot interfaces, often with lower failure rates in non-chatbot scenarios.
-
-### When to Consider Finetuning
+## Overview
+This guide provides a practical overview of finetuning large language models (LLMs) on custom data. It emphasizes that finetuning is not a universal solution and may introduce technical debt. Alternative uses for LLMs beyond chatbots are highlighted, and finetuning should be considered after exploring other approaches.

-Finetuning is beneficial in the following scenarios:
+## When to Finetune an LLM
+Finetuning is beneficial in specific scenarios:

1. **Domain-Specific Knowledge**: Necessary for deep understanding in specialized fields (e.g., medical, legal).
2. **Consistent Style/Format**: Required for outputs in specific styles, such as code generation.
3. **Improved Task Accuracy**: Needed for tasks critical to your application.
4. **Handling Proprietary Information**: Essential for confidential data that cannot be sent externally.
5. **Custom Instructions**: Repeated prompts can be integrated into the model to save on latency and costs.
6. **Improved Efficiency**: Can enhance performance with shorter prompts.

### Decision Flowchart
```mermaid
flowchart TD
     A[Should I finetune an LLM?] --> B{Is prompt engineering sufficient?}
     B -->|Yes| C[Use prompt engineering]
-    B -->|No| D{Is it a knowledge retrieval problem?}
+    B -->|No| D{Is it primarily a knowledge retrieval problem?}

     D -->|Yes| E{Is real-time data access needed?}
     E -->|Yes| F[Use RAG]
-    E -->|No| G{Is data volume large?}
+    E -->|No| G{Is data volume very large?}
     G -->|Yes| H[Consider hybrid: RAG + Finetuning]
     G -->|No| F

     D -->|No| I{Is it a narrow, specific task?}
-    I -->|Yes| J{Can a smaller model handle it?}
+    I -->|Yes| J{Can a smaller specialized model handle it?}
     J -->|Yes| K[Use smaller model]
     J -->|No| L[Consider finetuning]
-    I -->|No| M{Do you need consistent style? or format?}
+    I -->|No| M{Do you need consistent style or format?}
     M -->|Yes| L
     M -->|No| N{Is deep domain expertise required?}

-    N -->|Yes| O{Is the domain well-represented?}
+    N -->|Yes| O{Is the domain well-represented in base model?}
     O -->|Yes| P[Use base model]
     O -->|No| L
-    N -->|No| Q{Is data proprietary?}
+    N -->|No| Q{Is data proprietary/sensitive?}
     Q -->|Yes| R{Can you use API solutions?}
     R -->|Yes| S[Use API solutions]
     R -->|No| L
     Q -->|No| S
 ```

-### Alternatives to Finetuning
-
-Before finetuning, consider:
-
-- **Prompt Engineering**: Often effective without finetuning.
-- **Retrieval-Augmented Generation (RAG)**: More effective for specific knowledge bases.
+## Alternatives to Finetuning
+Before opting for finetuning, consider:
+- **Prompt Engineering**: Often sufficient for good results.
+- **Retrieval-Augmented Generation (RAG)**: Effective for specific knowledge bases.
 - **Smaller Task-Specific Models**: May outperform finetuned LLMs for narrow tasks.
-- **API-Based Solutions**: Simpler and cost-effective for non-sensitive data.
+- **API-Based Solutions**: Simpler and cost-effective if sensitive data handling is unnecessary.

-Finetuning can be powerful but should be approached cautiously, starting with simpler solutions before considering it as a necessary step.
+## Conclusion
+Finetuning LLMs can be powerful but should be approached carefully. Start with simpler solutions and consider finetuning only after exhausting alternatives and identifying clear benefits. The next section will cover practical considerations for finetuning LLMs.

==================================================

=== File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md ===

-### Summary of Finetuning LLMs Documentation
+### Summary: Getting Started with Finetuning LLMs

-This guide provides a structured approach to finetuning large language models (LLMs) tailored to specific tasks. Key steps include selecting a use case, gathering data, choosing a base model, and evaluating success.
+This guide provides a high-level overview of the initial steps for finetuning large language models (LLMs), focusing on selecting a use case, gathering data, choosing a base model, and evaluating success.

#### Quick Assessment Questions
Before starting, consider:
1. **Define Success**: Use measurable metrics (e.g., "95% accuracy in extracting order IDs").
2. **Data Readiness**: Ensure data is prepared (e.g., "1000 labeled support tickets").
-3. **Task Consistency**: Focus on consistent tasks (e.g., "Convert email to 5 specific fields").
+3. **Task Consistency**: Aim for specific tasks (e.g., "Convert email to 5 specific fields").
4. **Human Verification**: Ensure correctness can be verified (e.g., "Check if extracted date matches document").

#### Picking a Use Case
-Choose a small, manageable use case that cannot be easily solved by non-LLM methods. Examples include:
-- **Good Use Cases**: Structured data extraction, domain-specific classification, standardized response generation.
-- **Challenging Use Cases**: Open-ended chat, creative writing, general knowledge QA.
+Choose a small, manageable task that cannot be easily solved by non-LLM methods. For example, "triage customer support queries" is more specific than "answer all customer support emails." Ensure you can quickly evaluate the effectiveness of the approach.

#### Picking Data
-Select data that closely aligns with your use case. Aim for hundreds to thousands of examples. Examples of reusable data include:
-- Customer support email responses.
-- Manually extracted metadata.
+Select data that closely aligns with your use case to minimize the need for extensive annotation. Aim for hundreds to thousands of examples.
+ +**Good Use Cases**: +- **Structured Data Extraction**: Extracting order details from emails (500-1000 annotated emails). +- **Domain-Specific Classification**: Categorizing support tickets (1000+ labeled examples). +- **Standardized Response Generation**: Generating responses from documentation (500+ pairs). + +**Challenging Use Cases**: +- **Open-ended Chat**: Hard to measure success; consider alternative methods. +- **Creative Writing**: Subjective quality; focus on specific formats. #### Success Indicators Evaluate your use case using indicators: -- **Task Scope**: Specific tasks (e.g., "Extract purchase date"). -- **Output Format**: Structured outputs vs. free-form text. -- **Data Availability**: Sufficient examples ready for use. -- **Evaluation Method**: Clear metrics vs. subjective feedback. -- **Business Impact**: Tangible benefits vs. vague goals. +- **Task Scope**: Specific tasks are better than vague ones. +- **Output Format**: Structured outputs are preferable. +- **Data Availability**: Ensure sufficient examples exist. +- **Evaluation Method**: Use clear metrics rather than subjective feedback. +- **Business Impact**: Define tangible benefits. #### Picking a Base Model -Select a base model based on your use case: -- **Llama 3.1 8B**: Best for structured data extraction, requires 16GB GPU RAM. -- **Llama 3.1 70B**: Suitable for complex reasoning, requires 80GB GPU RAM. -- **Mistral 7B**: Good for general text generation, requires 16GB GPU RAM. -- **Phi-2**: Ideal for lightweight tasks, requires 8GB GPU RAM. - -#### Model Selection Matrix -```mermaid -graph TD - A[Choose Your Task] --> B{Structured Output?} - B -->|Yes| C[Llama-8B Base] - B -->|No| D{Complex Reasoning?} - D -->|Yes| E[Llama-70B Base] - D -->|No| F{Resource Constrained?} - F -->|Yes| G[Phi-2] - F -->|No| H[Mistral-7B] -``` - -#### Evaluating Success -Define success metrics early. For structured data extraction, consider: +Choose a model based on your task requirements: +- **Llama 3.1 8B**: Best for structured data extraction and classification (16GB GPU RAM). +- **Llama 3.1 70B**: Suitable for complex reasoning (80GB GPU RAM). +- **Mistral 7B**: Good for general text generation (16GB GPU RAM). +- **Phi-2**: Ideal for lightweight tasks and rapid prototyping (8GB GPU RAM). + +#### Evaluation of Success +Define clear metrics for success, especially for structured data extraction. Metrics may include: - Accuracy of extracted fields. -- Precision and recall for specific types. -- Processing time and error rates. +- Precision and recall for specific field types. +- Processing time per document. #### Next Steps -With a clear understanding of scoping, data selection, and evaluation, proceed to the technical implementation, starting with practical examples using the Accelerate library. +With a clear understanding of scoping, data selection, and evaluation, proceed to practical implementation in the next section, which covers finetuning using the Accelerate library. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md === -# Evaluation for LLM Finetuning +# Summary of LLM Finetuning Evaluations Documentation ## Overview -Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help ensure the model behaves as expected, catch issues early, and track progress over time. 
An incremental approach to building evaluations is recommended to avoid paralysis by analysis.
+Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help ensure models behave as expected, catch issues early, and track progress over time. An incremental approach to building evaluation sets is recommended to avoid paralysis and facilitate early implementation.

## Motivation and Benefits
-Key motivations for implementing evals include:
-1. **Prevent Regressions**: Ensure new changes do not negatively impact existing functionality.
-2. **Track Improvements**: Quantify model improvements with each iteration.
+Key motivations for thorough evals include:
+1. **Prevent Regressions**: Ensure new changes do not harm existing functionality.
+2. **Track Improvements**: Quantify and visualize model enhancements.
3. **Ensure Safety and Robustness**: Identify and mitigate risks, biases, or unexpected behaviors.

A robust evaluation strategy leads to more reliable and performant finetuned LLMs.

## Types of Evaluations
-While generic evaluation frameworks are common, custom evaluations tailored to specific use cases are also important. They can be categorized into:
-
-1. **Success Modes**: Focus on desired outputs, such as:
-   - Correct formatting
-   - Appropriate responses to prompts
-   - Desired behavior in edge cases
-
-2. **Failure Modes**: Target undesired outputs, including:
-   - Hallucinations
-   - Incorrect formats
-   - Biased or incoherent responses
+While generic evaluation frameworks are common, custom evaluations tailored to specific use cases are also important. Custom evals can be categorized into:
+1. **Success Modes**: Focus on desired outputs (e.g., correct formatting, appropriate responses).
+2. **Failure Modes**: Target undesired outputs (e.g., hallucinations, incorrect formats).

### Example Code for Custom Evals
```python
@@ -1736,40 +1768,40 @@ good_responses = {
}

for question, answers in good_responses.items():
    # Query the LLM once per question; calling it inside the any() generator
    # would re-query the model for every candidate answer, making the check
    # slower and non-deterministic.
    llm_response = query_llm(question)
    assert any(answer in llm_response for answer in answers)

bad_responses = {
    "who is the manager of the shopping center?": ["tom hanks", "spiderman"]
}

for question, answers in bad_responses.items():
    llm_response = query_llm(question)
    assert not any(answer in llm_response for answer in answers)
```

## Generalized Evals and Frameworks
-Generalized evals provide structured approaches to evaluation, including:
-- Organization of evals
+Generalized evals provide structured evaluation approaches, including:
+- Organizing evals
- Standardized metrics
-- Insights into overall performance
+- Insights into model performance

-Complement generalized evals with custom ones for specific needs. Recommended frameworks include:
+Examples of frameworks include:
- [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate)
- [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html)
- [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html)
- [langcheck](https://github.com/citadel-ai/langcheck)
- [nervaluate](https://github.com/MantisAI/nervaluate)

+Integrating these frameworks into pipelines, such as in the `llm-lora-finetuning` project, is straightforward.
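+
+As an illustrative sketch, a custom eval can be wrapped in a ZenML step so it runs on every pipeline execution; here a simple exact-match rate stands in for a framework-provided metric:
+
+```python
+from typing import Dict, List
+
+from zenml import step
+
+@step
+def run_custom_evals(predictions: List[str], references: List[str]) -> Dict[str, float]:
+    """Hypothetical eval step: exact-match rate as a stand-in for a framework metric."""
+    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
+    return {"exact_match_rate": matches / max(len(predictions), 1)}
+```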
+ ## Data and Tracking -Regularly analyze inference data to identify patterns and areas for improvement. Implement comprehensive logging early on to track model behavior and progress. Consider using frameworks for structured data collection and analysis, such as: +Regular analysis of inference data is crucial for identifying patterns and areas for improvement. Implement comprehensive logging early on to track model behavior and performance. Recommended tools for data collection and analysis include: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) - [langsmith](https://smith.langchain.com/) - [langfuse](https://langfuse.com/) - [braintrust](https://www.braintrust.dev/) -Creating simple dashboards to visualize core metrics can effectively monitor progress and assess the impact of changes. Focus on key metrics aligned with iteration goals, and prioritize simplicity over perfection. +Creating simple dashboards to visualize core metrics can help monitor progress and assess the impact of changes. Prioritize simplicity over perfection in initial implementations. ================================================== @@ -1779,26 +1811,26 @@ Creating simple dashboards to visualize core metrics can effectively monitor pro After iterating on your finetuned model, consider the following key areas: -- Identify factors that improve or worsen model performance. -- Determine the minimum viable model size. -- Assess the feasibility of iteration within your company's hardware constraints. -- Ensure the model effectively addresses the business use case. +- **Model Improvement**: Identify what enhances or detracts from model performance. +- **Model Size Limits**: Determine the smallest viable model size. +- **Process Alignment**: Ensure iteration time aligns with company processes and hardware limitations. +- **Business Use Case**: Confirm the model effectively addresses the intended business problem. -Next stages may involve: +Next steps may involve: -- Scaling for more users or real-time scenarios. -- Meeting critical accuracy requirements, potentially necessitating a larger model. -- Integrating LLM finetuning into your business systems, including monitoring and evaluation. +- **Scaling**: Addressing increased user demand or real-time requirements. +- **Accuracy**: Fine-tuning larger models to meet critical accuracy needs. +- **Production Integration**: Incorporating monitoring, logging, and evaluation into your business systems. -While it may be tempting to switch to larger models, enhancing your dataset is often more impactful, especially if starting with limited examples. Focus on improving data quality before upgrading to more powerful models. +While it may be tempting to switch to larger models, focus on improving your data quality first, especially if starting with limited examples. Consider enhancing your dataset through a flywheel approach or synthetic data generation before upgrading your model. ## Resources -Recommended resources for further learning on LLM finetuning: +Recommended resources for LLM finetuning: -- [Mastering LLMs Course](https://parlance-labs.com/education/) - Video course by Hamel Husain and Dan Becker. -- [Phil Schmid's Blog](https://www.philschmid.de/) - Examples of LLM finetuning techniques. -- [Sam Witteveen's YouTube Channel](https://www.youtube.com/@samwitteveenai) - Videos on finetuning, prompt engineering, and base model explorations. 
+- **[Mastering LLMs Course](https://parlance-labs.com/education/)**: Video course by Hamel Husain and Dan Becker. +- **[Phil Schmid's Blog](https://www.philschmid.de/)**: Offers worked examples of LLM finetuning. +- **[Sam Witteveen's YouTube Channel](https://www.youtube.com/@samwitteveenai)**: Covers topics from finetuning to prompt engineering with practical examples. ================================================== @@ -1806,59 +1838,76 @@ Recommended resources for further learning on LLM finetuning: # Deployment Options for Finetuned LLMs -Deploying your finetuned LLM is essential for real-world applications. Key considerations include: +Deploying a finetuned LLM is essential for integrating your model into real-world applications. This process requires careful planning to ensure performance, reliability, and cost-effectiveness. ## Deployment Considerations -- **Resource Requirements**: LLMs need substantial RAM, processing power, and specialized hardware. Balance performance and cost based on use case. -- **Real-Time Needs**: Plan for immediate responses, failover scenarios, and conduct load testing. -- **Streaming vs. Non-Streaming**: Choose based on latency and resource use. -- **Optimization Techniques**: Use methods like quantization to reduce resource usage, but evaluate their impact on performance. + +Key factors influencing deployment include: + +- **Resource Requirements**: LLMs demand significant RAM and processing power. Choose hardware that balances performance and cost based on your use case. +- **Real-Time Needs**: Consider latency, failover scenarios, and load testing to prepare for user demand. +- **Streaming vs. Non-Streaming**: Each approach has trade-offs regarding latency and resource usage. +- **Optimization Techniques**: Methods like quantization can reduce resource usage but may affect performance, necessitating rigorous evaluation. ## Deployment Options and Trade-offs -1. **Roll Your Own**: Set up and manage your own infrastructure (e.g., Docker, FastAPI). Offers control but requires expertise. -2. **Serverless Options**: Scalable and cost-efficient, but may face latency issues due to cold starts. -3. **Always-On Options**: Minimizes latency but incurs costs during idle periods. -4. **Fully Managed Solutions**: Simplifies deployment but may limit flexibility and increase costs. -Consider team expertise, budget, load patterns, and specific requirements when choosing a deployment option. +1. **Roll Your Own**: Set up and manage your infrastructure for maximum control, typically using Docker (e.g., FastAPI). +2. **Serverless Options**: Scalable and cost-efficient, but may suffer from cold start latency. +3. **Always-On Options**: Constantly running models minimize latency but incur higher costs. +4. **Fully Managed Solutions**: Simplify deployment but may offer less flexibility and higher costs. + +Consider your team's expertise, budget, expected load, and specific requirements when selecting an option. ## Deployment with vLLM and ZenML -[vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML integrates with vLLM for easy deployment. +[vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML provides a [vLLM integration](../../../component-guide/model-deployers/vllm.md) for easy deployment. 
+ +### Code Example ```python from zenml import pipeline -from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() -def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "my_finetuned_llm"]: +def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> VLLMDeploymentService: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` -The `model` argument can be a local path or a Hugging Face Hub ID. +The `model` argument can be a local path or a Hugging Face Hub ID, deploying the model locally for batch inference via an OpenAI-compatible API. ## Cloud-Specific Deployment Options -- **AWS**: Use Amazon SageMaker for managed ML services, or AWS Lambda with API Gateway for serverless deployment. Amazon ECS or EKS with Fargate offers more control. -- **GCP**: Google Cloud AI Platform provides managed services similar to SageMaker. Cloud Run offers serverless options, while GKE allows for containerized model deployment. -## Architectures for Real-Time Customer Engagement -Deploy models behind a load balancer with auto-scaling for responsiveness. Implement caching (e.g., Redis) for frequent responses and use message queues (e.g., Amazon SQS) for complex queries. Consider edge computing for global deployments to reduce latency. +- **AWS**: Use Amazon SageMaker for managed LLM deployment, AWS Lambda with API Gateway for serverless, or ECS/EKS with Fargate for more control. +- **GCP**: Google Cloud AI Platform offers managed services similar to SageMaker, while Cloud Run provides a serverless option. GKE is suitable for containerized models. + +## Architectures for Real-Time Engagement + +To engage customers in real-time, consider: + +- **Load Balancing**: Deploy multiple instances behind a load balancer with auto-scaling. +- **Caching**: Use Redis to store frequent responses and reduce model load. +- **Asynchronous Processing**: Implement message queues (e.g., SQS, Pub/Sub) for complex queries. +- **Edge Computing**: Utilize services like AWS Lambda@Edge for reduced latency. ## Reducing Latency and Increasing Throughput -- **Model Optimization**: Use quantization and distillation to enhance performance. -- **Hardware Acceleration**: Leverage GPU instances for faster inference. -- **Request Batching**: Process multiple inputs simultaneously to increase throughput. -- **Monitoring**: Continuously measure and optimize the deployment. + +Optimize for low latency and high throughput by: + +- **Model Optimization**: Use quantization and distillation to reduce model size and inference time. +- **Hardware Acceleration**: Leverage GPU instances for faster processing. +- **Request Batching**: Process multiple inputs in one forward pass. +- **Monitoring and Profiling**: Continuously measure and optimize your inference pipeline. ## Monitoring and Maintenance -Key areas to monitor include: + +Post-deployment, focus on: + 1. **Evaluation Failures**: Regularly assess model performance. -2. **Latency Metrics**: Ensure response times meet requirements. -3. **Load Patterns**: Analyze user interactions for scaling decisions. -4. **Data Analysis**: Identify trends and biases in model inputs/outputs. +2. **Latency Metrics**: Monitor response times. +3. **Load Patterns**: Analyze user interactions for scaling and optimization. +4. **Data Analysis**: Review inputs/outputs for trends and biases. 
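+
+A minimal latency-tracking sketch (the `generate` call is a hypothetical stand-in for your model's inference endpoint):
+
+```python
+import statistics
+import time
+
+latencies = []
+
+def timed_generate(prompt: str) -> str:
+    start = time.perf_counter()
+    response = generate(prompt)  # hypothetical inference call
+    latencies.append(time.perf_counter() - start)
+    return response
+
+def latency_report() -> dict:
+    # quantiles(n=100) yields 99 cut points; indices 49/94/98 are p50/p95/p99.
+    cuts = statistics.quantiles(latencies, n=100)
+    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}
+```
+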
-Ensure compliance with data protection regulations in logging practices. By implementing these strategies, you can maintain optimal performance of your finetuned LLM. +Ensure compliance with privacy regulations in your logging practices. By implementing these strategies, you can maintain optimal performance for your finetuned LLM. ================================================== @@ -1866,25 +1915,24 @@ Ensure compliance with data protection regulations in logging practices. By impl # Finetuning an LLM with Accelerate and PEFT -This documentation outlines the process of finetuning language models using the Viggo dataset, which consists of over 5,000 pairs of structured meaning representations and natural language descriptions for video game dialogues. The goal is to train models to generate fluent responses from these structured inputs. +This documentation outlines the process of finetuning a language model (LLM) using the Viggo dataset, which contains over 5,000 pairs of structured meaning representations and their corresponding natural language descriptions for video game dialogues. ## Finetuning Pipeline -The finetuning pipeline includes the following steps: - +The finetuning pipeline consists of the following steps: 1. **prepare_data**: Load and preprocess the Viggo dataset. 2. **finetune**: Finetune the model on the dataset. 3. **evaluate_base**: Evaluate the base model before finetuning. 4. **evaluate_finetuned**: Evaluate the finetuned model. -5. **promote**: Promote the best-performing model to "staging" in the Model Control Plane. +5. **promote**: Promote the best model to "staging" in the Model Control Plane. For initial experiments, it is recommended to start with smaller models (e.g., Llama 3.1 family at ~8B parameters) to facilitate quick iterations. ## Implementation Details -The `prepare_data` step is minimal, focusing on loading and tokenizing the dataset. Care should be taken with input data formatting, especially for instruction-tuned models. Logging inputs and outputs during finetuning is advised. +The `prepare_data` step loads data from the Hugging Face hub and tokenizes it. Care should be taken with input data formatting, especially for instruction-tuned models. Logging inputs and outputs is advised. -The finetuning process utilizes the `accelerate` library for multi-GPU support. The core finetuning code is as follows: +Finetuning utilizes the `accelerate` library for multi-GPU support. The core finetuning code is as follows: ```python model = load_base_model(base_model_id, use_accelerate=use_accelerate) @@ -1896,10 +1944,8 @@ trainer = transformers.Trainer( args=transformers.TrainingArguments( output_dir=output_dir, per_device_train_batch_size=per_device_train_batch_size, - max_steps=max_steps, learning_rate=lr, logging_dir="./logs", - save_strategy="steps", evaluation_strategy="steps", do_eval=True, ), @@ -1908,19 +1954,26 @@ trainer = transformers.Trainer( ) ``` -Key points: -- `ZenMLCallback` logs metrics to ZenML. -- `gradient_checkpointing_kwargs` enables gradient checkpointing when using Accelerate. -- Evaluation metrics are computed using the `evaluate` library, focusing on ROUGE scores (ROUGE-N, ROUGE-L, ROUGE-W, ROUGE-S). +### Evaluation Metrics + +The evaluation uses the `evaluate` library to compute ROUGE scores: +- **ROUGE-N**: n-gram overlap. +- **ROUGE-L**: Longest Common Subsequence. +- **ROUGE-W**: Weighted Longest Common Subsequence. +- **ROUGE-S**: Skip-bigram statistics. + +These metrics help assess the quality of generated text. 
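+
+A minimal sketch of computing these scores with the `evaluate` library (the example strings are placeholders, not Viggo data):
+
+```python
+# pip install evaluate rouge_score
+import evaluate
+
+rouge = evaluate.load("rouge")
+scores = rouge.compute(
+    predictions=["the rock is an action movie"],
+    references=["the rock is a good action film"],
+)
+print(scores)  # aggregated scores keyed rouge1, rouge2, rougeL, rougeLsum
+```
+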
## Using the ZenML Accelerate Decorator -ZenML offers a `@run_with_accelerate` decorator for simplified distributed training setup: +ZenML provides the `@run_with_accelerate` decorator for easier distributed training setup: ```python -@run_with_accelerate(num_processes=4, multi_gpu=True, mixed_precision='bf16') +from zenml.integrations.huggingface.steps import run_with_accelerate + +@run_with_accelerate(num_processes=4, multi_gpu=True) @step -def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id, output_dir): +def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id: str, output_dir: str): model = load_base_model(base_model_id, use_accelerate=True) trainer = transformers.Trainer( @@ -1931,32 +1984,42 @@ def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id, return trainer.model ``` -This approach separates distributed training configuration from model logic and requires a properly configured Docker environment with CUDA support. +### Docker Configuration -## Dataset Iteration +Ensure your Docker environment is configured with CUDA support and necessary dependencies: + +```python +from zenml import pipeline +from zenml.config import DockerSettings + +docker_settings = DockerSettings( + parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", + requirements=["accelerate", "torchvision"] +) + +@pipeline(settings={"docker": docker_settings}) +def finetuning_pipeline(...): + # Your pipeline steps here +``` -Careful attention to input data is crucial. Poor performance post-finetuning may indicate issues with data formatting or tokenizer mismatches. Regular inspection of data at all stages is recommended. Consider supplementing or synthetically generating data if necessary. +## Dataset Iteration -As evaluations are established, focus on optimal parameters and their effects. Future considerations include: -- Improved evaluations -- Model serving and inference -- Integration within existing production architecture +Careful attention to input data is crucial. Poorly formatted data can lead to degraded model performance. Regular inspection of data at all stages is recommended. Consider augmenting or synthetically generating data if needed. -A goal may be to minimize model size while maintaining acceptable performance for specific use cases. Evaluations play a key role in achieving this balance. +As you progress, focus on evaluations and optimal parameters to measure model performance. Consider how to effectively serve the model and integrate it into existing architectures. Strive for smaller models that meet your use case requirements, as they often yield better outcomes. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md === -### Summary of Data Ingestion and Preprocessing for RAG Pipelines with ZenML +### Summary: Ingesting and Preprocessing Data for RAG Pipelines with ZenML -**Overview**: This documentation outlines the steps to ingest and preprocess data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. The process involves scraping, loading, and preprocessing documents to train retriever and generator models. +This documentation outlines the process of ingesting and preprocessing data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. #### Data Ingestion - -1. **Initial Setup**: The first step is to gather a corpus of documents and relevant metadata. 
ZenML can integrate with various tools for data ingestion, preprocessing, and indexing. - -2. **URL Scraping**: A ZenML step is created to scrape URLs from ZenML documentation. The `url_scraper` function utilizes a helper utility to retrieve a unique set of URLs. +1. **Purpose**: The initial step involves ingesting data (documents and metadata) for training retriever and generator models. +2. **Integration**: ZenML integrates with various tools for managing data ingestion, including downloading, preprocessing, and indexing documents. +3. **URL Scraping**: A ZenML step can be created to scrape relevant URLs from ZenML documentation: ```python from typing import List @@ -1965,13 +2028,15 @@ A goal may be to minimize model size while maintaining acceptable performance fo from steps.url_scraping_utils import get_all_pages @step - def url_scraper() -> Annotated[List[str], "urls"]: - docs_urls = get_all_pages("https://docs.zenml.io") + def url_scraper(docs_url: str = "https://docs.zenml.io") -> Annotated[List[str], "urls"]: + docs_urls = get_all_pages(docs_url) log_artifact_metadata({"count": len(docs_urls)}) return docs_urls ``` -3. **Loading Documents**: The `web_url_loader` step loads and parses HTML pages using the `unstructured` library, simplifying text extraction. + - The `get_all_pages` function retrieves unique URLs from the documentation, focusing on the latest releases. + +4. **Document Loading**: The `unstructured` library is used to load and parse HTML pages: ```python from typing import List @@ -1980,76 +2045,81 @@ A goal may be to minimize model size while maintaining acceptable performance fo @step def web_url_loader(urls: List[str]) -> List[str]: - return ["\n\n".join([str(el) for el in partition_html(url)]) for url in urls] + document_texts = [] + for url in urls: + elements = partition_html(url=url) + document_texts.append("\n\n".join(map(str, elements))) + return document_texts ``` #### Data Preprocessing - -1. **Chunking Documents**: After loading, documents are preprocessed into manageable chunks. The `preprocess_documents` step splits long strings into smaller segments, balancing chunk size and overlap. - +1. **Chunking Strategy**: After loading documents, they need to be split into smaller chunks for efficient processing. The chunk size is critical for balancing retrieval effectiveness and LLM processing speed. + ```python import logging from typing import Annotated, List from utils.llm_utils import split_documents from zenml import ArtifactConfig, log_artifact_metadata, step + logging.basicConfig(level=logging.INFO) + logger = logging.getLogger(__name__) + @step(enable_cache=False) def preprocess_documents(documents: List[str]) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]: - log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50}) - return split_documents(documents, chunk_size=500, chunk_overlap=50) + try: + log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50}) + return split_documents(documents, chunk_size=500, chunk_overlap=50) + except Exception as e: + logger.error(f"Error in preprocess_documents: {e}") + raise ``` -2. **Chunk Size Considerations**: Choosing an appropriate chunk size is crucial. For documentation, a chunk size of 500 with a 50-character overlap is recommended to ensure relevant information is retained. + - The example uses a chunk size of 500 with a 50-character overlap to ensure important information is retained across chunks. 
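+
+The real `split_documents` helper lives in `utils/llm_utils.py`; a naive character-based sketch of the chunk/overlap mechanics might look like this:
+
+```python
+from typing import List
+
+def split_documents(documents: List[str], chunk_size: int = 500, chunk_overlap: int = 50) -> List[str]:
+    """Split each document into fixed-size chunks that overlap by `chunk_overlap` characters."""
+    chunks = []
+    step = chunk_size - chunk_overlap
+    for doc in documents:
+        for start in range(0, len(doc), step):
+            chunk = doc[start : start + chunk_size]
+            if chunk:
+                chunks.append(chunk)
+    return chunks
+```
+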
-#### Additional Notes - -- The documentation emphasizes the importance of understanding data structure to determine optimal chunk sizes. -- More complex preprocessing, such as text cleaning or metadata extraction, can be added as needed. -- For complete code examples and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). +#### Additional Considerations +- Depending on the data structure, chunk sizes may vary. Larger chunks may be necessary for complex concepts, while smaller chunks may suit conversational data. +- Further preprocessing may include text cleaning, handling code snippets, and metadata extraction. -This summary captures the essential steps and code snippets for setting up a RAG pipeline with ZenML, focusing on data ingestion and preprocessing. +For complete code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and the specific [steps code](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md === -### Summary of RAG Inference Documentation +# Simple RAG Inference Summary -This documentation outlines how to use Retrieval-Augmented Generation (RAG) components to generate responses based on queries using an index store of documents. +This documentation outlines the process of using RAG (Retrieval-Augmented Generation) components to generate responses based on indexed documents without requiring external libraries beyond the LLM interface and index store. -#### Running Inference +## Inference Query -To execute a query, use the following command in Python: +To run a query against the index store, use the following command: ```bash -python run.py --rag-query "your query here" --model=gpt4 +python run.py --rag-query "how do I use a custom materializer inside my own zenml steps? i.e. how do I set it? inside the @step decorator?" --model=gpt4 ``` -This command triggers a function call that utilizes the outputs and components of the RAG pipeline. +## Inference Pipeline Code -#### Inference Pipeline Code - -The core function for processing input with retrieval is defined as follows: +The inference pipeline consists of the following key function: ```python def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: - delimiter = "```" related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) - system_message = """You are a friendly chatbot. You can answer questions about ZenML, its features, and use cases. Respond in a concise, technically credible tone using only ZenML documentation.""" + system_message = """You are a friendly chatbot. You can answer questions about ZenML, its features, and its use cases. You respond in a concise, technically credible tone. You ONLY use the context from the ZenML documentation to provide relevant answers. 
If you are unsure or don't know, just say so."""
    messages = [
        {"role": "system", "content": system_message},
-        {"role": "user", "content": f"{delimiter}{input}{delimiter}"},
+        {"role": "user", "content": f"```{input}```"},
        {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)},
    ]
    return get_completion_from_messages(messages, model=model)
```

-#### Document Retrieval
+### Document Retrieval

-The `get_topn_similar_docs` function retrieves the most similar documents based on the query embedding:
+The function `get_topn_similar_docs` retrieves the most similar documents based on the query embedding:

```python
def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]:
@@ -2060,43 +2130,42 @@ def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extension
    return cur.fetchall()
```

-This function leverages the `pgvector` PostgreSQL extension for efficient similarity search.
+This function utilizes the `pgvector` PostgreSQL extension to efficiently order documents by similarity.

-#### Generating Responses
+### Generating Responses

-The `get_completion_from_messages` function generates a response from the LLM:
+The `get_completion_from_messages` function generates a response using the specified LLM:

```python
def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000):
+    model = MODEL_NAME_MAP.get(model, model)
    completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens)
    return completion_response.choices[0].message.content
```

-The `litellm` library provides a unified interface for various LLMs, facilitating experimentation with different models without code rewrites.
+`litellm` serves as a universal interface for various LLMs, allowing flexibility in model selection.

-#### Conclusion
+## Conclusion

-This basic RAG inference pipeline retrieves relevant text chunks based on a query, laying the groundwork for more complex setups and potential improvements in retrieval performance through fine-tuning embeddings.
+This basic RAG inference pipeline retrieves relevant text chunks based on a query and generates responses using the indexed documents. Future sections will discuss improving retrieval by finetuning embeddings for better performance with diverse document sets.

-For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [`llm_utils.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py) file.
+For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [llm_utils.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py).

==================================================

=== File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database.md ===

-### Summary: Storing Embeddings in a Vector Database
+### Summary of Storing Embeddings in a Vector Database

-This guide explains how to store embeddings in a vector database, specifically using PostgreSQL, to facilitate efficient retrieval based on similarity to queries. Storing embeddings allows for quick access without the need to regenerate them each time.
+
+This documentation outlines the process of storing embeddings in a vector database, specifically PostgreSQL, for efficient retrieval based on similarity to queries.

#### Key Points:
-- **Vector Database**: PostgreSQL is chosen for its scalability and efficiency in handling high-dimensional vectors. Other vector databases can also be used.
-- **Setup Instructions**: For setting up PostgreSQL, refer to the [repository instructions](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).
+- **Purpose**: Store embeddings to avoid regenerating them for every document retrieval.
+- **Database Choice**: PostgreSQL is recommended due to its scalability and efficiency for high-dimensional vector storage. Other vector databases can also be used.
+- **Setup**: Instructions for setting up PostgreSQL using Supabase are available in the ZenML repository.

-#### Connection and Interaction:
-- Use the `psycopg2` package for database connections and raw SQL for interactions.
-
#### Code Overview:
-The following Python code snippet demonstrates the process of indexing documents and their embeddings:
+The following Python code demonstrates how to create and populate an embeddings table in PostgreSQL using the `psycopg2` package:

```python
from zenml import step
@@ -2122,11 +2191,8 @@ def index_generator(documents: List[Document]) -> None:
    """)
    conn.commit()

-    register_vector(conn)
-
    for doc in documents:
        # psycopg2's execute() returns None, so run the query and fetch separately.
        cur.execute("SELECT COUNT(*) FROM embeddings WHERE content = %s", (doc.page_content,))
        if cur.fetchone()[0] == 0:
            cur.execute("""
                INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url)
                VALUES (%s, %s, %s, %s, %s, %s)""",
@@ -2149,18 +2215,16 @@ def index_generator(documents: List[Document]) -> None:

#### Functionality:
- Connects to the database and creates the `vector` extension.
-- Creates an `embeddings` table if it doesn't exist.
-- Inserts documents and embeddings only if they are not already present.
+- Creates an `embeddings` table if it does not exist.
+- Inserts new embeddings only if they are not already present.
- Calculates index parameters and creates an index using the `ivfflat` method for cosine similarity.

#### Considerations:
-- Decide when to update embeddings based on data change frequency.
-- For large datasets, consider running on a GPU-enabled machine for performance.
-
-#### Next Steps:
-After storing embeddings, the next step involves retrieving relevant documents based on queries, enhancing the efficiency of the question-answering system.
+- The decision to update embeddings depends on data change frequency.
+- Running this step on a GPU-enabled machine may improve performance for larger datasets.
+- The index is optimized for similarity search, allowing for efficient retrieval of relevant documents based on queries.

-For the complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).
+For full code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).

==================================================

@@ -2170,103 +2234,81 @@ For the complete code and further details, refer to the [Complete Guide](https:/

This documentation outlines a simple implementation of a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code.
The pipeline performs the following tasks: -1. **Data Loading**: Utilizes a fictional dataset about 'ZenML World' as the corpus. -2. **Text Processing**: Splits text into chunks and tokenizes it (converts text into words). -3. **Query Handling**: Accepts a user query and retrieves the most relevant text chunks from the corpus. -4. **Response Generation**: Uses OpenAI's GPT-3.5 model to generate answers based on the relevant chunks. - -### Key Functions +1. **Data Loading**: Uses a fictional dataset about "ZenML World" as the corpus. +2. **Text Processing**: Splits text into chunks and tokenizes it (converts to words). +3. **Query Handling**: Accepts a user query and retrieves the most relevant text chunks. +4. **Answer Generation**: Utilizes OpenAI's GPT-3.5 model to generate answers based on the retrieved chunks. -- **`preprocess_text(text)`**: - - Converts text to lowercase, removes punctuation, and trims whitespace. +#### Key Functions -- **`tokenize(text)`**: - - Tokenizes preprocessed text into words. +- **`preprocess_text(text)`**: Normalizes the text by converting to lowercase, removing punctuation, and trimming whitespace. + +- **`tokenize(text)`**: Tokenizes the preprocessed text into words. - **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: - - Calculates Jaccard similarity between the query and corpus chunks to find the top `n` relevant chunks. + - Tokenizes the query. + - Computes Jaccard similarity between the query and each chunk in the corpus. + - Returns the top N relevant chunks based on similarity. - **`answer_question(query, corpus, top_n=2)`**: - - Retrieves relevant chunks and generates an answer using the OpenAI API. + - Retrieves relevant chunks using `retrieve_relevant_chunks`. + - Constructs a context string from the relevant chunks. + - Uses OpenAI's API to generate an answer based on the context. -### Example Code +#### Example Corpus ```python -import os -import re -import string -from openai import OpenAI - -def preprocess_text(text): - return re.sub(r"\s+", " ", text.lower().translate(str.maketrans("", "", string.punctuation))).strip() - -def tokenize(text): - return preprocess_text(text).split() - -def retrieve_relevant_chunks(query, corpus, top_n=2): - query_tokens = set(tokenize(query)) - similarities = [(chunk, len(query_tokens.intersection(tokenize(chunk))) / len(query_tokens.union(tokenize(chunk)))) for chunk in corpus] - return [chunk for chunk, _ in sorted(similarities, key=lambda x: x[1], reverse=True)[:top_n]] - -def answer_question(query, corpus, top_n=2): - relevant_chunks = retrieve_relevant_chunks(query, corpus, top_n) - if not relevant_chunks: - return "I don't have enough information to answer the question." 
- context = "\n".join(relevant_chunks) - client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) - response = client.chat.completions.create( - messages=[ - {"role": "system", "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}"}, - {"role": "user", "content": query}, - ], - model="gpt-3.5-turbo", - ) - return response.choices[0].message.content.strip() - -# Sample corpus corpus = [ - "The luminescent forests of ZenML World are inhabited by glowing Zenbots.", - "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully.", - "Telepathic Treants communicate through a quantum neural network.", - "Deep within the melodic caverns, Fractal Fungi create a symphony of sounds.", - "Holographic Hummingbirds hover near ethereal waterfalls.", - "Gravitational Geckos traverse inverted cliffs.", - "Plasma Phoenixes soar above the chromatic canyons.", - "Crystalline Crabs scuttle along the prismatic shores." + "The luminescent forests of ZenML World are inhabited by glowing Zenbots...", + "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully...", + "Telepathic Treants, ancient sentient trees, communicate through the quantum neural network...", + # Additional sentences... ] +``` -corpus = [preprocess_text(sentence) for sentence in corpus] +#### Example Queries -# Example queries -questions = [ - "What are Plasma Phoenixes?", - "What kinds of creatures live on the prismatic shores of ZenML World?", - "What is the capital of Panglossia?" -] +```python +question1 = "What are Plasma Phoenixes?" +answer1 = answer_question(question1, corpus) + +question2 = "What kinds of creatures live on the prismatic shores of ZenML World?" +answer2 = answer_question(question2, corpus) -for question in questions: - print(f"Question: {question}") - print(f"Answer: {answer_question(question, corpus)}") +irrelevant_question_3 = "What is the capital of Panglossia?" +answer3 = answer_question(irrelevant_question_3, corpus) ``` -### Output Example -The implementation generates answers based on the provided context, demonstrating the basic functionality of the RAG pipeline. The similarity check is simplistic, using Jaccard similarity, which can be improved with more advanced techniques in future iterations. +#### Output + +The output provides answers based on the relevant context retrieved from the corpus. If a question is not covered by the corpus, it returns a default response indicating insufficient information. + +#### Technical Notes + +- The similarity check uses the Jaccard coefficient, which is a basic method for measuring text similarity. +- This implementation is not optimized for performance or scalability; it serves as an illustrative example for understanding the RAG pipeline's components. + +For more advanced implementations, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md === -### Generating Embeddings for Retrieval +### Summary: Generating Embeddings for Retrieval -This section outlines the process of generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data in a high-dimensional space, enabling the retrieval of relevant information based on similarity rather than simple keyword matching. 
+This documentation outlines the process of generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data, allowing for improved retrieval of relevant information based on similarity rather than just keyword matching. -#### Key Concepts: -- **Embeddings**: High-dimensional vectors that represent data semantically, generated using models like those from the `sentence-transformers` library. -- **Purpose**: To improve retrieval accuracy by capturing the context of the data, allowing for better responses to user queries. +#### Key Points: + +- **Embeddings**: High-dimensional vector representations of data that facilitate semantic understanding. They are generated using models from the `sentence-transformers` library, which provides pre-trained models for encoding text. + +- **Purpose**: To quickly identify relevant data chunks during inference, improving the accuracy and relevance of responses to user queries. + +- **Model Used**: The `sentence-transformers/all-MiniLM-L12-v2` model is employed, producing embeddings with a dimensionality of 384. Smaller models can be used for speed, while larger models may enhance retrieval capabilities. -#### Code Implementation: -The following Python code demonstrates how to generate embeddings for a list of documents: +- **Dimensionality Reduction**: Techniques like UMAP and t-SNE can visualize embeddings in 2D, helping to identify patterns and relationships in the data. +#### Code Example for Generating Embeddings: ```python from typing import Annotated, List import numpy as np @@ -2281,91 +2323,70 @@ def generate_embeddings(split_documents: List[Document]) -> Annotated[List[Docum document_texts = [doc.page_content for doc in split_documents] embeddings = model.encode(document_texts) - + for doc, embedding in zip(split_documents, embeddings): doc.embedding = embedding - + return split_documents ``` -- **Model**: The `sentence-transformers/all-MiniLM-L12-v2` model generates embeddings with a dimensionality of 384. -- **Document Model Update**: The `Document` model is updated to include an `embedding` attribute for storing generated embeddings. 
- -#### Visualization: -To visualize the embeddings, dimensionality reduction techniques like t-SNE and UMAP can be applied: +#### Visualization Code: +Two functions are provided for visualizing the embeddings using t-SNE and UMAP: ```python -from matplotlib.colors import ListedColormap -import matplotlib.pyplot as plt -import numpy as np from sklearn.manifold import TSNE import umap -from zenml.client import Client - -artifact = Client().get_artifact_version('EMBEDDINGS_ARTIFACT_UUID') -embeddings = np.array([doc.embedding for doc in documents]) -parent_sections = [doc.parent_section for doc in documents] - -# Color mapping -unique_parent_sections = list(set(parent_sections)) -tol_colors = ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"] -section_color_dict = dict(zip(unique_parent_sections, tol_colors[:len(unique_parent_sections)])) +import matplotlib.pyplot as plt -def visualize(embeddings, parent_sections, method='tsne'): - if method == 'tsne': - embeddings_2d = TSNE(n_components=2, random_state=42).fit_transform(embeddings) - else: # method == 'umap' - embeddings_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings) +def tsne_visualization(embeddings, parent_sections): + tsne = TSNE(n_components=2, random_state=42) + embeddings_2d = tsne.fit_transform(embeddings) + # Plotting code... - plt.figure(figsize=(8, 8)) - for section in unique_parent_sections: - mask = [section == ps for ps in parent_sections] - plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], c=section_color_dict[section], label=section) - plt.title(f"{method.upper()} Visualization") - plt.legend() - plt.show() +def umap_visualization(embeddings, parent_sections): + umap_2d = umap.UMAP(n_components=2, random_state=42) + embeddings_2d = umap_2d.fit_transform(embeddings) + # Plotting code... ``` -- **Visualization**: The embeddings can be visualized using either t-SNE or UMAP, allowing for an understanding of how similar chunks are grouped based on their semantic meaning. - #### Conclusion: -This process generates and visualizes embeddings, which can be stored as artifacts for retrieval in a vector database, enhancing the RAG pipeline's performance. For further details, refer to the complete code in the [GitHub repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). +This stage emphasizes the importance of embeddings in RAG pipelines, allowing for modular and flexible integration with vector databases for efficient retrieval. For further details, refer to the complete code in the ZenML GitHub repository. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/understanding-rag.md === -### Understanding Retrieval-Augmented Generation (RAG) +### Summary of Retrieval-Augmented Generation (RAG) -**Overview**: RAG enhances LLM capabilities by integrating a retrieval mechanism to fetch relevant documents from a large corpus, addressing LLM limitations such as incorrect responses and token constraints. Proposed by Facebook in 2020, RAG combines retrieval and generation strengths, making it effective for tasks like question answering, summarization, and dialogue generation. +**Overview of RAG** +Retrieval-Augmented Generation (RAG) enhances the capabilities of Large Language Models (LLMs) by integrating a retrieval mechanism that fetches relevant documents from a large corpus to inform response generation. 
This method addresses LLM limitations, such as generating incorrect responses and handling extensive text inputs, by grounding outputs in relevant information. -#### RAG Pipeline Process +**RAG Pipeline Process** 1. **Retriever**: Identifies relevant documents from a corpus. -2. **Generator**: Produces responses based on retrieved documents. - - **Benefits**: - - Reduces incorrect responses by grounding answers in relevant information. - - Mitigates token limitations by focusing on a smaller document set. - - Cost-effective by optimizing resource usage. - -#### When to Use RAG -- Ideal for generating long-form responses requiring contextual understanding. -- Suitable for tasks needing grounded information. -- A practical starting point for exploring LLMs due to lower data and resource requirements. - -#### RAG in the ZenML Ecosystem -- ZenML facilitates RAG pipeline setup, integrating retrieval and generation models. -- Offers tools for data ingestion, index management, and artifact tracking. -- Supports scaling to complex setups, including fine-tuning and document reranking. - -**Advantages of ZenML**: -- **Reproducibility**: Easily rerun pipelines with preserved artifact versions for performance comparison. -- **Scalability**: Handle larger document corpora via cloud deployment and scalable vector stores. -- **Artifact Tracking**: Monitor and debug pipeline performance with associated metadata. -- **Maintainability**: Modular pipeline format allows easy updates and experimentation. -- **Collaboration**: Share pipelines and insights with team members using the ZenML dashboard. +2. **Generator**: Produces a response based on the retrieved documents. -### Summary -RAG is a powerful technique for enhancing LLMs by combining retrieval and generation, making it suitable for various applications. ZenML provides a robust framework for implementing and managing RAG pipelines, ensuring reproducibility, scalability, and collaboration. +This dual approach is effective for tasks requiring contextual understanding, such as question answering, summarization, and dialogue generation. It reduces the risk of inaccuracies and token limitations by focusing on a smaller, relevant document set, making it more cost-effective than pure generation-based methods. + +**When to Use RAG** +RAG is ideal for: +- Generating long-form responses needing contextual understanding. +- Tasks like question answering, summarization, and dialogue generation. +- Users new to LLMs, as it requires fewer resources and data compared to other methods. + +**Integration with ZenML** +ZenML facilitates the creation of RAG pipelines, providing tools for: +- Data ingestion and index management. +- Tracking RAG artifacts (hyperparameters, model weights, etc.) in the Model Control Plane. +- Scaling pipelines for larger document corpora and complex setups (e.g., finetuning embeddings, reranking documents). + +**Advantages of ZenML** +- **Reproducibility**: Rerun pipelines to update documents or parameters while preserving previous versions. +- **Scalability**: Deploy on cloud providers for larger document handling. +- **Artifact Tracking**: Monitor and debug pipeline performance through metadata and visualizations in the ZenML dashboard. +- **Maintainability**: Modular pipeline structure allows easy updates and experimentation. +- **Collaboration**: Share pipelines and insights with team members. + +ZenML provides a structured approach to building RAG pipelines, setting the stage for more advanced functionalities in future sections. 
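+
+The retriever/generator flow described above reduces to a short sketch (`retrieve` and `generate` are hypothetical stand-ins for the retrieval and LLM calls):
+
+```python
+def rag_answer(query: str) -> str:
+    # 1. Retriever: fetch the most relevant chunks for the query.
+    documents = retrieve(query, top_k=5)  # hypothetical retriever
+    context = "\n\n".join(documents)
+    # 2. Generator: ground the LLM's answer in the retrieved context.
+    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
+    return generate(prompt)  # hypothetical LLM call
+```
+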
================================================== @@ -2373,37 +2394,41 @@ RAG is a powerful technique for enhancing LLMs by combining retrieval and genera ### RAG Pipelines with ZenML -**Overview**: Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models to enhance the capabilities of Large Language Models (LLMs). This guide outlines the setup of RAG pipelines using ZenML, covering essential components such as data ingestion, index management, and artifact tracking. +Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models to enhance the capabilities of Large Language Models (LLMs). This guide outlines the setup of RAG pipelines using ZenML, focusing on key components such as data ingestion, index store management, and tracking artifacts. -**Key Points**: -- **LLM Limitations**: LLMs can generate human-like responses but may produce incorrect or inappropriate outputs, especially with ambiguous prompts. Most LLMs have token limits, with many handling significantly less than 1 million tokens, unlike some advanced models like Google's Gemini 1.5 Pro. +#### Key Topics Covered: +- **Purpose of RAG**: Addresses limitations of LLMs, which can generate incorrect responses, especially with ambiguous prompts, and have constraints on text length (most open-source LLMs handle fewer tokens than advanced models like Google's Gemini 1.5 Pro). -- **RAG Pipeline Components**: - 1. **Purpose of RAG**: Addresses the limitations of LLMs by integrating retrieval mechanisms. - 2. **Data Ingestion**: Process of collecting and preparing data for the pipeline. - 3. **Embeddings**: Utilization of embeddings to represent data, forming the basis for retrieval. - 4. **Vector Database**: Storage of embeddings in a vector database for efficient retrieval. - 5. **Artifact Tracking**: Use ZenML to track artifacts associated with the RAG process. +- **Data Ingestion and Preprocessing**: Steps to prepare data for the RAG pipeline. + +- **Embeddings**: Utilizing embeddings to represent data, forming the basis for the retrieval mechanism. + +- **Vector Database**: Storing embeddings efficiently for retrieval. -**Conclusion**: The guide culminates in demonstrating how these components work together to execute basic RAG inference. +- **Artifact Tracking**: Using ZenML to track RAG-related artifacts. + +#### Conclusion: +The guide culminates in demonstrating the integration of all components for basic RAG inference. For the latest documentation, refer to [ZenML's official site](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md === -### Implementing Reranking in ZenML +# Implementing Reranking in ZenML + +This documentation outlines how to implement reranking in ZenML within a RAG (Retrieval-Augmented Generation) pipeline. The reranker reorders retrieved documents based on their relevance to a given query. + +## Adding Reranking -This documentation outlines the integration of a reranker into an existing RAG (Retrieval-Augmented Generation) pipeline using the [`rerankers`](https://github.com/AnswerDotAI/rerankers/) package. The reranker reorders retrieved documents based on their relevance to a query. +The [`rerankers`](https://github.com/AnswerDotAI/rerankers/) package is used to integrate reranking into the pipeline. 
It provides a `Reranker` abstract class for custom implementations and supports various model types, including those from Hugging Face Hub and API-driven models. -#### Reranker Overview -- The `Reranker` abstract class allows the creation of custom rerankers or the use of pre-built models. -- It takes a query and a list of documents, returning a reordered list based on reranking scores. +### Example Code -#### Example Code for Reranking ```python from rerankers import Reranker ranker = Reranker('cross-encoder') + texts = [ "I like to play soccer", "I like to play football", @@ -2412,23 +2437,48 @@ texts = [ "Ginger cats aren't very smart", "I like to play basketball", ] + results = ranker.rank(query="What's your favorite sport?", docs=texts) ``` -The output will reorder documents, prioritizing those relevant to the query. -#### Reranking Function +### Sample Output + +```python +RankedResults( + results=[ + Result(doc_id=5, text='I like to play basketball', score=-0.465, rank=1), + Result(doc_id=0, text='I like to play soccer', score=-0.735, rank=2), + Result(doc_id=1, text='I like to play football', score=-0.968, rank=3), + Result(doc_id=2, text='War and Peace is a great book', score=-5.402, rank=4), + Result(doc_id=3, text='I love dogs', score=-5.586, rank=5), + Result(doc_id=4, text="Ginger cats aren't very smart", score=-5.949, rank=6) + ], + query="What's your favorite sport?", + has_scores=True +) +``` + +The reranker outputs documents ordered by relevance, with sports-related texts prioritized. + +### Rerank Function + A helper function can be added to rerank documents: + ```python def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = "flashrank") -> List[Tuple[str, str]]: ranker = Reranker(reranker_model) docs_texts = [f"{doc[0]} PARENT SECTION: {doc[2]}" for doc in documents] results = ranker.rank(query=query, docs=docs_texts) + return [(results.results[i].text, documents[results.results[i].doc_id][1]) for i in range(len(results.results))] ``` -This function returns a list of tuples containing reranked document text and original URLs. -#### Querying Similar Documents -The reranking function is utilized in a querying function: +This function takes a query and a list of documents (content and URL) and returns reranked documents with their original URLs. + +### Query Function + +The rerank function can be used in a querying function: + ```python def query_similar_docs(question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5) -> Tuple[str, str, List[str]]: embedded_question = get_embeddings(question) @@ -2444,43 +2494,42 @@ def query_similar_docs(question: str, url_ending: str, use_reranking: bool = Fal return (question, url_ending, urls) ``` -This function retrieves similar documents based on a question, optionally reranking them before returning the top URLs. -#### Further Exploration -For complete code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. +This function retrieves similar documents based on a question and optionally reranks them, returning the top five URLs. + +### Evaluation + +After integrating reranking, evaluate its performance to assess the quality of retrieved documents. 
For full code exploration, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file.

==================================================

=== File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md ===

-### Evaluating Reranking Performance
+### Evaluating Reranking Performance with ZenML

-This documentation outlines how to evaluate the performance of a reranking model using ZenML, focusing on comparing retrieval performance before and after reranking.
+This documentation outlines how to evaluate the performance of a reranking model in ZenML. The evaluation compares retrieval performance before and after applying reranking, using established metrics.

#### Key Steps in Evaluation

1. **Retrieval Evaluation Function**:
   The `perform_retrieval_evaluation` function assesses retrieval performance based on a sample of generated questions and relevant documents. A `check_retrieval` helper verifies whether the expected URL ending appears in any of the retrieved URLs, and the function returns the failure rate as a percentage.

   ```python
+   def check_retrieval(item, use_reranking):
+       question = item["generated_questions"][0]
+       url_ending = item["filename"].split("/")[-1]
+       _, _, urls = query_similar_docs(question, url_ending, use_reranking)
+       # Substring match: the retrieved entries are full URLs.
+       return any(url_ending in url for url in urls)
+
   def perform_retrieval_evaluation(sample_size: int, use_reranking: bool) -> float:
       dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train")
       sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size))
+       failures = sum(1 for item in sampled_dataset if not check_retrieval(item, use_reranking))
+       return round((failures / len(sampled_dataset)) * 100, 2)

-        failures = sum(
-            1 for item in sampled_dataset if not any(
-                item["filename"].split("/")[-1] in url for url in query_similar_docs(
-                    item["generated_questions"][0], item["filename"].split("/")[-1], use_reranking)[2]
-            )
-        )
-
-        failure_rate = (failures / len(sampled_dataset)) * 100
-        return round(failure_rate, 2)
   ```

2. **Evaluation Steps**:
+   Two steps are defined to evaluate retrieval performance with and without reranking:
   ```python
   @step
   def retrieval_evaluation_full(sample_size: int = 100) -> float:
@@ -2491,47 +2540,43 @@ This documentation outlines the performance of a reranking model
       return perform_retrieval_evaluation(sample_size, use_reranking=True)
   ```

-   These steps log and return the failure rates for retrieval systems with and without reranking.
-
-3. **Logging Failures**: Specific failure examples can be viewed in the logs, which help identify issues with generated questions.
+3. **Logging and Analysis**:
+   The results of the evaluations can be logged for analysis, allowing users to inspect specific failures.

4. 
**Visualization**: + Visualization of evaluation results can be achieved using a bar chart to compare failure rates and other metrics: + ```python + @step(enable_cache=False) + def visualize_evaluation_results(...): + scores = normalize_scores([...]) + fig, ax = plt.subplots(figsize=(10, 6)) + ax.barh(y_pos, scores, align="center") + ax.set_title(f"Evaluation Metrics for {pipeline_run_name}") + plt.tight_layout() + return save_plot_to_image(fig) + ``` -```python -@step(enable_cache=False) -def visualize_evaluation_results(...): - scores = [score / 20 for score in [small_retrieval_eval_failure_rate, ...]] - labels = ["Small Retrieval Eval Failure Rate", ...] - - fig, ax = plt.subplots(figsize=(10, 6)) - ax.barh(np.arange(len(labels)), scores, align="center") - ax.set_yticks(np.arange(len(labels))) - ax.set_yticklabels(labels) - ax.set_title(f"Evaluation Metrics for {step_context.pipeline_run.name}") - plt.tight_layout() - plt.savefig(buf, format="png") - return Image.open(buf) -``` +#### Running the Evaluation Pipeline -This function normalizes scores and generates a horizontal bar chart to visualize the evaluation metrics. +To run the evaluation pipeline: -#### Running the Evaluation Pipeline +1. Clone the project repository: + ```bash + git clone https://github.com/zenml-io/zenml-projects.git + ``` -To run the evaluation pipeline, clone the project repository and execute the evaluation command: +2. Navigate to the `llm-complete-guide` directory and follow the `README.md` instructions. -```bash -git clone https://github.com/zenml-io/zenml-projects.git -cd llm-complete-guide -python run.py --evaluation -``` +3. Execute the evaluation pipeline: + ```bash + python run.py --evaluation + ``` -This will execute the evaluation pipeline and display results on the dashboard. +This will output results to the ZenML dashboard, allowing for further inspection of performance metrics and logs. ### Conclusion -The documentation provides a comprehensive guide to evaluating reranking models in ZenML, including performance comparison, logging, visualization, and execution of the evaluation pipeline. +The documentation provides a clear framework for evaluating reranking models in ZenML, emphasizing the importance of comparing retrieval performance and visualizing results for better insights into model effectiveness. ================================================== @@ -2539,62 +2584,69 @@ The documentation provides a comprehensive guide to evaluating reranking models ### Summary: Adding Reranking to RAG Inference in ZenML -Rerankers enhance retrieval systems using LLMs by reordering retrieved documents based on additional features or scores, improving their quality. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. - -#### Key Points: -- **Reranking Purpose**: Increases relevance and quality of retrieved documents, leading to better LLM responses. -- **Workflow Context**: Reranking is an optional enhancement to an existing workflow that includes data ingestion, preprocessing, embeddings generation, and retrieval. -- **Evaluation Metrics**: Basic metrics are established to assess retrieval performance. +Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section outlines how to integrate a reranker into your RAG inference pipeline in ZenML. -#### Visual Reference: -- A workflow diagram illustrates the reranking process within the overall system. 
+**Key Points:** +- Rerankers are optional but can significantly enhance the relevance and quality of retrieved documents, leading to better responses from LLMs. +- The overall workflow includes data ingestion, preprocessing, embeddings generation, and retrieval, followed by evaluation metrics to assess performance. +- Reranking is an additional step that can be added to the existing setup for improved performance. -By implementing a reranker, users can optimize their retrieval systems for enhanced performance. +For more details and the latest updates, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md === -## Reranking Overview +## Summary of Reranking in Retrieval-Augmented Generation (RAG) -### What is Reranking? -Reranking refines the initial ranking of documents retrieved by a system, particularly in Retrieval-Augmented Generation (RAG). The initial retrieval often uses sparse methods like BM25 or TF-IDF, which may not effectively capture semantic meaning. Rerankers reorder documents by considering features such as semantic similarity and relevance scores, ensuring the LLM accesses the most relevant context for generating responses. +### Definition +Reranking refines the initial ranking of documents retrieved by a system, enhancing the relevance and quality of documents used for generating outputs in RAG. The initial retrieval typically employs sparse methods like BM25 or TF-IDF, which may not fully capture semantic meaning. Rerankers reorder documents based on features such as semantic similarity and relevance scores. ### Types of Rerankers -1. **Cross-Encoders**: Combine the query and document as input to produce a relevance score. They effectively capture interactions but are computationally intensive (e.g., BERT-based models). - -2. **Bi-Encoders**: Use separate encoders for the query and document, generating independent embeddings and computing similarity. They are more efficient but less effective at capturing interactions. - -3. **Lightweight Models**: Smaller, faster models (e.g., distilled versions) balance effectiveness and efficiency, suitable for real-time applications. +1. **Cross-Encoders**: + - Input: Concatenated query and document. + - Output: Relevance score. + - Example: BERT-based models. + - **Pros**: Effective interaction capture. + - **Cons**: Computationally expensive. + +2. **Bi-Encoders**: + - Input: Separate encoders for query and document. + - Output: Similarity score from independent embeddings. + - **Pros**: More efficient. + - **Cons**: Weaker interaction capture. + +3. **Lightweight Models**: + - Examples: Distilled models or small transformer variants. + - **Pros**: Faster and smaller footprint for real-time use. ### Benefits of Reranking in RAG -1. **Improved Relevance**: Identifies the most relevant documents for a query, enhancing the LLM's context. - -2. **Semantic Understanding**: Captures semantic meaning beyond keyword matching, retrieving semantically similar documents. - -3. **Domain Adaptation**: Fine-tuned on specific data to incorporate domain knowledge, improving performance in targeted industries. - +1. **Improved Relevance**: Identifies the most relevant documents for accurate LLM responses. +2. **Semantic Understanding**: Captures semantic meaning, allowing retrieval of documents that may not match keywords exactly. +3. 
**Domain Adaptation**: Can be fine-tuned on specific data to enhance performance in particular industries. 4. **Personalization**: Tailors document retrieval based on user preferences and historical interactions. -### Implementation -The next section will cover how to implement reranking in ZenML and integrate it into the RAG inference pipeline. +### Next Steps +The documentation will cover how to implement reranking in ZenML and integrate it into the RAG inference pipeline. + +For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/reranking/README.md === -**Summary: Adding Reranking to RAG Inference in ZenML** +### Summary: Adding Reranking to RAG Inference in ZenML -Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. +**Overview**: Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. -Previously, the workflow was established, covering data ingestion, preprocessing, embeddings generation, and retrieval, along with basic evaluation metrics for performance assessment. Reranking is an optional enhancement that can boost the relevance and quality of retrieved documents, leading to improved LLM responses. +**Key Points**: +- Rerankers are optional but can significantly improve the relevance and quality of retrieved documents, leading to better LLM responses. +- The workflow includes data ingestion, preprocessing, embeddings generation, retrieval, and evaluation metrics. +- Reranking is an additional step that optimizes the existing setup. -**Key Points:** -- Rerankers reorder retrieved documents based on additional features/scores. -- Integration of reranking is optional but beneficial for retrieval performance. -- Enhances the overall effectiveness of the LLM's responses. +**Visual Aid**: A workflow diagram illustrates the reranking process within the overall retrieval system. -![Reranking Workflow](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) +For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== @@ -2602,26 +2654,19 @@ Previously, the workflow was established, covering data ingestion, preprocessing ### Summary of Generation Evaluation in RAG Pipeline -#### Overview -The generation component of a Retrieval-Augmented Generation (RAG) pipeline generates answers based on retrieved context. Evaluating this component is subjective and lacks precise metrics, but several methods can be employed. +**Overview**: This documentation outlines methods to evaluate the generation component of a Retrieval-Augmented Generation (RAG) pipeline, focusing on generating answers based on retrieved context. Evaluation is subjective and involves both handcrafted tests and automated assessments using another LLM. #### Handcrafted Evaluation Tests -- Create examples to verify that generated outputs include or exclude specific terms based on known context. 
-- For instance, when asking about supported orchestrators, ensure terms like "Airflow" and "Kubeflow" are included, while "Flyte" and "Prefect" are excluded. -- Start with simple tests and expand as needed, focusing on common mistakes observed in outputs. - -**Example Tables:** -- **Bad Answers:** - | Question | Bad Words | - |----------|-----------| - | What orchestrators does ZenML support? | AWS Step Functions, Flyte, Prefect, Dagster | - -- **Good Responses:** - | Question | Good Words | - |----------|------------| - | What are the supported orchestrators in ZenML? | Kubeflow, Airflow | +- Create examples to verify generated outputs include or exclude specific terms based on known correct or incorrect responses. +- Example tests include checking if supported orchestrators like "Airflow" and "Kubeflow" are present while excluding unsupported ones like "Flyte" and "Prefect." +- A starter set of tests is available [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py#L28-L55). -**Testing Code Example:** +**Test Tables**: +- **Bad Answers**: Questions that should not include certain terms. +- **Bad Immediate Responses**: Questions that should not yield certain immediate responses. +- **Good Responses**: Questions that should include specific terms. + +**Example Code for Testing Bad Words**: ```python class TestResult(BaseModel): success: bool @@ -2632,27 +2677,33 @@ class TestResult(BaseModel): def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestResult: question = item["question"] bad_words = item["bad_words"] - response = process_input_with_retrieval(question, n_items_retrieved) + response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) for word in bad_words: if word in response: return TestResult(success=False, question=question, keyword=word, response=response) return TestResult(success=True, question=question, response=response) ``` -#### End-to-End Evaluation -Combine tests to assess the generation component: +**Running Tests**: ```python -@step -def e2e_evaluation() -> Tuple[float, float, float]: - failure_rate_bad_answers = run_tests(bad_answers, test_content_for_bad_words) - failure_rate_good_responses = run_tests(good_responses, test_content_contains_good_words) - return failure_rate_bad_answers, failure_rate_good_responses +def run_tests(test_data: list, test_function: Callable) -> float: + failures = 0 + total_tests = len(test_data) + for item in test_data: + test_result = test_function(item) + if not test_result.success: + logging.error(f"Test failed for question: '{test_result.question}'. Found word: '{test_result.keyword}'. Response: '{test_result.response}'") + failures += 1 + failure_rate = (failures / total_tests) * 100 + logging.info(f"Total tests: {total_tests}. Failures: {failures}. Failure rate: {failure_rate}%") + return round(failure_rate, 2) ``` #### Automated Evaluation Using Another LLM -Utilize a separate LLM to evaluate the output quality on a scale from 1 to 5 across categories like toxicity, faithfulness, helpfulness, and relevance. +- Use a separate LLM to assess the quality of generated outputs on a scale of 1 to 5 for categories like toxicity, faithfulness, helpfulness, and relevance. +- A Pydantic model is set up to validate scores. 
-**Pydantic Model for Results:** +**Pydantic Model**: ```python class LLMJudgedTestResult(BaseModel): toxicity: conint(ge=1, le=5) @@ -2661,90 +2712,100 @@ class LLMJudgedTestResult(BaseModel): relevance: conint(ge=1, le=5) ``` -**LLM Judged Test Function:** +**Example Code for LLM Judged Test**: ```python def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: - response = process_input_with_retrieval(question, n_items_retrieved) - prompt = f"Analyze the text for scores on toxicity, faithfulness, helpfulness, and relevance." + response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) + prompt = f"Analyze the following text and provide scores for toxicity, faithfulness, helpfulness, and relevance. **Text:** {response} **Context:** {context} **Output format:** {{\"toxicity\": int, \"faithfulness\": int, \"helpfulness\": int, \"relevance\": int}}" response = completion(model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}]) return LLMJudgedTestResult(**json.loads(response["choices"][0]["message"]["content"].strip())) ``` -**Running Tests:** +**Running LLM Judged Tests**: ```python def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[float, float, float, float]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) - # Accumulate scores and calculate averages - return (average_toxicity_score, average_faithfulness_score, average_helpfulness_score, average_relevance_score) + total_scores = {'toxicity': 0, 'faithfulness': 0, 'helpfulness': 0, 'relevance': 0} + total_tests = len(dataset) + + for item in dataset: + question = item["generated_questions"][0] + context = item["page_content"] + result = test_function(question, context) + total_scores['toxicity'] += result.toxicity + total_scores['faithfulness'] += result.faithfulness + total_scores['helpfulness'] += result.helpfulness + total_scores['relevance'] += result.relevance + + return tuple(round(total_scores[key] / total_tests, 3) for key in total_scores) ``` -#### Considerations for Improvement -- Implement retries for JSON output errors. -- Utilize OpenAI's JSON mode for consistent output formatting. -- Explore batch processing and increase sample sizes for more robust evaluations. -- Consider integrating frameworks like `ragas`, `trulens`, and others for enhanced evaluation capabilities. +#### Additional Notes +- The evaluation process can be improved by implementing retries for JSON outputs, using OpenAI's JSON mode, batch processing, and increasing sample size. +- Consider using frameworks like `ragas`, `trulens`, DeepEval, and UpTrain for more sophisticated evaluations. +- The evaluation of both retrieval and generation components allows tracking improvements in the RAG pipeline. -### Conclusion -This evaluation framework for the generation component of a RAG pipeline provides a structured approach to assess output quality, enabling continuous improvement and optimization tailored to specific use cases. For complete code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py). +For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the `eval_e2e.py` file. 
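One of the notes above suggests retrying when the judge returns malformed JSON. A minimal sketch of such a wrapper, assuming the `llm_judged_test_e2e` function and `LLMJudgedTestResult` model shown earlier; the retry count is an arbitrary choice:

```python
import json
import logging
from typing import Optional

from pydantic import ValidationError

def llm_judged_test_with_retries(
    question: str, context: str, max_retries: int = 3
) -> Optional[LLMJudgedTestResult]:
    """Retry the LLM-judged test when the judge output cannot be parsed."""
    for attempt in range(1, max_retries + 1):
        try:
            return llm_judged_test_e2e(question, context)
        except (json.JSONDecodeError, ValidationError) as e:
            # Covers both unparseable JSON and scores outside the 1-5 range.
            logging.warning(
                "Attempt %d/%d could not parse judge output: %s",
                attempt, max_retries, e,
            )
    return None  # the caller can skip this sample or count it as a failure
```

A loop like the one in `run_llm_judged_tests` could then skip `None` results instead of crashing mid-run.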
================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md === -### Evaluation in 65 Lines of Code +### Summary of RAG Evaluation Implementation -This section demonstrates how to evaluate a Retrieval-Augmented Generation (RAG) pipeline using 65 lines of code, building on a previous example. The complete code is available in the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The evaluation relies on functions from the earlier RAG pipeline. +This documentation outlines how to implement evaluation for a Retrieval-Augmented Generation (RAG) pipeline in 65 lines of Python code. It builds upon a previous example of a basic RAG pipeline. The full code is available in the project repository. -#### Evaluation Data -The evaluation data consists of questions and their expected answers: -```python -eval_data = [ - {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."}, - {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones..."}, - {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."}, -] -``` +#### Key Components -#### Evaluation Functions -1. **Retrieval Evaluation**: Checks if any retrieved chunks contain words from the expected answer. +1. **Evaluation Data**: A list of questions and their expected answers is defined for testing the RAG pipeline. + ```python - def evaluate_retrieval(question, expected_answer, corpus, top_n=2): - relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) - return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) + eval_data = [ + {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."}, + {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones..."}, + {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."}, + ] ``` -2. **Generation Evaluation**: Uses OpenAI's API to assess the relevance and accuracy of the generated answer. - ```python - def evaluate_generation(question, expected_answer, generated_answer): - client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) - chat_completion = client.chat.completions.create( - messages=[{"role": "system", "content": "You are an evaluation judge..."}, - {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}"}], - model="gpt-3.5-turbo" - ) - return chat_completion.choices[0].message.content.strip().lower() == "yes" - ``` +2. **Evaluation Functions**: + - **Retrieval Evaluation**: Checks if any words from the expected answer appear in the retrieved chunks. 
+ + ```python + def evaluate_retrieval(question, expected_answer, corpus, top_n=2): + relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) + return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) + ``` -#### Evaluation Process -The evaluation iterates through the `eval_data`, calculating scores for both retrieval and generation: -```python -retrieval_scores = [] -generation_scores = [] + - **Generation Evaluation**: Uses OpenAI's API to assess the relevance and accuracy of the generated answer. + + ```python + def evaluate_generation(question, expected_answer, generated_answer): + client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) + chat_completion = client.chat.completions.create( + messages=[{"role": "system", "content": "You are an evaluation judge..."}, + {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}"}], + model="gpt-3.5-turbo", + ) + return chat_completion.choices[0].message.content.strip().lower() == "yes" + ``` + +3. **Scoring**: The code iterates through the evaluation data, calculates retrieval and generation scores, and computes their accuracies. + + ```python + retrieval_scores = [evaluate_retrieval(item["question"], item["expected_answer"], corpus) for item in eval_data] + generation_scores = [evaluate_generation(item["question"], item["expected_answer"], answer_question(item["question"], corpus)) for item in eval_data] -for item in eval_data: - retrieval_scores.append(evaluate_retrieval(item["question"], item["expected_answer"], corpus)) - generated_answer = answer_question(item["question"], corpus) - generation_scores.append(evaluate_generation(item["question"], item["expected_answer"], generated_answer)) + retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores) + generation_accuracy = sum(generation_scores) / len(generation_scores) -retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores) -generation_accuracy = sum(generation_scores) / len(generation_scores) + print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") + print(f"Generation Accuracy: {generation_accuracy:.2f}") + ``` -print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") -print(f"Generation Accuracy: {generation_accuracy:.2f}") -``` +#### Results +The example demonstrates achieving 100% accuracy for both retrieval and generation components. Further sections will elaborate on more sophisticated implementations of RAG evaluation. -#### Summary -The example demonstrates how to evaluate a RAG pipeline, achieving 100% accuracy for both retrieval and generation. Future sections will provide more advanced implementations of RAG evaluation. +For the complete code, refer to the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). ================================================== @@ -2752,159 +2813,153 @@ The example demonstrates how to evaluate a RAG pipeline, achieving 100% accuracy ### Summary of RAG System Evaluation Documentation -#### Overview -This documentation outlines how to evaluate the performance of a Retrieval-Augmented Generation (RAG) system, emphasizing the separation of embedding generation and evaluation processes. +This documentation provides guidance on evaluating the performance of a Retrieval-Augmented Generation (RAG) system. 
It emphasizes the importance of separating the evaluation process from the main pipeline that generates embeddings, allowing for better management of concerns. + +#### Key Points: -#### Evaluation Pipeline -- The evaluation is structured as a separate pipeline that runs after the main pipeline, which generates and populates embeddings. This separation is beneficial for managing concerns effectively. -- Depending on the use case, evaluations can be integrated into the main pipeline to act as a gating mechanism for embedding quality. -- For development, consider using a local LLM judge for quicker iterations, then switch to a cloud LLM (e.g., Anthropic's Claude, OpenAI's GPT-3.5/4) for comprehensive evaluations. +1. **Evaluation Pipeline**: + - The evaluation is structured as a separate pipeline that runs after the main embedding generation. This separation is a best practice. + - Depending on the use case, evaluations can also be integrated into the main pipeline to serve as a gating mechanism for production readiness. -#### Automated Evaluation -- Automation can streamline the evaluation process but does not eliminate the need for human oversight. The LLM judge is costly and time-consuming, necessitating human review of results to ensure expected performance. +2. **Local vs. Cloud LLM Judge**: + - For development, consider using a local LLM judge for quicker iterations. + - Use cloud LLMs (e.g., Anthropic's Claude, OpenAI's GPT-3.5/4) for comprehensive evaluations to manage costs effectively. -#### Evaluation Frequency -- The frequency and depth of evaluations should be tailored to the specific project constraints. Balance the cost of evaluations with the need for rapid iteration. -- Quick and inexpensive tests (e.g., retrieval system tests) should be run frequently, while more costly evaluations (e.g., LLM judge) can be conducted less often. +3. **Human Review**: + - Automated evaluations save time but do not replace the need for human oversight. Results from the LLM judge require careful review to ensure the RAG system performs as expected. -#### Next Steps -- The documentation suggests adding a reranker to enhance retrieval performance without retraining embeddings. +4. **Evaluation Frequency**: + - The frequency and depth of evaluations should balance cost and the need for rapid iteration. + - Quick tests (e.g., retrieval system tests) can be run frequently, while more expensive tests (e.g., LLM judge) should be less frequent. + +5. **Next Steps**: + - The documentation suggests adding a reranker to enhance retrieval performance without retraining embeddings. + +#### Practical Implementation: -#### Practical Implementation To run the evaluation pipeline: + 1. Clone the project repository: ```bash git clone https://github.com/zenml-io/zenml-projects.git ``` + 2. Navigate to the `llm-complete-guide` directory and follow the `README.md` instructions. + 3. Execute the evaluation pipeline: ```bash python run.py --evaluation ``` -This will output results to the console, allowing inspection of progress, logs, and results in the dashboard. - -================================================== - -=== File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md === - -### Retrieval Evaluation in RAG Pipeline -The retrieval component in a RAG (Retrieval-Augmented Generation) pipeline is crucial for finding relevant documents to support the generation component. This section outlines methods to evaluate the performance of the retrieval component, focusing on the accuracy of semantic searches. 
+Results will be output to the console, and progress can be monitored via the dashboard. -#### Manual Evaluation with Handcrafted Queries -Manual evaluation involves creating specific queries to check if the retrieval component can find the relevant documents. This method, while time-consuming, helps identify edge cases. Example queries include: +This concise summary retains critical technical details while eliminating redundancy, ensuring clarity for further inquiries. -| Question | URL Ending | -|----------|------------| -| How do I get going with the Label Studio integration? | stacks-and-components/component-guide/annotators/label-studio | -| How can I write my own custom materializer? | user-guide/advanced-guide/data-management/handle-custom-data-types | -| How do I generate embeddings in a RAG pipeline with ZenML? | user-guide/llmops-guide/rag-with-zenml/embeddings-generation | -| How do I use failure hooks in my ZenML pipeline? | user-guide/advanced-guide/pipelining-features/use-failure-success-hooks | -| Can I deploy ZenML self-hosted with Helm? | deploying-zenml/zenml-self-hosted/deploy-with-helm | +================================================== -The retrieval process involves encoding the query and querying a PostgreSQL database for similar vectors. The following code implements this: +=== File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md === -```python -def query_similar_docs(question: str, url_ending: str) -> tuple: - embedded_question = get_embeddings(question) - top_similar_docs_urls = get_topn_similar_docs(embedded_question, get_db_conn(), n=5, only_urls=True) - return (question, url_ending, [url[0] for url in top_similar_docs_urls]) +### Summary of Retrieval Evaluation in RAG Pipeline -def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: - failures = sum(1 for pair in question_doc_pairs if pair["url_ending"] not in query_similar_docs(pair["question"], pair["url_ending"])[2]) - return round((failures / len(question_doc_pairs)) * 100, 2) -``` +The retrieval component of a Retrieval-Augmented Generation (RAG) pipeline is crucial for finding relevant documents based on incoming queries. This documentation outlines methods to evaluate the performance of this component, focusing on the accuracy of semantic search. -#### Automated Evaluation with Synthetic Queries -For broader evaluation, synthetic queries can be generated using an LLM. Each document chunk's text is passed to the LLM to create relevant questions. The generated questions are then used to evaluate the retrieval component. +#### Key Evaluation Methods -Example question generation code: +1. **Manual Evaluation with Handcrafted Queries**: + - Create specific queries to check if the retrieval component can retrieve known relevant documents. + - Example queries include: + - "How do I get going with the Label Studio integration?" + - "How can I write my own custom materializer?" + - The retrieval process involves encoding the query into a vector and querying a PostgreSQL database for similar vectors. 
-```python -from typing import List -from litellm import completion -from zenml import step + **Code Example**: + ```python + def query_similar_docs(question: str, url_ending: str) -> tuple: + embedded_question = get_embeddings(question) + top_similar_docs_urls = get_topn_similar_docs(embedded_question, db_conn, n=5, only_urls=True) + urls = [url[0] for url in top_similar_docs_urls] + return (question, url_ending, urls) -def generate_question(chunk: str, local: bool = False) -> str: - model = "ollama/mixtral" if local else "gpt-3.5-turbo" - response = completion(model=model, messages=[{"content": f"Generate a question about this text: `{chunk}`", "role": "user"}]) - return response.choices[0].message.content + def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: + failures = sum(1 for pair in question_doc_pairs if all(pair["url_ending"] not in url for url in query_similar_docs(pair["question"], pair["url_ending"])[2])) + return round((failures / len(question_doc_pairs)) * 100, 2) + ``` -@step -def generate_questions_from_chunks(docs_with_embeddings: List[Document], local: bool = False) -> List[Document]: - for doc in docs_with_embeddings: - doc.generated_questions = [generate_question(doc.page_content, local)] - return docs_with_embeddings -``` +2. **Automated Evaluation with Synthetic Queries**: + - Use a language model (LLM) to generate questions based on document chunks. + - The generated questions are then evaluated against the retrieval component to check if the original document URLs appear in the top results. -Once questions are generated, they can be evaluated against the retrieval component: + **Code Example**: + ```python + def generate_question(chunk: str, local: bool = False) -> str: + model = LOCAL_MODEL if local else "gpt-3.5-turbo" + response = completion(model=model, messages=[{"content": f"Generate a question about this text: `{chunk}`", "role": "user"}]) + return response.choices[0].message.content -```python -@step -def retrieval_evaluation_full(sample_size: int = 50) -> Annotated[float, "full_failure_rate_retrieval"]: - dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) - failures = sum(1 for item in dataset if item["filename"].split("/")[-1] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2]) - return round((failures / len(dataset)) * 100, 2) -``` + @step + def retrieval_evaluation_full(sample_size: int = 50) -> Annotated[float, "full_failure_rate_retrieval"]: + dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) + failures = sum(1 for item in dataset if all(item["filename"].split("/")[-1] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2])) + return round((failures / len(dataset)) * 100, 2) + ``` -#### Performance Insights -Initial tests showed a 20% failure rate with handcrafted queries and 16% with synthetic queries, indicating room for improvement. Suggested enhancements include: +#### Results and Insights +- Initial tests showed a 20% failure rate with handcrafted queries and 16% with synthetic queries, indicating room for improvement. +- Suggested improvements include: + - Generating more diverse questions. + - Using semantic similarity metrics for nuanced evaluation. + - Comparative evaluation of different retrieval techniques. + - Conducting error analysis to identify patterns in failures. 
-- **Diverse Question Generation**: Use varied prompts to generate different question types. -- **Semantic Similarity Metrics**: Implement metrics like cosine similarity for nuanced performance evaluation. -- **Comparative Evaluation**: Test different retrieval methods and models. -- **Error Analysis**: Investigate failure cases for targeted improvements. +#### Conclusion +The evaluation process for the retrieval component is vital for improving the RAG pipeline's performance. Both manual and automated methods provide insights into the effectiveness of the retrieval system, guiding iterative enhancements. Future evaluations will also focus on the generation component to ensure the overall quality of the system's outputs. -The evaluation process provides a baseline understanding of retrieval performance, guiding future enhancements. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [eval_retrieval.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py). +For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/README.md === -### Evaluation and Metrics for RAG Pipeline +### Summary of Evaluation and Metrics for RAG Pipeline -This section focuses on evaluating the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating a RAG pipeline is essential for understanding its effectiveness and identifying improvement areas. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to the subjective nature of generated text. Therefore, a holistic evaluation approach is necessary. +This documentation discusses evaluating the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating RAG pipelines is essential for performance assessment and improvement identification. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to their subjective nature. A holistic evaluation approach is necessary since a RAG pipeline encompasses more than just a model. #### Key Evaluation Areas: 1. **Retrieval Evaluation**: Assessing the relevance of retrieved documents or document chunks to the query. -2. **Generation Evaluation**: Evaluating the coherence and helpfulness of the generated text for the specific use case. +2. **Generation Evaluation**: Evaluating the coherence and helpfulness of the generated text for specific use cases. #### Evaluation Considerations: -- The evaluation criteria depend on the specific use case and acceptable error tolerance. For example, in a user-facing chatbot, consider: +- The evaluation metrics depend on the specific use case and acceptable error levels. For example, a user-facing chatbot may require: - Relevance of retrieved documents. - Coherence and helpfulness of generated answers. - - Presence of harmful language (e.g., hate speech). + - Absence of hate speech or toxic language. 
-The generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the system's final output. +The generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the entire system output. -#### Practical Guidance: -- It's advisable to establish a baseline by evaluating a raw LLM model (without RAG components) before comparing it to the RAG pipeline's performance. This helps gauge the added value of retrieval and generation components. +#### Best Practices: +- In production, establish a baseline by evaluating a raw LLM model (without RAG components) and compare it to the RAG pipeline performance to gauge the added value of retrieval and generation components. #### Code Example: -A high-level code example demonstrating the two main evaluation areas is available, followed by detailed sections on each evaluation area and practical guidance on execution and result analysis. +A high-level code example demonstrates the two main evaluation areas, with further sections providing detailed guidance on practical evaluation methods and result interpretation. -For further details, refer to the sections on [Retrieval Evaluation](retrieval.md) and [Generation Evaluation](generation.md). +For the latest documentation, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/cloud-guide/cloud-guide.md === -### Cloud Guide Summary +### ZenML Cloud Guide Summary -This section provides straightforward instructions for connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is the configuration of tools and infrastructure necessary for running pipelines. ZenML acts as a translation layer, enabling code execution across different stacks. +This section provides guidance on connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is a configuration of tools and infrastructure for running pipelines. ZenML acts as a translation layer, enabling code execution across different stacks. **Key Points:** -- The guide focuses on **registering** a stack, assuming the required resources for pipeline execution are already provisioned. -- To provision infrastructure, options include: - - Manual provisioning - - In-browser stack deployment wizard - - Stack registration wizard - - ZenML Terraform modules - -**Visual Aid:** -- An image illustrates ZenML's role in facilitating code execution across stacks. +- **Stack Registration:** This guide focuses on registering a stack, assuming the necessary resources for running pipelines are already provisioned. +- **Provisioning Infrastructure:** You can provision infrastructure manually or use: + - **In-browser stack deployment wizard** + - **Stack registration wizard** + - **ZenML Terraform modules** -This guide does not cover the provisioning process itself but emphasizes the registration of pre-provisioned stacks. +For further details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== @@ -2912,60 +2967,68 @@ This guide does not cover the provisioning process itself but emphasizes the reg ### Community & Content Overview -The ZenML community offers various ways to connect with the development team and enhance understanding of the framework. 
+ZenML offers various channels for community engagement and support, enhancing understanding of the framework. #### Slack Channel -- **[Slack channel](https://zenml.io/slack)**: Main hub for community interaction, support, and sharing projects. Many questions may already have answers here. +- **Link**: [ZenML Slack](https://zenml.io/slack) +- Main hub for community interaction, support, and sharing projects. Many questions are often answered here. #### Social Media -- **[LinkedIn](https://www.linkedin.com/company/zenml)** and **[Twitter](https://twitter.com/zenml_io)**: Follow for updates on releases, events, and MLOps. Engagement through comments and shares is encouraged. +- **LinkedIn**: [ZenML LinkedIn](https://www.linkedin.com/company/zenml) +- **Twitter**: [ZenML Twitter](https://twitter.com/zenml_io) +- Follow for updates on releases and MLOps. Engagement through comments and shares is encouraged. #### YouTube Channel -- **[YouTube channel](https://www.youtube.com/c/ZenML)**: Contains video tutorials and workshops for visual learners. +- **Link**: [ZenML YouTube](https://www.youtube.com/c/ZenML) +- Offers video tutorials and workshops for visual learners. #### Public Roadmap -- **[Public roadmap](https://zenml.io/roadmap)**: Community feedback shapes ZenML's development. Users can suggest and vote on feature ideas. +- **Link**: [ZenML Roadmap](https://zenml.io/roadmap) +- Community feedback shapes development. Users can suggest and prioritize features. #### Blog -- **[Blog](https://zenml.io/blog/)**: Articles from the team covering implementation processes, new features, and insights. +- **Link**: [ZenML Blog](https://zenml.io/blog/) +- Articles on tool implementation, new features, and insights from the team. #### Podcast -- **[Podcast](https://podcast.zenml.io/)**: Features interviews and discussions on machine learning, deep learning, and MLOps. +- **Link**: [ZenML Podcast](https://podcast.zenml.io/) +- Features discussions with industry leaders on machine learning and MLOps. #### Newsletter -- **[Newsletter](https://zenml.io/newsletter-signup)**: Subscribe for updates on open-source tooling and ZenML news. +- **Link**: [ZenML Newsletter](https://zenml.io/newsletter-signup) +- Subscribe for updates on open-source tooling and ZenML news. ================================================== === File: docs/book/reference/how-do-i.md === -# ZenML Documentation Summary +# How do I...? **Last Updated**: December 13, 2023 -## Common Questions +### Common Questions: -- **Contributing to ZenML**: Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for submitting small features or bug fixes via pull requests. For larger contributions, discuss plans on [Slack](https://zenml.io/slack/) or create an [issue](https://github.com/zenml-io/zenml/issues/new/choose). +- **Contribute to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small features/bug fixes, open a pull request. For larger contributions, consider [posting in Slack](https://zenml.io/slack/) or [creating an issue](https://github.com/zenml-io/zenml/issues/new/choose). -- **Adding Custom Components**: Start with the [general documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) on custom stack components. For specific types, such as orchestrators, refer to the dedicated section [here](../component-guide/orchestrators/custom.md). 
+- **Add Custom Components**: Start with the [general documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For specific types, refer to dedicated sections, e.g., [custom orchestrators](../component-guide/orchestrators/custom.md). -- **Mitigating Dependency Clashes**: Consult the [dedicated documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md) for strategies to handle dependency issues. +- **Mitigate Dependency Clashes**: Visit our [handling dependencies documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md). -- **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for each stack component explains deployment on popular cloud providers. +- **Deploy Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for stack components covers deployment on popular cloud providers. -- **Deploying ZenML on Internal Clusters**: See the documentation on [self-hosted ZenML deployments](../getting-started/deploying-zenml/README.md) for options. +- **Deploy ZenML on Internal Clusters**: Check the documentation on [self-hosted ZenML deployments](../getting-started/deploying-zenml/README.md). -- **Hyperparameter Tuning**: Refer to our [guide](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md) for implementation details. +- **Hyperparameter Tuning**: Refer to our [hyperparameter tuning guide](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md). -- **Resetting ZenML Client**: Use `zenml clean` to reset your client and wipe the local metadata database. This action is destructive; consult us on [Slack](https://zenml.io/slack/) if unsure. +- **Reset ZenML Client**: Use `zenml clean` to reset your client (destructive action). Contact us on [Slack](https://zenml.io/slack/) for assistance. -- **Dynamic Pipelines and Steps**: Read the [guide on composing steps and pipelines](../user-guide/starter-guide/create-an-ml-pipeline.md) and check code examples in the hyperparameter tuning guide. +- **Dynamic Pipelines and Steps**: Read about composing steps and pipelines in our [starter guide](../user-guide/starter-guide/create-an-ml-pipeline.md) and check related code examples in the hyperparameter tuning guide. -- **Using Project Templates**: Utilize [Project Templates](../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) for quick starts. The Starter template (`starter`) is recommended for basic scaffolding. +- **Use Project Templates**: Project templates help you start quickly. The `starter` template is recommended for most use cases. -- **Upgrading ZenML**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, refer to the [dedicated section](../how-to/manage-zenml-server/upgrade-zenml-server.md). +- **Upgrade ZenML Client/Server**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, see the [upgrade documentation](../how-to/manage-zenml-server/upgrade-zenml-server.md). -- **Using Specific Stack Components**: For details on specific components, consult the [component guide](../component-guide/README.md). +- **Use Specific Stack Components**: Refer to the [component guide](../component-guide/README.md) for tips on using each integration and component with ZenML. 
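The **Dynamic Pipelines and Steps** entry above points at composing steps programmatically. A compact sketch of the idea, with illustrative step bodies and values; the `id=` argument for naming repeated invocations follows the pattern shown in the hyperparameter tuning guide:

```python
from zenml import pipeline, step

@step
def train_step(learning_rate: float) -> float:
    # Placeholder "training" that just scores the hyperparameter value.
    return 1.0 - abs(learning_rate - 0.01)

@pipeline
def grid_search_pipeline() -> None:
    # Steps can be created in a loop; each invocation gets its own ID.
    for i, lr in enumerate([0.001, 0.01, 0.1]):
        train_step(learning_rate=lr, id=f"train_step_{i}")

if __name__ == "__main__":
    grid_search_pipeline()
```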
![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) @@ -2975,34 +3038,31 @@ The ZenML community offers various ways to connect with the development team and ### ZenML FAQ Summary -#### Purpose of ZenML -ZenML was developed to address challenges faced in deploying machine learning models in production, aiming to provide a simple, production-ready solution for large-scale ML pipelines. +**Overview**: ZenML was developed to address challenges in deploying machine-learning models in production, providing a simple, production-ready solution for large-scale ML pipelines. -#### ZenML vs. Orchestrators -ZenML is not merely an orchestrator like Airflow or Kubeflow; it is a framework that allows execution of ML pipelines on various orchestrators. Users can utilize standard orchestrators or create custom ones for enhanced control. +#### Key Points: -#### Tool Integration -For integration queries, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md) for instructions and sample code. The ZenML team is continuously adding new integrations, and users can suggest features via the [roadmap](https://zenml.io/roadmap) and [discussion forum](https://zenml.io/discussion). +- **Purpose**: ZenML is not just another orchestrator like Airflow or Kubeflow; it's a framework that enables running pipelines on various orchestrators while coordinating other ML system components. -#### OS Support -- **Windows**: Officially supported via WSL; limited functionality outside WSL. -- **Mac (Apple Silicon)**: Supported with the following environment variable: - ```bash - export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES - ``` - This is necessary for local server use but not required for CLI operations with a deployed server. +- **Integrations**: ZenML supports numerous tools and integrations. For details, refer to the [component guide](https://docs.zenml.io) and the [integration test code](https://github.com/zenml-io/zenml/tree/main/tests/integration/examples). Users can suggest features via the [roadmap](https://zenml.io/roadmap) and contribute to the project. -#### Custom Tool Integration -For extending ZenML with custom tools, refer to the guide on [implementing a custom stack component](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +- **Platform Support**: + - **Windows**: Officially supported via WSL. Some features may not work outside WSL. + - **Apple Silicon**: Supported with the environment variable: + ```bash + export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES + ``` + This is necessary for local server use but not for CLI operations connecting to a deployed server. + +- **Customization**: Users can extend ZenML for custom tools; a guide is available [here](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). -#### Community Contribution -To contribute, start with issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). +- **Community Contribution**: To contribute, select issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). -#### Community Engagement -Join the [Slack group](https://zenml.io/slack/) for questions and discussions with the ZenML community. 
+- **Community Engagement**: Join the [Slack group](https://zenml.io/slack/) for discussions and support. -#### Licensing -ZenML is licensed under the Apache License Version 2.0. More details can be found in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). Contributions will also fall under this license. +- **License**: ZenML is licensed under the Apache License Version 2.0. Full license details are in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). Contributions are also licensed under this agreement. + +For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== @@ -3010,28 +3070,28 @@ ZenML is licensed under the Apache License Version 2.0. More details can be foun # Environment Variables for ZenML -ZenML allows configuration through several pre-defined environment variables. Below are key variables with their default values and options: +ZenML allows control over its behavior through several pre-defined environment variables: ## Logging Verbosity -Control the logging level: +Set the logging level: ```bash export ZENML_LOGGING_VERBOSITY=INFO # Options: INFO, WARN, ERROR, CRITICAL, DEBUG ``` ## Disable Step Logs -To prevent storing step logs: +To prevent storing step logs (which may impact performance): ```bash export ZENML_DISABLE_STEP_LOGS_STORAGE=true # Set to true to disable ``` ## ZenML Repository Path -Specify the path for ZenML's repository: +Specify the repository path: ```bash export ZENML_REPOSITORY_PATH=/path/to/somewhere ``` ## Analytics Opt-Out -To opt out of analytics tracking: +To opt out of usage analytics: ```bash export ZENML_ANALYTICS_OPT_IN=false ``` @@ -3043,7 +3103,7 @@ export ZENML_DEBUG=true ``` ## Active Stack -Set the active stack by its UUID: +Set the active stack by UUID: ```bash export ZENML_ACTIVE_STACK_ID= ``` @@ -3065,31 +3125,22 @@ To disable colorful logging: ```bash export ZENML_LOGGING_COLORS_DISABLED=true ``` -This setting on the client environment also affects remote orchestrators. To enable it on remote orchestrators while disabling locally, configure as follows: -```python -docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) - -@pipeline(settings={"docker": docker_settings}) -def my_pipeline() -> None: - my_step() -``` +Note: Disabling on the client environment also affects remote orchestrators. ## ZenML Global Config Path -Set the path for the global config file: +Set the global config file path: ```bash export ZENML_CONFIG_PATH=/path/to/somewhere ``` -## Server Configuration -Refer to the ZenML Server documentation for server configuration options. - ## Client Configuration Connect the ZenML Client to a server using: ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY= ``` -This is useful for CI/CD environments or containerized setups. + +For more details on server configuration, refer to the ZenML Server documentation. ================================================== @@ -3097,11 +3148,11 @@ This is useful for CI/CD environments or containerized setups. # ZenML API Reference Summary -The ZenML server operates as a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local instances (using `zenml login --local`), the documentation is available at `http://127.0.0.1:8237/docs`. +The ZenML server operates as a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. 
For local usage (via `zenml login --local`), the documentation is available at `http://127.0.0.1:8237/docs`. -## Accessing the API Programmatically +## Accessing the API Programmatically with a Bearer Token -To access the ZenML API programmatically, follow these steps: +To use the ZenML server API programmatically, follow these steps: 1. **Create a Service Account**: ```shell @@ -3110,7 +3161,7 @@ To access the ZenML API programmatically, follow these steps: This command generates a ``. 2. **Obtain an Access Token**: - Use the `/api/v1/login` endpoint to get an access token: + Use the `/api/v1/login` endpoint: ```shell curl -X 'POST' \ '/api/v1/login' \ @@ -3128,7 +3179,7 @@ To access the ZenML API programmatically, follow these steps: ``` 3. **Make API Requests**: - Use the access token for subsequent requests: + Use the access token in subsequent commands: ```shell curl -X 'GET' \ '/api/v1/pipelines?hydrate=false&name=training' \ @@ -3136,7 +3187,7 @@ To access the ZenML API programmatically, follow these steps: -H 'Authorization: Bearer ' ``` -This summary retains all critical steps and commands necessary for programmatic access to the ZenML API. +This summary provides essential steps and commands for accessing the ZenML API programmatically, ensuring critical information is retained while maintaining conciseness. ================================================== @@ -3144,7 +3195,7 @@ This summary retains all critical steps and commands necessary for programmatic ### ZenML Python Client Overview -The ZenML Python `Client` enables programmatic interaction with ZenML resources, such as pipelines, runs, and stacks, stored in a database within your ZenML instance. For other programming environments, ZenML resources can be accessed via REST API endpoints. +The ZenML Python `Client` enables programmatic interaction with ZenML resources, such as pipelines, runs, and stacks, stored in a database within your ZenML instance. For other programming languages, resources can be accessed via REST API endpoints. ### Usage Example @@ -3154,7 +3205,6 @@ To fetch the last 10 pipeline runs for the current stack: from zenml.client import Client client = Client() - my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, @@ -3168,29 +3218,27 @@ for pipeline_run in my_runs_on_current_stack: ### Main ZenML Resources -- **Pipelines**: Tracked pipelines. -- **Pipeline Runs**: Details of executed runs. -- **Run Templates**: Templates for running pipelines. -- **Step Runs**: Steps of pipeline runs. -- **Artifacts**: Information on artifacts from runs. -- **Schedules**: Metadata for scheduled runs. -- **Builds**: Docker images for pipelines. -- **Code Repositories**: Connected git repositories. - -#### Stacks and Authentication - -- **Stack**: Registered stacks. -- **Stack Components**: Components like orchestrators and artifact stores. -- **Flavors**: Available flavors for stack components. -- **User**: Registered users. -- **Secrets**: Authentication secrets in the ZenML Secret Store. -- **Service Connectors**: Connectors for infrastructure. +1. **Pipelines**: Tracked pipelines. +2. **Pipeline Runs**: Information on executed runs. +3. **Run Templates**: Templates for running pipelines. +4. **Step Runs**: Steps within pipeline runs. +5. **Artifacts**: Artifacts generated during runs. +6. **Schedules**: Metadata for scheduled runs. +7. **Builds**: Docker images for pipelines. +8. **Code Repositories**: Connected git repositories. + +9. 
**Stacks**: Registered stacks. +10. **Stack Components**: Components like orchestrators and artifact stores. +11. **Flavors**: Available stack component flavors. +12. **User**: Registered users. +13. **Secrets**: Authentication secrets. +14. **Service Connectors**: Infrastructure connection setups. ### Client Methods #### Reading and Writing Resources -**List Methods**: Retrieve lists of resources. +- **List Methods**: Retrieve lists of resources. ```python client.list_pipeline_runs( @@ -3201,20 +3249,20 @@ client.list_pipeline_runs( ) ``` -These methods return a `Page` of resources, defaulting to 50 results. You can adjust the `size` or use the `page` argument for pagination. +Returns a `Page` of resources, defaulting to 50 results. Modify page size with `size` or fetch subsequent pages with `page`. -**Get Methods**: Fetch specific resources by ID, name, or name prefix. +- **Get Methods**: Fetch specific resources by ID, name, or prefix. ```python client.get_pipeline_run("413cfb42-a52c-4bf1-a2fd-78af2f7f0101") # By ID client.get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") # By Name ``` -**Create, Update, and Delete Methods**: Available for certain resources; check the Client SDK documentation for specifics. +- **Create, Update, Delete Methods**: Available for select resources. Refer to the Client SDK documentation for specifics. #### Active User and Stack -Access current user and stack information via: +Access current user and stack information: ```python client.active_user @@ -3223,13 +3271,14 @@ client.active_stack_model ### Resource Models -ZenML Client methods return **Response Models**, which are Pydantic Models ensuring data integrity. For example, `client.list_pipeline_runs` returns a `Page[PipelineRunResponseModel]`. +ZenML Client methods return **Response Models**, which are Pydantic Models ensuring data validation. For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. -**Request, Update, and Filter Models** are used for server API endpoints but not for Client methods. For detailed model fields, refer to the ZenML Models SDK Documentation. +**Request, Update, and Filter Models** are used for server API endpoints but not for Client methods. For details on model fields, refer to the ZenML Models SDK Documentation. ### Important Links + - [Client SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/) -- [ZenML Models SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-models/) +- [ZenML Models SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-models/#zenml.models) ================================================== @@ -3237,186 +3286,210 @@ ZenML Client methods return **Response Models**, which are Pydantic Models ensur ### ZenML Global Settings Overview -**ZenML Global Config Directory**: The global settings for ZenML are stored in a directory specific to the operating system: +The **ZenML Global Config Directory** stores global settings for ZenML installations. Its location varies by operating system: -- **Linux**: `~/.config/zenml` -- **Mac**: `~/Library/Application Support/zenml` -- **Windows**: `C:\Users\%USERNAME%\AppData\Local\zenml` +- **Linux:** `~/.config/zenml` +- **Mac:** `~/Library/Application Support/zenml` +- **Windows:** `C:\Users\%USERNAME%\AppData\Local\zenml` -The default path can be changed using the `ZENML_CONFIG_PATH` environment variable. 
To check the current config directory, use:
+You can override the default path using the `ZENML_CONFIG_PATH` environment variable. To retrieve the current config directory, use:

```shell
zenml status
python -c 'from zenml.utils.io_utils import get_global_config_directory; print(get_global_config_directory())'
```

-**Warning**: Avoid manually altering files in the global config directory. Use CLI commands for management:
+**Warning:** Avoid manually altering files in the config directory. Use CLI commands for management:
+
- `zenml analytics` - Manage analytics settings.
-- `zenml clean` - Reset to default configuration.
-- `zenml downgrade` - Match global config version to installed ZenML version.
+- `zenml clean` - Reset configuration to default.
+- `zenml downgrade` - Align the global config version with the installed ZenML version.
+
+Upon first run, ZenML initializes the config directory and creates a default stack:
+
+```
+Initializing the ZenML global configuration version to 0.13.2
+Creating default user 'default' ...
+Creating default stack for user 'default'...
+```
+
+#### Global Config Directory Structure

-**Initialization**: The first run of ZenML creates the global config directory and initializes it with a default configuration and stack.
+After initialization, the directory layout includes:

-**Global Config Directory Structure**:
```
-/home/user/.config/zenml
+/home/stefan/.config/zenml
├── config.yaml           # Global Configuration Settings
└── local_stores          # Local data storage for stack components
-    ├── <uuid>            # Local Store paths
+    ├── <uuid>            # Local Store for components
    └── default_zen_store
        └── zenml.db      # SQLite database for ZenML data
```

-**`config.yaml` Contents**:
-```yaml
-active_stack_id: ...
-analytics_opt_in: true
-store:
- database: ...
- url: ...
- username: ...
-user_id: 
-version: 0.13.2
-```
+**Key Files:**

-**Local Stores**: Contains subdirectories for local stack components, such as artifact stores.
+1. **config.yaml:** Stores global settings like client ID, database config, and active stack.
+   ```yaml
+   active_stack_id: ...
+   analytics_opt_in: true
+   store:
+     database: ...
+     url: ...
+     username: ...
+   user_id: d980f13e-05d1-4765-92d2-1dc7eb7addb7
+   version: 0.13.2
+   ```
+
+2. **local_stores:** Contains subdirectories for local stack components.
+
+3. **zenml.db:** Default SQLite database for storing stack information.
+
+#### Usage Analytics
+
+ZenML collects anonymized usage statistics to improve the tool. Users can opt out with:

-**Usage Analytics**: ZenML collects anonymized usage statistics to improve the tool. Users can opt-out with:
```bash
zenml analytics opt-out
```

-**Version Mismatch Handling**: If the global configuration version exceeds the installed version, an error occurs. To downgrade the configuration, use:
+Analytics are aggregated via [Segment](https://segment.com) and processed through a ZenML analytics server.
+
+#### Version Mismatch and Downgrading
+
+If you downgrade ZenML and encounter a version mismatch error:
+
+```
+The ZenML global configuration version (%s) is higher than the version of ZenML currently being used (%s).
+```
+
+To align versions, run:
+
```shell
zenml downgrade
```

-**Warning**: Downgrading may lead to unexpected behavior. To reset the configuration, run:
+**Warning:** Downgrading may lead to unexpected behavior. To reset the configuration, use:
+
```shell
zenml clean
-```
+```

-For further details on analytics and data privacy, users can contact ZenML support.
+This command purges the local database and reinitializes the global configuration.
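+Because the global configuration version is recorded in `config.yaml`, you can also check for a mismatch programmatically before deciding whether `zenml downgrade` is needed. A minimal sketch, assuming PyYAML is installed and the `version` key layout shown above; `get_global_config_directory` is the same helper used with `zenml status` earlier in this section:
+
+```python
+import os
+
+import yaml  # assumes PyYAML is available
+import zenml
+from zenml.utils.io_utils import get_global_config_directory
+
+# Read the version recorded in the global config.yaml.
+config_file = os.path.join(get_global_config_directory(), "config.yaml")
+with open(config_file) as f:
+    config = yaml.safe_load(f)
+
+print(f"Global config version: {config['version']}")
+print(f"Installed ZenML version: {zenml.__version__}")
+if config["version"] != zenml.__version__:
+    print("Version mismatch: running `zenml downgrade` may be needed.")
+```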
================================================== === File: docs/book/component-guide/integration-overview.md === -### Overview of ZenML Integrations +# ZenML Third-Party Integrations Overview -ZenML enhances MLOps pipelines by integrating with various tools across different categories, allowing users to streamline their ML workflows. Key integrations include orchestrators like [Airflow](orchestrators/airflow.md) and [Kubeflow](orchestrators/kubeflow.md), experiment trackers such as [MLflow Tracking](experiment-trackers/mlflow.md) and [Weights & Biases](experiment-trackers/wandb.md), and model deployment options like [Seldon Core](model-deployers/seldon.md). ZenML provides flexibility without vendor lock-in, enabling easy tool switching as requirements evolve. +ZenML provides integrations with various MLOps tools to enhance ML workflows by categorizing the MLOps stack and offering concrete implementations. This allows users to orchestrate pipelines with tools like [Airflow](orchestrators/airflow.md) and [Kubeflow](orchestrators/kubeflow.md), track experiments using [MLflow](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md), and deploy models with [Seldon Core](model-deployers/seldon.md). ZenML enables flexibility without vendor lock-in, allowing easy switching of tools as requirements evolve. -### Available Integrations - -A comprehensive list of supported integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) or in the [integrations directory on GitHub](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). - -### Installing ZenML Integrations - -To install integrations, use the command: +## Available Integrations +A comprehensive list of supported integrations can be found on the [ZenML integrations page](https://zenml.io/integrations) or in the [GitHub integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). +## Installing Integrations +To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` - -This command installs preferred versions via pip: - +This command installs the preferred versions via pip: ```bash pip install kubeflow== mlflow== seldon== ``` - -The `-y` flag auto-confirms installation prompts. For a full list of CLI commands, run `zenml integration --help`. +The `-y` flag auto-confirms installation prompts. For a complete list of CLI commands, run `zenml integration --help`. ### Using `uv` for Package Installation - -You can use [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: - +You can utilize [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: ```bash -zenml integration install --uv kubeflow mlflow seldon +zenml integration install kubeflow --uv ``` - Ensure `uv` is installed, as this is an experimental feature. -### Upgrading ZenML Integrations - +## Upgrading Integrations To upgrade integrations, use: - ```bash zenml integration upgrade mlflow pytorch -y ``` +The `-y` flag auto-confirms upgrades. If no integrations are specified, all installed integrations will be upgraded. -The `-y` flag confirms upgrades without prompts. If no integrations are specified, all installed integrations will be upgraded. - -### Community Contributions - -ZenML is open to community contributions for new integrations. 
Refer to the public [roadmap](https://zenml.io/roadmap) for prioritized tools and check the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for details on contributing. +## Community Contributions +ZenML prioritizes integrations based on community needs, visible on the [public roadmap](https://zenml.io/roadmap). Contributions are welcome; refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for details. ================================================== === File: docs/book/component-guide/component-guide.md === -### Overview of MLOps Components in ZenML +# Overview of MLOps Components in ZenML -MLOps can be overwhelming due to the multitude of tools available. ZenML categorizes these tools into **Stacks and Stack Components** to clarify their roles in MLOps pipelines. Stack components are base abstractions that standardize workflows, allowing users to implement custom components or utilize built-in integrations. +ZenML categorizes MLOps tools into distinct components to clarify their roles in the pipeline. These components are standardized abstractions that streamline workflows. Users can implement custom components or utilize built-in integrations. -#### Supported Stack Components: -| **Type** | **Description** | -|-----------------------|----------------------------------------------------------| -| [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | -| [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts created by pipelines | -| [Container Registry](./container-registries/container-registries.md) | Stores container images | -| [Step Operator](./step-operators/step-operators.md) | Executes individual steps in runtime environments | -| [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | -| [Feature Store](./feature-stores/feature-stores.md) | Manages data/features | -| [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | -| [Alerter](./alerters/alerters.md) | Sends alerts through specified channels | -| [Annotator](./annotators/annotators.md) | Labels and annotates data | -| [Data Validator](./data-validators/data-validators.md) | Validates data and models | -| [Image Builder](./image-builders/image-builders.md) | Builds container images | -| [Model Registry](./model-registries/model-registries.md) | Manages and interacts with ML models | +## Supported Stack Components -Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional as the pipeline matures. 
+| **Type of Stack Component** | **Description** | +|------------------------------|-----------------| +| [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | +| [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | +| [Container Registry](./container-registries/container-registries.md) | Stores container images | +| [Step Operator](./step-operators/step-operators.md) | Executes steps in specific environments | +| [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | +| [Feature Store](./feature-stores/feature-stores.md) | Manages data/features | +| [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | +| [Alerter](./alerters/alerters.md) | Sends alerts through channels | +| [Annotator](./annotators/annotators.md) | Labels and annotates data | +| [Data Validator](./data-validators/data-validators.md) | Validates data and models | +| [Image Builder](./image-builders/image-builders.md) | Builds container images | +| [Model Registry](./model-registries/model-registries.md) | Manages ML models | + +Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional based on MLOps maturity. + +## Custom Component Flavors -#### Custom Component Flavors -Users can create custom component flavors to tailor ZenML's behavior. For more information, refer to the [guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for component types, such as the [custom orchestrator guide](orchestrators/custom.md). +Users can create custom component flavors to tailor ZenML's behavior. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific component type guides (e.g., [custom orchestrator guide](orchestrators/custom.md)). ================================================== === File: docs/book/component-guide/README.md === -# Overview of MLOps Components and Integrations in ZenML +# Overview of ZenML MLOps Components and Integrations -ZenML categorizes MLOps tools into distinct **Stack Components**, each serving a specific function in the MLOps pipeline. This categorization helps standardize workflows and allows users to implement or integrate these components into their pipelines. Essential stack components include: +ZenML categorizes MLOps tools into stack components to simplify their integration into your workflow. Each stack component serves a specific function in the MLOps pipeline, allowing teams to standardize their processes. 
The main stack components include: | **Type of Stack Component** | **Description** | |-----------------------------|------------------| | [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | -| [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | +| [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts generated by pipelines | | [Container Registry](container-registries/container-registries.md) | Stores container images | | [Data Validator](data-validators/data-validators.md) | Validates data and models | | [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | -| [Model Deployer](model-deployers/model-deployers.md) | Online model serving platforms | +| [Model Deployer](model-deployers/model-deployers.md) | Manages online model serving | | [Step Operator](step-operators/step-operators.md) | Executes pipeline steps in specific environments | -| [Alerter](alerters/alerters.md) | Sends alerts through channels | +| [Alerter](alerters/alerters.md) | Sends alerts through designated channels | | [Image Builder](image-builders/image-builders.md) | Builds container images | | [Annotator](annotators/annotators.md) | Labels and annotates data | | [Model Registry](model-registries/model-registries.md) | Manages ML models | | [Feature Store](feature-stores/feature-stores.md) | Manages data/features | -Each ZenML pipeline requires at least an orchestrator and an artifact store, with other components being optional based on the pipeline's maturity. +Every ZenML pipeline requires at least an orchestrator and an artifact store, with other components being optional based on the pipeline's maturity. ## Custom Component Flavors -Users can create custom component **flavors** to tailor ZenML's behavior. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) and specialized guides like the [custom orchestrator guide](orchestrators/custom.md). +Users can create custom component flavors to tailor ZenML's behavior. For guidance, refer to the [general guide](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific component type guides. ## Integrations -ZenML enhances MLOps processes by integrating with various tools, allowing users to orchestrate workflows with tools like [Airflow](orchestrators/airflow.md) or [Kubeflow](orchestrators/kubeflow.md), track experiments with [MLflow Tracking](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md), and deploy models using [Seldon Core](model-deployers/seldon.md). This integration flexibility prevents vendor lock-in and enables easy tool switching. +ZenML enhances MLOps pipelines by integrating with various tools, allowing seamless transitions between local and deployed environments. Notable integrations include: + +- **Orchestrators**: [Airflow](orchestrators/airflow.md), [Kubeflow](orchestrators/kubeflow.md) +- **Experiment Trackers**: [MLflow Tracking](experiment-trackers/mlflow.md), [Weights & Biases](experiment-trackers/wandb.md) +- **Model Deployment**: [MLflow](model-deployers/mlflow.md), [Seldon Core](model-deployers/seldon.md) + +ZenML's architecture prevents vendor lock-in, enabling easy tool swaps as requirements evolve. 
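+To see which integration-backed components your current setup relies on before swapping a tool, you can inspect the active stack with the Python `Client` introduced earlier. A minimal sketch; the `components` mapping and the `flavor` attribute name on the response model are assumptions that may vary across ZenML versions:
+
+```python
+from zenml.client import Client
+
+client = Client()
+stack = client.active_stack_model
+print(f"Active stack: {stack.name}")
+
+# Walk the stack's components, grouped by component type
+# (orchestrator, artifact_store, experiment_tracker, ...).
+for component_type, components in stack.components.items():
+    for component in components:
+        print(f"  {component_type}: {component.name} (flavor: {component.flavor})")
+```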
### Available Integrations -A comprehensive list of supported integrations can be found on the [ZenML integrations page](https://zenml.io/integrations) or in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). +A comprehensive list of ZenML integrations can be found on the [integrations webpage](https://zenml.io/integrations) and in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). -### Installing ZenML Integrations +### Installing Integrations To install integrations, use: @@ -3424,105 +3497,104 @@ To install integrations, use: zenml integration install kubeflow mlflow seldon -y ``` -This command installs preferred versions via pip: - -```bash -pip install kubeflow== mlflow== seldon== -``` - -The `-y` flag confirms installations without prompts. For a complete list of CLI commands, run `zenml integration --help`. +This command installs preferred versions via pip. The `-y` flag auto-confirms installations. -### Upgrade ZenML Integrations +### Upgrade Integrations -To upgrade integrations, use: +To upgrade integrations, run: ```bash zenml integration upgrade mlflow pytorch -y ``` -If no integrations are specified, all installed integrations will be upgraded. +This command upgrades specified integrations or all if none are specified. ### Community Contributions -ZenML welcomes community contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and the [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more information. +ZenML welcomes community contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for details. ================================================== === File: docs/book/component-guide/data-validators/evidently.md === -### Summary of Evidently Data Validator Documentation +### Summary of Evidently Integration with ZenML -**Overview:** -The Evidently Data Validator, integrated with ZenML, utilizes the Evidently library to perform data quality, data drift, model drift, and model performance analyses. It generates reports and checks that can be used for automated corrective actions or visual interpretations. +**Evidently Overview** +Evidently is an open-source library for monitoring and debugging machine learning models through data profiling and visualization. It supports data quality, data drift, model drift, and model performance analysis, generating reports that can be used for automated corrective actions or visual interpretation. -**Usage Scenarios:** -Evidently is suitable for monitoring and debugging machine learning models through: -- **Data Quality Reports:** Analyze feature statistics and compare datasets. -- **Data Drift Reports:** Detect changes in feature distributions between datasets. -- **Target Drift Reports:** Explore changes in target functions or model predictions. -- **Model Performance Reports:** Evaluate model performance against datasets. +**Key Features** +- **Data Quality Reports**: Analyze feature statistics and behavior for a single dataset or compare two datasets. +- **Data Drift Reports**: Detect changes in feature distribution between two datasets with identical schemas. 
+- **Target Drift Reports**: Explore changes in target functions or model predictions. +- **Performance Reports**: Evaluate model performance using datasets with target and prediction columns. -**Deployment:** -To deploy the Evidently Data Validator: +**Deployment** +To use the Evidently Data Validator in ZenML, install the integration: ```shell zenml integration install evidently -y +``` +Register the data validator: +```shell zenml data-validator register evidently_data_validator --flavor=evidently zenml stack register custom_stack -dv evidently_data_validator ... --set ``` -**Data Profiling:** -Evidently's profiling functions require a `pandas.DataFrame` or two datasets. It generates a `Report` object without needing a model. The data must include `target` and `prediction` columns for certain reports. +**Usage** +Evidently profiling functions accept `pandas.DataFrame` datasets and generate reports. Key usage methods include: +1. **Standard Report Step**: Recommended for ease of use. +2. **Custom Step Implementation**: Offers flexibility in pipeline steps. +3. **Direct Library Use**: Full control over Evidently features. -**Using Evidently in ZenML Pipelines:** -1. **Standard Report Step:** - ```python - from zenml.integrations.evidently.steps import evidently_report_step - - text_data_report = evidently_report_step.with_options( - parameters=dict( - column_mapping=EvidentlyColumnMapping(...), - metrics=[EvidentlyMetricConfig.metric("DataQualityPreset"), ...], - download_nltk_data=True, - ), - ) - ``` +**Example of Evidently Report Step**: +```python +from zenml.integrations.evidently.steps import evidently_report_step -2. **Pipeline Example:** - ```python - @pipeline(enable_cache=False) - def text_data_report_test_pipeline(): - data = data_loader() - reference_dataset, comparison_dataset = data_splitter(data) - report, _ = text_data_report(reference_dataset=reference_dataset, comparison_dataset=comparison_dataset) - ``` +text_data_report = evidently_report_step.with_options( + parameters=dict( + column_mapping=EvidentlyColumnMapping( + target="Rating", + numerical_features=["Age", "Positive_Feedback_Count"], + categorical_features=["Division_Name", "Department_Name", "Class_Name"], + text_features=["Review_Text", "Title"], + ), + metrics=[ + EvidentlyMetricConfig.metric("DataQualityPreset"), + EvidentlyMetricConfig.metric("TextOverviewPreset", column_name="Review_Text"), + ], + download_nltk_data=True, + ), +) +``` -**Data Validation:** -Evidently can also run automated data validation tests: +**Data Validation** +Evidently can also run automated data validation tests. Similar to profiling, it can be integrated via: +1. **Standard Test Step**: Easiest method. +2. **Custom Implementation**: More flexibility. +3. **Direct Library Use**: Full control. 
+ +**Example of Evidently Test Step**: ```python from zenml.integrations.evidently.steps import evidently_test_step text_data_test = evidently_test_step.with_options( parameters=dict( - column_mapping=EvidentlyColumnMapping(...), - tests=[EvidentlyTestConfig.test("DataQualityTestPreset"), ...], + column_mapping=EvidentlyColumnMapping( + target="Rating", + numerical_features=["Age", "Positive_Feedback_Count"], + categorical_features=["Division_Name", "Department_Name", "Class_Name"], + text_features=["Review_Text", "Title"], + ), + tests=[EvidentlyTestConfig.test("DataQualityTestPreset")], download_nltk_data=True, ), ) ``` -**Custom Steps:** -You can create custom steps for data profiling and validation: +**Direct Use of Evidently** +You can call Evidently directly in your custom steps: ```python -@step -def data_profiling(reference_dataset: pd.DataFrame, comparison_dataset: pd.DataFrame): - data_validator = EvidentlyDataValidator.get_active_data_validator() - report = data_validator.data_profiling(...) - return report.json(), HTMLString(report.show(mode="inline").data) -``` +from evidently.report import Report -**Directly Using Evidently:** -You can also directly utilize the Evidently library in custom steps: -```python @step def data_profiler(dataset: pd.DataFrame): report = Report(metrics=[metric_preset.DataQualityPreset()]) @@ -3530,8 +3602,8 @@ def data_profiler(dataset: pd.DataFrame): return report.json(), HTMLString(report.show(mode="inline").data) ``` -**Visualizing Reports:** -Reports can be visualized in the ZenML dashboard or Jupyter notebooks using: +**Visualizing Reports** +Evidently reports can be visualized in the ZenML dashboard or Jupyter notebooks using: ```python def visualize_results(pipeline_name: str, step_name: str): pipeline = Client().get_pipeline(pipeline=pipeline_name) @@ -3539,7 +3611,7 @@ def visualize_results(pipeline_name: str, step_name: str): evidently_step.visualize() ``` -This documentation provides a comprehensive guide on how to implement and utilize the Evidently Data Validator within ZenML for effective data and model monitoring. For detailed configurations and metrics, refer to the official Evidently documentation. +For further details, refer to the [Evidently documentation](https://docs.evidentlyai.com/reference/all-metrics) and [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-evidently). ================================================== @@ -3548,37 +3620,38 @@ This documentation provides a comprehensive guide on how to implement and utiliz ### Summary of Deepchecks Integration with ZenML **Overview** -Deepchecks, an open-source library, is integrated with ZenML to validate data and models in pipelines. It supports various tests for data integrity, drift, and model performance, applicable to both tabular and computer vision data. +Deepchecks is an open-source library integrated with ZenML to perform data integrity, data drift, model drift, and model performance tests on datasets and models in ZenML pipelines. The results can trigger automated corrective actions or be visualized for evaluation. -**Supported Formats** -- **Tabular Data**: `pandas.DataFrame`, models as `sklearn.base.ClassifierMixin`. -- **Computer Vision**: `torch.utils.data.dataloader.DataLoader`, models as `torch.nn.Module`. +**Use Cases** +Deepchecks is suitable for: +- **Data Integrity Checks**: Identify issues like missing values and conflicting labels. 
+- **Data Drift Checks**: Detect data skew by comparing target and reference datasets. +- **Model Performance Checks**: Evaluate model performance using confusion matrices and error analysis. +- **Multi-Model Performance Reports**: Summarize performance scores across multiple models. -**Key Features** -- **Data Integrity Checks**: Identify issues like missing values and mixed data types. -- **Data Drift Checks**: Compare datasets to detect feature and label drift. -- **Model Performance Checks**: Evaluate model performance using metrics like confusion matrices. -- **Multi-Model Performance Reports**: Summarize performance across multiple models. +**Supported Formats** +- **Tabular Data**: `pandas.DataFrame` for datasets and `sklearn.base.ClassifierMixin` for models. +- **Computer Vision Data**: `torch.utils.data.dataloader.DataLoader` for datasets and `torch.nn.Module` for models. **Installation** -To use Deepchecks with ZenML, install the integration: +To install the Deepchecks integration: ```shell zenml integration install deepchecks -y ``` **Registering the Data Validator** -Register the Deepchecks Data Validator in your stack: +Add the Deepchecks Data Validator to your stack: ```shell zenml data-validator register deepchecks_data_validator --flavor=deepchecks zenml stack register custom_stack -dv deepchecks_data_validator ... --set ``` -**Using Deepchecks in Pipelines** -Deepchecks validation checks are categorized based on input requirements: +**Usage in Pipelines** +Deepchecks validation checks are categorized into four types based on input requirements: 1. **Data Integrity Checks**: Single dataset input. 2. **Data Drift Checks**: Two datasets (target and reference). -3. **Model Validation Checks**: Single dataset and model input. -4. **Model Drift Checks**: Two datasets and a model input. +3. **Model Validation Checks**: Single dataset and a model. +4. **Model Drift Checks**: Two datasets and a model. **Standard Steps** ZenML provides four standard steps for Deepchecks: @@ -3592,25 +3665,30 @@ Example of a data integrity check step: from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step data_validator = deepchecks_data_integrity_check_step.with_options( - parameters=dict(dataset_kwargs=dict(label="target", cat_features=[])), + parameters=dict(dataset_kwargs=dict(label="target", cat_features=[])) ) ``` **Customizing Checks** -You can specify a custom list of checks and additional keyword arguments: +You can specify custom checks and parameters: ```python deepchecks_data_integrity_check_step( - check_list=[DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES], - dataset_kwargs=dict(label='class', cat_features=['country', 'state']), + check_list=[ + DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES, + DeepchecksDataIntegrityCheck.TABULAR_DATA_DUPLICATES, + ], + dataset=... 
)
```

**Docker Configuration for Remote Orchestrators**
-For remote orchestrators, extend the Docker image to include required binaries:
+For remote orchestrators, extend the Docker image with the system libraries that OpenCV requires (`ffmpeg`, `libsm6`, `libxext6`):
```shell
ARG ZENML_VERSION=0.20.0
FROM zenmldocker/zenml:${ZENML_VERSION} AS base
-RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
+
+RUN apt-get update
+RUN apt-get install ffmpeg libsm6 libxext6 -y
```

**Visualizing Results**
@@ -3625,7 +3703,8 @@ def visualize_results(pipeline_name: str, step_name: str) -> None:
    step.visualize()
```

-This concise summary captures the essential technical details and functionalities of the Deepchecks integration with ZenML, ensuring that critical information is preserved while eliminating redundancy.
+**Conclusion**
+Deepchecks provides a robust framework for validating data and models within ZenML pipelines, enabling users to maintain data integrity and model performance with minimal configuration. For further details, refer to the official Deepchecks documentation.

==================================================

=== File: docs/book/component-guide/data-validators/data-validators.md ===

# Data Validators

-Data Validators are essential tools for ensuring data quality and monitoring model performance throughout the machine learning lifecycle. They help detect issues such as data integrity, data drift, and model drift at various stages, including data ingestion, model training, and inference.
+Data Validators are essential tools in machine learning (ML) for ensuring data quality and monitoring model performance throughout the ML project lifecycle. They help prevent issues that can arise from poor data quality, which can lead to unreliable model outputs.

-## Key Concepts
-- **Data Validators**: Optional components in ZenML stacks that generate data profiles and quality reports, stored in the [Artifact Store](../artifact-stores/artifact-stores.md).
-- **Data-Centric AI**: Incorporating Data Validators supports data-centric practices in ML workflows.
+## Key Features
+- **Data Profiling**: Analyzes data characteristics.
+- **Data Integrity Testing**: Ensures data consistency and accuracy.
+- **Drift Detection**: Monitors for data and model drift during various pipeline stages (data ingestion, model training, evaluation, inference).
+- **Visualization**: Generates profiles and performance reports for analysis and corrective action.

-## Use Cases
-1. **Early Development**: Log data quality and model performance.
-2. **Regular Data Ingestion**: Conduct integrity checks to prevent downstream issues.
-3. **Continuous Training**: Compare new training data and model performance against references.
-4. **Batch and Online Inference**: Analyze data drift and detect discrepancies between training and serving data.
+## Usage Scenarios
+- **Early Development**: Log data quality and model performance.
+- **Regular Data Ingestion**: Conduct integrity checks to catch issues early.
+- **Continuous Training**: Compare new data and model performance against references.
+- **Batch and Online Inference**: Analyze data drift and detect discrepancies between training and serving data (see the sketch below).
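+The sketch below shows where such a check typically slots into a pipeline: a loader step produces a reference dataset alongside the latest batch, and a validation step compares the two before anything downstream runs. It is a toy stand-in for the validator-specific standard steps covered in the surrounding sections; the step names and the 10% threshold are illustrative only:
+
+```python
+from typing import Tuple
+
+import pandas as pd
+from typing_extensions import Annotated
+
+from zenml import pipeline, step
+
+
+@step
+def batch_loader() -> Tuple[
+    Annotated[pd.DataFrame, "reference"], Annotated[pd.DataFrame, "current"]
+]:
+    """Load the reference dataset and the latest batch to compare against it."""
+    reference = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0]})
+    current = pd.DataFrame({"feature": [1.1, 2.3, 2.9, 4.2]})
+    return reference, current
+
+
+@step
+def drift_check(reference: pd.DataFrame, current: pd.DataFrame) -> bool:
+    """Toy drift check: flag drift when the feature mean shifts by more than 10%."""
+    shift = abs(current["feature"].mean() - reference["feature"].mean())
+    return bool(shift > 0.1 * abs(reference["feature"].mean()))
+
+
+@pipeline
+def ingestion_pipeline():
+    reference, current = batch_loader()
+    drift_check(reference, current)
+
+
+if __name__ == "__main__":
+    ingestion_pipeline()
+```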
## Data Validator Flavors +Different Data Validators are available, each with unique features: + | Data Validator | Features | Data Types | Model Types | Notes | Flavor/Integration | -|----------------|----------|-------------|-------------|-------|--------------------| -| [Deepchecks](deepchecks.md) | Data quality, drift, performance | `pandas.DataFrame`, `torch.utils.data.dataloader.DataLoader` | `sklearn.base.ClassifierMixin`, `torch.nn.Module` | Validation tests for pipelines | `deepchecks` | -| [Evidently](evidently.md) | Data quality, drift, performance | `pandas.DataFrame` | N/A | Generates reports and visualizations | `evidently` | -| [Great Expectations](great-expectations.md) | Profiling, quality | `pandas.DataFrame` | N/A | Data testing and documentation | `great_expectations` | -| [Whylogs/WhyLabs](whylogs.md) | Data drift | `pandas.DataFrame` | N/A | Generates data profiles | `whylogs` | +|----------------|----------|------------|-------------|-------|--------------------| +| [Deepchecks](deepchecks.md) | Data quality, drift, performance | Tabular: `pandas.DataFrame`, CV: `torch.utils.data.DataLoader` | Tabular: `sklearn.base.ClassifierMixin`, CV: `torch.nn.Module` | Validation tests for pipelines | `deepchecks` | +| [Evidently](evidently.md) | Data quality, drift, performance | Tabular: `pandas.DataFrame` | N/A | Generates reports and visualizations | `evidently` | +| [Great Expectations](great-expectations.md) | Profiling, quality | Tabular: `pandas.DataFrame` | N/A | Data testing and documentation | `great_expectations` | +| [Whylogs/WhyLabs](whylogs.md) | Data drift | Tabular: `pandas.DataFrame` | N/A | Generates profiles for WhyLabs | `whylogs` | To view available Data Validator flavors, use: ```shell zenml data-validator flavor list ``` -## Usage Steps +## Implementation Steps 1. **Configuration**: Add a Data Validator to your ZenML stack. -2. **Integration**: Use built-in validation steps in pipelines or directly call libraries in custom steps. -3. **Artifact Management**: Access and visualize validation artifacts in subsequent steps or fetch them later for processing. +2. **Integration**: Utilize built-in validation steps in your pipelines or use libraries directly in custom steps. +3. **Artifact Management**: Access and visualize validation artifacts in subsequent pipeline steps or retrieve them later. -Refer to specific [Data Validator flavor documentation](data-validators.md#data-validator-flavors) for detailed usage instructions. +For detailed usage instructions, refer to the specific Data Validator documentation. ================================================== === File: docs/book/component-guide/data-validators/whylogs.md === -### Summary of Whylogs/WhyLabs Profiling Documentation - -**Overview**: The whylogs/WhyLabs integration with ZenML enables the collection and visualization of data profiles, which are statistical summaries of your data. These profiles can be used for automated corrective actions and visual analysis. +### Summary of Whylogs/WhyLabs Profiling with ZenML -**Use Cases**: -- **Data Quality**: Validate inputs in models or pipelines. -- **Data Drift**: Detect shifts in model input features. -- **Model Drift**: Identify training-serving skew and performance degradation. +**Overview**: The whylogs/WhyLabs integration in ZenML allows for the collection and visualization of data statistics through data profiling. 
This integration uses the open-source library whylogs to create statistical summaries, known as whylogs profiles, which can be used for data validation, drift detection, and model performance monitoring. -**Deployment**: -1. Install the integration: - ```shell - zenml integration install whylogs -y - ``` -2. Register the Data Validator: - ```shell - zenml data-validator register whylogs_data_validator --flavor=whylogs - zenml stack register custom_stack -dv whylogs_data_validator ... --set - ``` -3. For WhyLabs logging, create a secret for authentication: - ```shell - zenml secret create whylabs_secret \ - --whylabs_default_org_id= \ - --whylabs_api_key= - zenml data-validator register whylogs_data_validator --flavor=whylogs \ - --authentication_secret=whylabs_secret - ``` +#### Key Features: +- **Data Quality Validation**: Ensures model inputs meet quality standards. +- **Data Drift Detection**: Identifies changes in model input features over time. +- **Model Drift Detection**: Monitors training-serving skew and performance degradation. -**Pipeline Integration**: -- Enable WhyLabs logging in custom steps: - ```python - @step(settings={"data_validator": WhylogsDataValidatorSettings(enable_whylabs=True, dataset_id="model-1")}) - def data_loader() -> Tuple[Annotated[pd.DataFrame, "data"], Annotated[DatasetProfileView, "profile"]]: - X, y = datasets.load_diabetes(return_X_y=True, as_frame=True) - df = pd.merge(X, y, left_index=True, right_index=True) - profile = why.log(pandas=df).profile().view() - return df, profile - ``` +#### Installation: +To use the whylogs Data Validator, install the integration: +```shell +zenml integration install whylogs -y +``` -**Using Whylogs**: -- Three methods to utilize whylogs: - 1. **Standard Step**: Use `WhylogsProfilerStep` for ease of use. - 2. **Data Validator Methods**: Call methods in custom steps for flexibility. - 3. **Direct Library Use**: Leverage the whylogs library directly for full control. +#### Basic Setup: +Register the whylogs Data Validator: +```shell +zenml data-validator register whylogs_data_validator --flavor=whylogs +zenml stack register custom_stack -dv whylogs_data_validator ... 
--set +``` -**Example of Standard Step**: -```python -from zenml.integrations.whylogs.steps import get_whylogs_profiler_step +For WhyLabs logging capabilities, create a ZenML Secret for authentication: +```shell +zenml secret create whylabs_secret \ + --whylabs_default_org_id= \ + --whylabs_api_key= -train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2") +zenml data-validator register whylogs_data_validator --flavor=whylogs \ + --authentication_secret=whylabs_secret ``` -**Data Validator Implementation**: +#### Custom Pipeline Steps: +To enable WhyLabs logging in custom steps, set `upload_to_whylabs` to `True`: ```python -@step(settings={"data_validator": WhylogsDataValidatorSettings(enable_whylabs=True, dataset_id="")}) -def data_profiler(dataset: pd.DataFrame) -> DatasetProfileView: - data_validator = WhylogsDataValidator.get_active_data_validator() - profile = data_validator.data_profiling(dataset) - data_validator.upload_profile_view(profile) - return profile +@step( + settings={ + "data_validator": WhylogsDataValidatorSettings( + enable_whylabs=True, dataset_id="model-1" + ) + } +) +def data_loader() -> Tuple[Annotated[pd.DataFrame, "data"], Annotated[DatasetProfileView, "profile"]]: + X, y = datasets.load_diabetes(return_X_y=True, as_frame=True) + df = pd.merge(X, y, left_index=True, right_index=True) + profile = why.log(pandas=df).profile().view() + return df, profile ``` -**Visualizing Profiles**: -- View profiles in the ZenML dashboard or use Jupyter notebooks: +#### Using Whylogs: +1. **Standard Step**: Use `WhylogsProfilerStep` for basic profiling. + ```python + from zenml.integrations.whylogs.steps import get_whylogs_profiler_step + train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2") + ``` + +2. **Custom Data Validator**: Call methods from `WhylogsDataValidator` directly. + ```python + data_validator = WhylogsDataValidator.get_active_data_validator() + profile = data_validator.data_profiling(dataset) + ``` + +3. **Direct Library Use**: Utilize whylogs directly in custom steps. + ```python + results = why.log(dataset) + profile = results.profile() + ``` + +#### Visualizing Profiles: +Profiles can be visualized in the ZenML dashboard or in Jupyter notebooks using: ```python def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None) -> None: pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") @@ -3740,33 +3829,26 @@ def visualize_statistics(step_name: str, reference_step_name: Optional[str] = No whylogs_step.visualize() ``` -This summary captures the essential technical details and key points from the documentation, enabling another LLM to answer questions effectively. +### Conclusion: +The whylogs integration in ZenML provides a robust framework for data profiling, enabling users to monitor data quality, detect drift, and visualize statistics effectively. For detailed usage, refer to the official [whylogs documentation](https://whylogs.readthedocs.io/en/latest/index.html). ================================================== === File: docs/book/component-guide/data-validators/great-expectations.md === -### Great Expectations Integration with ZenML +### Summary of Great Expectations Integration with ZenML -**Overview**: Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to implement data validation in pipelines using the Great Expectations Data Validator. 
+**Overview**: Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to run data validation in pipelines using `pandas.DataFrame`. -#### Key Features: -- **Data Profiling**: Automatically generates validation rules (Expectations) from dataset properties. -- **Data Quality**: Validates datasets against predefined or inferred Expectations. +**Key Features**: +- **Data Profiling**: Automatically generates validation rules (Expectations) from input datasets. +- **Data Quality**: Runs predefined or inferred validation rules against datasets. - **Data Docs**: Generates human-readable documentation of validation rules and results. -#### When to Use: -Utilize the Great Expectations Data Validator when needing automated data validation features for `pandas.DataFrame` datasets in ZenML pipelines. - -#### Installation: -To install the Great Expectations integration: -```shell -zenml integration install great_expectations -y -``` - -#### Deployment Options: -1. **Let ZenML Manage Configuration**: ZenML initializes and manages Great Expectations configuration, storing Expectation Suites and Validation Results in the ZenML Artifact Store. +**Deployment Options**: +1. **ZenML Managed Configuration**: ZenML initializes and manages Great Expectations configuration. Expectation Suites and Validation Results are stored in the ZenML Artifact Store. ```shell + zenml integration install great_expectations -y zenml data-validator register ge_data_validator --flavor=great_expectations zenml stack register custom_stack -dv ge_data_validator ... --set ``` @@ -3781,31 +3863,35 @@ zenml integration install great_expectations -y zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml ``` -#### Advanced Configuration: -- `configure_zenml_stores`: Automatically updates Great Expectations configuration to use ZenML Artifact Store. -- `configure_local_docs`: Sets up a local Data Docs site for visualization. +**Advanced Configuration**: +- `configure_zenml_stores`: Automatically updates configuration to use ZenML Artifact Store. +- `configure_local_docs`: Configures a local Data Docs site for visualization. -#### Usage in Pipelines: +**Usage in Pipelines**: - **Data Profiler Step**: Automatically generates an Expectation Suite. ```python from zenml.integrations.great_expectations.steps import great_expectations_profiler_step ge_profiler_step = great_expectations_profiler_step.with_options( - parameters={"expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df"} + parameters={ + "expectation_suite_name": "steel_plates_suite", + "data_asset_name": "steel_plates_train_df", + } ) ``` - -- **Data Validator Step**: Validates a dataset against an existing Expectation Suite. +- **Data Validator Step**: Validates datasets against an Expectation Suite. 
```python from zenml.integrations.great_expectations.steps import great_expectations_validator_step ge_validator_step = great_expectations_validator_step.with_options( - parameters={"expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df"} + parameters={ + "expectation_suite_name": "steel_plates_suite", + "data_asset_name": "steel_plates_train_df", + } ) ``` -#### Direct Use of Great Expectations: -You can directly interact with Great Expectations in custom steps, using the ZenML-managed Data Context: +**Direct Usage of Great Expectations**: Users can directly interact with the Great Expectations library while leveraging ZenML's serialization and versioning features. ```python import great_expectations as ge from zenml.integrations.great_expectations.data_validators import GreatExpectationsDataValidator @@ -3813,15 +3899,14 @@ from zenml.integrations.great_expectations.data_validators import GreatExpectati @step def create_custom_expectation_suite() -> ExpectationSuite: context = GreatExpectationsDataValidator.get_data_context() - suite = context.create_expectation_suite(expectation_suite_name="custom_suite") - # Add expectations and save + suite = context.create_expectation_suite("custom_suite") + # Add expectations... context.save_expectation_suite(suite) context.build_data_docs() return suite ``` -#### Visualization: -Results can be visualized in the ZenML dashboard or using the `artifact.visualize()` method in Jupyter notebooks: +**Visualization**: Results can be visualized in the ZenML dashboard or via Jupyter notebooks using the `artifact.visualize()` method. ```python from zenml.client import Client @@ -3832,93 +3917,93 @@ def visualize_results(pipeline_name: str, step_name: str) -> None: validation_step.visualize() ``` -This integration provides a robust framework for ensuring data quality and maintaining documentation within data pipelines. +This summary encapsulates the essential technical details and usage instructions for integrating Great Expectations with ZenML, enabling effective data quality checks in pipelines. ================================================== === File: docs/book/component-guide/data-validators/custom.md === -### Developing a Custom Data Validator in ZenML +### Custom Data Validator Development in ZenML -#### Overview -To create a custom Data Validator in ZenML, it's recommended to first review the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for Data Validators is in progress, and extensions are not currently recommended. You can choose from existing Data Validator flavors or implement your own, but be prepared for potential refactoring when the base abstraction is released. - -#### Steps to Build a Custom Data Validator -1. **Create a Class**: Inherit from the `BaseDataValidator` class and override necessary abstract methods based on your chosen library/service. -2. **Configuration Class**: If configuration is needed, inherit from `BaseDataValidatorConfig`. -3. **Combine Classes**: Inherit from `BaseDataValidatorFlavor` to integrate both classes. -4. **Provide Standard Steps**: Optionally, include standard steps for easy integration into pipelines. +**Overview**: ZenML allows for the creation of custom Data Validators to integrate various data logging and validation libraries. 
However, the base abstraction for Data Validators is currently in progress, and extending them is not recommended until updates are complete. -#### Registration -Register your custom Data Validator flavor using the CLI with dot notation: +**Steps to Create a Custom Data Validator**: +1. **Class Inheritance**: Create a class that inherits from `BaseDataValidator` and override necessary abstract methods based on the library/service you want to integrate. +2. **Configuration Class**: If configuration is needed, create a class that inherits from `BaseDataValidatorConfig`. +3. **Combine Classes**: Inherit from `BaseDataValidatorFlavor` to combine the validator and configuration classes. +4. **Pipeline Integration**: Optionally, provide standard steps for easy integration into pipelines. +**Registration**: Register the custom Data Validator flavor using the CLI with the following command: ```shell zenml data-validator flavor register ``` - -For example, if your flavor class is in `flavors/my_flavor.py`: - +For example: ```shell zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor ``` -Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - -#### Verification -After registration, verify the new flavor is available: - +**Best Practices**: Initialize ZenML at the root of your repository to ensure proper resolution of the flavor class. Use: ```shell zenml data-validator flavor list ``` +to verify the registration. -#### Important Notes -- The **CustomDataValidatorFlavor** is used during flavor creation via CLI. -- The **CustomDataValidatorConfig** is utilized during stack component registration to validate user inputs. -- The **CustomDataValidator** is invoked when the component is in use, allowing separation of flavor configuration from implementation. +**Key Classes**: +- **CustomDataValidatorFlavor**: Used upon creation of the custom flavor. +- **CustomDataValidatorConfig**: Validates user-provided values during stack component registration. +- **CustomDataValidator**: Engaged during the actual use of the component, allowing separation of configuration from implementation. -This design enables registration of flavors and components even if their dependencies are not installed locally, provided the flavor and config classes are in a different module/path from the actual validator. +This structure enables registration and component usage without requiring all dependencies to be installed locally. ================================================== === File: docs/book/component-guide/step-operators/sagemaker.md === -### Amazon SageMaker Step Operator Overview +# Summary of SageMaker Step Operator Documentation -Amazon SageMaker provides compute instances for training jobs and a UI for model management. ZenML's SageMaker step operator enables the execution of individual pipeline steps on SageMaker instances. +## Overview +Amazon SageMaker provides specialized compute instances for training jobs and a UI for model management. ZenML's SageMaker step operator enables the execution of individual pipeline steps on SageMaker compute instances. -#### When to Use -Use the SageMaker step operator when: -- Your pipeline requires additional compute resources not provided by your orchestrator. +## When to Use +Use the SageMaker step operator if: +- Your pipeline steps require resources (CPU, GPU, memory) not provided by your orchestrator. - You have access to SageMaker. -#### Deployment Requirements -1. 
**IAM Role**: Create a role in the IAM console with at least `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. +## Deployment Requirements +1. **IAM Role**: Create a role in the IAM console with `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. [Setup Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-create-execution-role). 2. **ZenML AWS Integration**: Install using: ```shell zenml integration install aws ``` -3. **Docker**: Ensure Docker is installed and running. -4. **AWS Container Registry**: Set up as part of your stack. -5. **Remote Artifact Store**: Required for reading/writing artifacts. -6. **Instance Type**: Choose an instance type for execution. Refer to [available instance types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). -7. **Optional**: Create an experiment to group SageMaker runs. +3. **Docker**: Must be installed and running. +4. **AWS Container Registry**: Required for your stack. [Setup Guide](../container-registries/aws.md#how-to-deploy-it). +5. **Remote Artifact Store**: Needed for reading/writing step artifacts. +6. **Instance Type**: Choose an instance type from the [available types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). +7. **Optional Experiment**: Group SageMaker runs. [Creation Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-create.html). -#### Authentication Methods -1. **Service Connector** (Recommended): - - Register a service connector and connect it to the step operator: - ```shell - zenml service-connector register --type aws -i - zenml step-operator register --flavor=sagemaker --role= --instance_type= - zenml step-operator connect --connector - zenml stack register -s ... --set - ``` +## Authentication Methods +### 1. Service Connector (Recommended) +Register an AWS Service Connector and connect it to your SageMaker step operator: +```shell +zenml service-connector register --type aws -i +zenml step-operator register --flavor=sagemaker --role= --instance_type= +zenml step-operator connect --connector +zenml stack register -s ... --set +``` -2. **Implicit Authentication**: - - For local orchestrators, ZenML will use the `default` profile in your AWS configuration. - - For remote orchestrators, ensure the environment can authenticate to AWS and assume the specified IAM role. +### 2. Implicit Authentication +- **Local Orchestrator**: Uses the `default` profile in `~/.aws/config`. +- **Remote Orchestrator**: Must authenticate to AWS and assume the specified IAM role. -#### Using the Step Operator -To execute steps in SageMaker, specify the step operator in the `@step` decorator: +Example for implicit authentication: +```shell +zenml step-operator register --flavor=sagemaker --role= --instance_type= +zenml stack register -s ... --set +python run.py # Authenticates with `default` profile +``` + +## Using the SageMaker Step Operator +To execute a step in SageMaker, specify the step operator in the `@step` decorator: ```python from zenml import step @@ -3927,15 +4012,11 @@ def trainer(...) -> ...: """Train a model.""" ``` -ZenML builds a Docker image `/zenml:` for running steps in SageMaker. - -#### Additional Configuration -Additional settings can be specified using `SagemakerStepOperatorSettings`. 
Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for configurable attributes. - -#### Enabling CUDA for GPU -To run steps on GPU, follow the instructions for enabling CUDA, which is essential for full acceleration. +## Additional Configuration +You can customize the SageMaker step operator with `SagemakerStepOperatorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for settings. -For more details, consult the [ZenML documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.step_operators.sagemaker_step_operator.SagemakerStepOperator). +## Enabling CUDA for GPU +For GPU usage, follow the [GPU training instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration. ================================================== @@ -3943,45 +4024,44 @@ For more details, consult the [ZenML documentation](https://sdkdocs.zenml.io/lat ### Kubernetes Step Operator Overview -ZenML's Kubernetes step operator enables the execution of individual steps in Kubernetes pods, ideal for pipelines requiring additional computing resources not provided by the orchestrator. +ZenML's Kubernetes step operator enables the execution of individual steps in Kubernetes pods, particularly useful when pipeline steps require additional computing resources not available from the orchestrator. #### When to Use -- When pipeline steps need more CPU, GPU, or memory than the orchestrator can provide. -- When a Kubernetes cluster is accessible. +- Steps require extra CPU, GPU, or memory resources. +- Access to a Kubernetes cluster is available. #### Deployment Requirements -1. **Kubernetes Cluster**: Must be deployed (refer to the cloud guide for deployment options). -2. **ZenML Kubernetes Integration**: Install with: +1. **Kubernetes Cluster**: Must be deployed using a cloud provider or custom infrastructure. +2. **ZenML Kubernetes Integration**: Install via: ```shell zenml integration install kubernetes ``` 3. **Docker or Remote Image Builder**: Required for building images. -4. **Remote Artifact Store**: Necessary for artifact read/write access. +4. **Remote Artifact Store**: Necessary for reading/writing artifacts. -**Recommendation**: Set up a Service Connector for connecting the Kubernetes step operator to the cluster, especially for cloud-managed clusters (AWS, GCP, Azure). +#### Recommended Setup +- Set up a **Service Connector** for connecting to the Kubernetes cluster, especially for cloud-managed clusters (AWS, GCP, Azure). -#### Registering and Using the Step Operator -You can register the step operator in two ways: - -1. **Using a Service Connector**: +#### Registering the Step Operator +1. **Using Service Connector**: ```shell zenml step-operator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml step-operator connect --connector ``` -2. **Using `kubectl`**: +2. 
**Using `kubectl` Client**: ```shell zenml step-operator register --flavor=kubernetes --kubernetes_context= ``` -Update the active stack to include the step operator: +#### Updating the Active Stack ```shell zenml stack update -s ``` #### Defining Steps -To execute a step in Kubernetes, specify the step operator in the `@step` decorator: +Specify the step operator in the `@step` decorator: ```python from zenml import step @@ -3990,16 +4070,22 @@ def trainer(...) -> ...: """Train a model.""" ``` -ZenML builds Docker images containing your code for execution in Kubernetes. - #### Interacting with Pods -For debugging, you can interact with pods using their labels: +Use `kubectl` for debugging. Pods are labeled with: +- `run`: ZenML run name +- `pipeline`: ZenML pipeline name + +To delete pods related to a specific pipeline: ```shell kubectl delete pod -n zenml -l pipeline=kubernetes_example_pipeline ``` #### Additional Configuration -Customize the Kubernetes step operator using `KubernetesStepOperatorSettings`: +Use `KubernetesStepOperatorSettings` for advanced configurations: +- **Pod Settings**: Node selectors, labels, affinity, tolerations, image pull secrets. +- **Service Account**: Specify the service account for pods. + +Example Configuration: ```python from zenml.integrations.kubernetes.flavors import KubernetesStepOperatorSettings @@ -4008,7 +4094,7 @@ kubernetes_settings = KubernetesStepOperatorSettings( "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": { "requests": {"cpu": "2", "memory": "4Gi"}, - "limits": {"cpu": "4", "memory": "8Gi"} + "limits": {"cpu": "4", "memory": "8Gi"}, }, "service_account_name": "zenml-pipeline-runner" } @@ -4019,49 +4105,53 @@ def my_kubernetes_step(): ... ``` -Refer to the SDK docs for a complete list of attributes and detailed configuration options. - -#### Enabling CUDA for GPU +#### GPU Configuration To run steps on GPU, follow specific instructions to enable CUDA for full acceleration. +For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.flavors.kubernetes_step_operator_flavor.KubernetesStepOperatorSettings) and the [Kubernetes step operator documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.step_operators.kubernetes_step_operator.KubernetesStepOperator). + ================================================== === File: docs/book/component-guide/step-operators/modal.md === ### Modal Step Operator Overview -**Modal** is a cloud infrastructure platform that provides specialized compute instances for running code, particularly efficient for building Docker images and provisioning hardware. The **ZenML Modal step operator** allows submission of individual steps to Modal compute instances. +**Modal** is a cloud platform designed for efficient code execution, particularly for tasks involving Docker image building and hardware provisioning. The **ZenML Modal step operator** allows users to run individual steps on Modal compute instances. #### When to Use + +Utilize the Modal step operator when you need: - Fast execution for resource-intensive steps (CPU, GPU, memory). -- Specific hardware requirements (GPU type, CPU count, memory). +- Precise hardware specifications for each step. - Access to Modal. #### Deployment Steps -1. **Sign Up**: Create a Modal account. + +1. **Sign Up**: Create a Modal account [here](https://modal.com/signup). 2. 
**Install CLI**: Run: ```shell pip install modal modal setup ``` - -#### Usage Requirements -- Install ZenML Modal integration: - ```shell - zenml integration install modal - ``` -- Ensure Docker is installed and running. -- Set up a cloud artifact store and a cloud container registry compatible with ZenML. +3. **Requirements**: + - ZenML `modal` integration: + ```shell + zenml integration install modal + ``` + - Docker installed and running. + - A cloud artifact store and a cloud container registry in your stack. #### Registering the Step Operator -Register the step operator and update your stack: + +To register the step operator, use: ```shell zenml step-operator register --flavor=modal zenml stack update -s ... ``` -#### Executing Steps -Use the step operator in the `@step` decorator: +#### Using the Step Operator + +Specify the step operator in the `@step` decorator: ```python from zenml import step @@ -4069,10 +4159,11 @@ from zenml import step def trainer(...) -> ...: """Train a model.""" ``` -ZenML builds a Docker image for execution in Modal. +ZenML will create a Docker image for execution on Modal. #### Additional Configuration -Specify hardware requirements using `ResourceSettings`: + +Define hardware requirements using `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml.integrations.modal.flavors import ModalStepOperatorSettings @@ -4090,167 +4181,162 @@ resource_settings = ResourceSettings(cpu=2, memory="32GB") def my_modal_step(): ... ``` -- The `cpu` parameter in `ResourceSettings` is a soft minimum limit. -- Example cost for 2 CPUs and 32GB memory is approximately $1.03/hour. +- The `cpu` parameter is a soft minimum limit; actual usage may exceed this. +- Example cost calculation: 2 CPUs and 32GB memory would cost approximately $1.03/hour. This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu). -#### Notes -- Region and cloud provider settings are available for Modal Enterprise and Team plans. -- Use looser settings to prevent execution failures; Modal provides detailed error messages for troubleshooting. For more on region selection, see the [Modal docs](https://modal.com/docs/guide/region-selection). +**Note**: Some settings (region, cloud provider) are exclusive to Modal Enterprise and Team plans. It's advisable to use broader settings to prevent execution failures, with detailed error messages provided by Modal for troubleshooting. For more on region selection, see the [Modal docs](https://modal.com/docs/guide/region-selection). ================================================== === File: docs/book/component-guide/step-operators/spark-kubernetes.md === -### Summary of Spark Step Operators Documentation +### Summary of Spark Step Operators in ZenML -The `spark` integration in ZenML includes two main step operators: - -1. **SparkStepOperator**: A base class for Spark-related step operators. -2. **KubernetesSparkStepOperator**: A subclass that launches ZenML steps as Spark applications on Kubernetes. +#### Overview +The `spark` integration in ZenML provides two key step operators for executing tasks on Spark: +1. **SparkStepOperator**: Base class for all Spark-related step operators. +2. **KubernetesSparkStepOperator**: Launches ZenML steps as Spark applications on a Kubernetes cluster. 
#### SparkStepOperator Configuration +The configuration for `SparkStepOperator` includes: +- **master**: URL for the Spark cluster (supports Kubernetes, Mesos, YARN). +- **deploy_mode**: Can be 'cluster' (default) or 'client', determining where the driver node runs. +- **submit_kwargs**: JSON string for additional Spark parameters. -The `SparkStepOperatorConfig` class defines key configuration parameters: - -- `master`: The master URL for the Spark cluster (supports Kubernetes, Mesos, YARN). -- `deploy_mode`: Can be 'cluster' (default) or 'client', indicating where the driver node runs. -- `submit_kwargs`: Optional JSON string for additional Spark parameters. +**Code Example:** +```python +class SparkStepOperatorConfig(BaseStepOperatorConfig): + master: str + deploy_mode: str = "cluster" + submit_kwargs: Optional[Dict[str, Any]] = None +``` -**Key Methods:** -- `_resource_configuration`: Configures Spark resources. -- `_backend_configuration`: Configures backend settings for cluster managers. -- `_io_configuration`: Configures input/output sources. +#### Implementation +The `SparkStepOperator` includes methods for configuring resources, backends, I/O, and launching Spark jobs: +- `_resource_configuration`: Maps ZenML resource settings to Spark. +- `_backend_configuration`: Configures Spark for specific cluster managers. +- `_io_configuration`: Sets up input/output sources. - `_additional_configuration`: Appends user-defined parameters. -- `_launch_spark_job`: Executes a Spark job using `spark-submit`. +- `_launch_spark_job`: Executes the Spark job using `spark-submit`. -**Warning**: The `_io_configuration` method is effective only with `S3ArtifactStore` requiring authentication. +**Code Example:** +```python +class SparkStepOperator(BaseStepOperator): + def launch(self, info: "StepRunInfo", entrypoint_command: List[str]) -> None: + """Launches the step on Spark.""" +``` #### KubernetesSparkStepOperator +This operator extends `SparkStepOperator` for Kubernetes, adding: +- **namespace**: Kubernetes namespace for driver and executor pods. +- **service_account**: Service account for Spark components. -This operator extends `SparkStepOperator` and includes additional configuration parameters: - -- `namespace`: Kubernetes namespace for driver and executor pods. -- `service_account`: Service account for Spark components. - -**Backend Configuration**: The `_backend_configuration` method is tailored for Kubernetes, adjusting Spark settings accordingly. - -#### Usage Scenarios - -Use the Spark step operator when: -- Handling large datasets. -- Designing steps that benefit from distributed computing. - -#### Deployment Steps - -To deploy `KubernetesSparkStepOperator`, follow these steps: +**Code Example:** +```python +class KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig): + namespace: Optional[str] = None + service_account: Optional[str] = None +``` -1. **Remote ZenML Server**: Refer to the deployment guide. -2. **Kubernetes Cluster**: Set up using various cloud providers or custom infrastructure. For AWS, follow the Spark EKS Setup Guide. +The `_backend_configuration` method is tailored for Kubernetes, building and pushing Docker images. -**EKS Setup Guide**: -- Create IAM roles for EKS cluster and EC2 nodes. -- Create an EKS cluster and note the cluster name and API server endpoint. -- Add a node group with recommended instance types. +#### Usage +Use the Spark step operator for large data processing and distributed computing. 
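Because `_resource_configuration` maps ZenML's generic `ResourceSettings` onto Spark resource options, resource requests can be declared directly on a step. A minimal sketch (the operator name is a placeholder for whatever was registered):

```python
from zenml import step
from zenml.config import ResourceSettings

@step(
    step_operator="spark_step_operator",  # placeholder: a registered Spark step operator
    settings={"resources": ResourceSettings(cpu_count=4, memory="8GB")},
)
def spark_transform(input_uri: str) -> str:
    """A data-heavy step whose CPU/memory requests are translated into Spark options."""
    return input_uri
```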
To deploy `KubernetesSparkStepOperator`, set up: +- A remote ZenML server. +- A Kubernetes cluster (e.g., AWS EKS). -**Docker Image for Spark**: -- Use Spark's Docker images or build your own with the `docker-image-tool`. -- Download required packages (`hadoop-aws`, `aws-java-sdk-bundle`) and build the image. +**EKS Setup Steps:** +1. Create IAM roles for EKS. +2. Set up the EKS cluster. +3. Create a Docker image for Spark drivers and executors using the `docker-image-tool`. -**RBAC Configuration**: Create a `rbac.yaml` file for Kubernetes access and apply it using `kubectl`. +**RBAC Configuration:** +Create a `rbac.yaml` file for Kubernetes permissions and apply it. -#### Using the KubernetesSparkStepOperator +**Code Example:** +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: spark-namespace +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: spark-service-account + namespace: spark-namespace +``` -To use the operator: -- Install the ZenML `spark` integration. -- Ensure Docker and a remote artifact store are set up. -- Register the step operator and stack: +#### Registering the Step Operator +To use the `KubernetesSparkStepOperator`, install the Spark integration and register the operator: ```bash +zenml integration install spark zenml step-operator register spark_step_operator \ - --flavor=spark-kubernetes \ - --master=k8s://$EKS_API_SERVER_ENDPOINT \ - --namespace= \ - --service_account= - -zenml stack register spark_stack \ - -o default \ - -s spark_step_operator \ - -a spark_artifact_store \ - -c spark_container_registry \ - -i local_builder \ - --set + --flavor=spark-kubernetes \ + --master=k8s://$EKS_API_SERVER_ENDPOINT \ + --namespace= \ + --service_account= ``` -**Defining Steps**: -Use the `@step` decorator to define steps: +#### Running Steps +Define steps using the `@step` decorator: ```python -from zenml import step - @step(step_operator=) def step_on_spark(...) -> ...: ... ``` -After execution, verify Spark driver pods with: - -```bash -kubectl get pods -n $KUBERNETES_NAMESPACE -``` - -**Dynamic Operator Usage**: Use the ZenML Client to dynamically reference the active stack's step operator. - #### Additional Configuration +For more configurations, refer to the `SparkStepOperatorSettings` documentation. -For more configuration options, refer to `SparkStepOperatorSettings` and the SDK documentation for available attributes. +This summary encapsulates the essential details and code snippets necessary for understanding and utilizing the Spark step operators in ZenML. ================================================== === File: docs/book/component-guide/step-operators/azureml.md === -### AzureML Step Operator Overview +### Summary: Executing Individual Steps in AzureML with ZenML -AzureML provides specialized compute instances for training jobs and a UI for model management. ZenML's AzureML step operator allows submission of individual pipeline steps to AzureML compute instances. +**Overview**: ZenML integrates with AzureML to run training jobs on specialized compute instances. The AzureML step operator allows submission of individual pipeline steps to AzureML. #### When to Use AzureML Step Operator -- When pipeline steps require computing resources not available from your orchestrator. -- If you have access to AzureML; for other cloud providers, consider SageMaker or Vertex step operators. +- If pipeline steps require compute resources not provided by your orchestrator. 
+- If you have access to AzureML (for other cloud providers, consider SageMaker or Vertex). #### Deployment Steps -1. **Create Azure Workspace**: Set up a Machine Learning workspace on Azure, including a container registry and storage account. -2. **(Optional) Create Compute Instance/Cluster**: Use Azure Machine Learning Studio to create a compute instance or cluster. If omitted, the operator will use serverless compute or provision a new target. -3. **(Optional) Create Service Principal**: For authentication via a service connector. +1. Create an Azure Machine Learning workspace, including a container registry and storage account. +2. (Optional) Create a compute instance or cluster in AzureML. +3. (Optional) Set up a Service Principal for authentication if using a service connector. -#### Usage Requirements +#### Requirements - Install ZenML Azure integration: - ```shell - zenml integration install azure - ``` + ```shell + zenml integration install azure + ``` - Ensure Docker is installed and running. - Set up an Azure container registry and artifact store. +- Create an AzureML workspace. #### Authentication Methods -1. **Service Connector** (Recommended): - - Register a service connector with permissions to manage AzureML jobs. - ```shell - zenml service-connector register --type azure -i - zenml step-operator register \ - --flavor=azureml \ - --subscription_id= \ - --resource_group= \ - --workspace_name= - zenml step-operator connect --connector - zenml stack register -s ... --set - ``` +1. **Service Connector** (recommended): + - Register a service connector and connect it to the AzureML step operator. + ```shell + zenml service-connector register --type azure -i + zenml step-operator register --flavor=azureml --subscription_id= --resource_group= --workspace_name= + zenml step-operator connect --connector + zenml stack register -s ... --set + ``` 2. **Implicit Authentication**: - - For local orchestrators, ZenML uses Azure CLI configuration. - - For remote orchestrators, ensure they can authenticate to Azure. + - For local orchestrators, ZenML uses Azure CLI for authentication. + - For remote orchestrators, ensure they can authenticate to Azure. -#### Executing Steps -To execute steps in AzureML, specify the step operator in the `@step` decorator: +#### Using the AzureML Step Operator +To execute a step in AzureML, specify the step operator in the `@step` decorator: ```python from zenml import step @@ -4258,13 +4344,13 @@ from zenml import step def trainer(...) -> ...: """Train a model.""" ``` -ZenML builds a Docker image for the pipeline. +ZenML builds a Docker image for the step execution. -#### Additional Configuration +#### Configuration Use `AzureMLStepOperatorSettings` to configure compute resources: - **Serverless Compute**: Default mode. -- **Compute Instance**: Requires `compute_name`, can create or use existing instance. -- **Compute Cluster**: Requires `compute_name`, can create or use existing cluster. +- **Compute Instance**: Requires `compute_name`. +- **Compute Cluster**: Also requires `compute_name`. Example configuration for a compute instance: ```python @@ -4283,44 +4369,47 @@ def my_azureml_step(): ``` #### GPU Support -To run steps on GPU, follow specific instructions to enable CUDA for full acceleration. +To run steps on GPU, follow additional customization instructions to enable CUDA for full acceleration. -For further details, refer to the AzureML documentation and ZenML SDK documentation. 
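For the compute-instance mode, a hedged sketch of the settings in use — the import path is assumed to mirror the Kubernetes example elsewhere in this guide, `mode` and `compute_name` follow the prose above, and the compute and operator names are hypothetical:

```python
from zenml import step
from zenml.integrations.azure.flavors import AzureMLStepOperatorSettings

# Hedged sketch: run the step on an existing (or newly created) compute instance.
azureml_settings = AzureMLStepOperatorSettings(
    mode="compute-instance",
    compute_name="my-gpu-instance",  # hypothetical instance name
)

@step(
    step_operator="azureml",  # placeholder: name of the registered step operator
    settings={"step_operator": azureml_settings},
)
def my_azureml_step() -> None:
    """Runs on the configured AzureML compute target."""
```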
+For more details, refer to the [AzureMLStepOperatorSettings SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.flavors.azureml_step_operator_flavor.AzureMLStepOperatorSettings). ================================================== === File: docs/book/component-guide/step-operators/step-operators.md === -# Step Operators +### Step Operators Overview -The step operator allows for executing individual pipeline steps in specialized environments optimized for specific workloads, such as those requiring GPUs or distributed processing frameworks like [Spark](https://spark.apache.org/). +**Purpose**: The step operator allows execution of individual pipeline steps in specialized environments optimized for specific workloads, such as those requiring GPUs or distributed processing frameworks like Spark. -## Comparison to Orchestrators -The orchestrator is a mandatory component that executes all pipeline steps in order and provides scheduling features. In contrast, the step operator is used to execute individual steps in separate environments when the orchestrator's environment is insufficient. +**Comparison to Orchestrators**: While the orchestrator is essential for executing all pipeline steps in order and managing scheduling, the step operator is used for executing specific steps in environments that the orchestrator cannot provide. -## When to Use It -Use a step operator when pipeline steps require resources unavailable in the orchestrator's runtime environment. For example, if a step needs a GPU for training a computer vision model, but the orchestrator (like [Kubeflow](../orchestrators/kubeflow.md)) lacks GPU nodes, a step operator such as [SageMaker](sagemaker.md), [Vertex](vertex.md), or [AzureML](azureml.md) should be used. +### When to Use Step Operators + +Use a step operator when pipeline steps need resources unavailable in the orchestrator's runtime environment. For example, if a step requires GPU resources for training a computer vision model but the orchestrator runs on a non-GPU Kubernetes cluster, a step operator like SageMaker, Vertex, or AzureML should be used. 
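To make the pattern concrete, a hedged sketch of offloading a single GPU-hungry step while the rest of the pipeline stays on the orchestrator (operator name and resource figures are illustrative):

```python
from zenml import pipeline, step
from zenml.config import ResourceSettings

@step
def load_data() -> str:
    # Lightweight step: runs in the orchestrator's own environment.
    return "gs://example-bucket/images"

@step(
    step_operator="vertex",  # placeholder: any registered step operator
    settings={"resources": ResourceSettings(gpu_count=1, memory="16GB")},
)
def train_vision_model(data_uri: str) -> str:
    """GPU-heavy training step executed by the step operator."""
    return f"model trained on {data_uri}"

@pipeline
def cv_pipeline():
    train_vision_model(load_data())
```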
+ +### Available Step Operator Flavors -## Step Operator Flavors ZenML provides the following step operators for major cloud providers: -| Step Operator | Flavor | Integration | Notes | -|---------------|-------------|-------------|----------------------------------------| -| [AzureML](azureml.md) | `azureml` | `azure` | Executes steps using AzureML | -| [Kubernetes](kubernetes.md) | `kubernetes` | `kubernetes` | Executes steps using Kubernetes Pods | -| [Modal](modal.md) | `modal` | `modal` | Executes steps using Modal | -| [SageMaker](sagemaker.md) | `sagemaker` | `aws` | Executes steps using SageMaker | -| [Spark](spark-kubernetes.md) | `spark` | `spark` | Executes steps in a distributed manner using Spark on Kubernetes | -| [Vertex](vertex.md) | `vertex` | `gcp` | Executes steps using Vertex AI | -| [Custom Implementation](custom.md) | _custom_ | | Allows for custom step operator implementations | +| Step Operator | Flavor | Integration | Notes | +|----------------|--------------|-------------|-------------------------------------| +| AzureML | `azureml` | `azure` | Executes steps using AzureML | +| Kubernetes | `kubernetes`| `kubernetes`| Executes steps using Kubernetes Pods| +| Modal | `modal` | `modal` | Executes steps using Modal | +| SageMaker | `sagemaker` | `aws` | Executes steps using SageMaker | +| Spark | `spark` | `spark` | Executes steps in a distributed manner using Spark on Kubernetes | +| Vertex | `vertex` | `gcp` | Executes steps using Vertex AI | +| Custom | _custom_ | | Allows for custom implementation | To view available flavors, use: + ```shell zenml step-operator flavor list ``` -## How to Use It -You do not need to interact directly with ZenML step operators in your code. If the desired step operator is part of your active [ZenML stack](../../user-guide/production-guide/understand-stacks.md), specify it in the `@step` decorator: +### How to Use Step Operators + +You do not need to interact directly with ZenML step operators in your code. Simply specify the desired step operator in the `@step` decorator: ```python from zenml import step @@ -4330,65 +4419,68 @@ def my_step(...) -> ...: ... ``` -### Specifying Per-Step Resources -For additional hardware resources, specify them in your steps as detailed [here](../../how-to/pipeline-development/training-with-gpus/README.md). +#### Specifying Resources -### Enabling CUDA for GPU-Backed Hardware -To run steps on a GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration. +For steps requiring additional hardware resources, specify them accordingly. For GPU usage, follow the instructions to enable CUDA for full acceleration. + +### Important Notes + +- Ensure to follow specific guidelines for GPU-backed hardware to enable CUDA. +- Refer to the documentation for details on specifying resources and configurations. ================================================== === File: docs/book/component-guide/step-operators/vertex.md === -### Summary of Executing Steps in Vertex AI +### Summary: Executing Steps in Vertex AI with ZenML -**Overview**: -Google Cloud's Vertex AI provides specialized compute instances for training jobs and a UI for model management. ZenML's Vertex AI step operator allows submission of individual pipeline steps to Vertex AI compute instances. +**Overview**: Vertex AI provides specialized compute instances for training jobs and a UI for model management. 
ZenML's Vertex AI step operator allows submission of individual steps to Vertex AI. -**When to Use**: +#### When to Use - Use the Vertex step operator if: - - Your pipeline steps require additional computing resources (CPU, GPU, memory). + - Your pipeline steps require resources (CPU, GPU, memory) not available from your orchestrator. - You have access to Vertex AI. -**Deployment Steps**: -1. Enable Vertex AI. -2. Create a service account with permissions for Vertex AI jobs (`roles/aiplatform.admin`) and container registry access (`roles/storage.admin`). +#### Deployment Steps +1. **Enable Vertex AI**: [Enable here](https://console.cloud.google.com/vertex-ai). +2. **Create a Service Account**: Assign `roles/aiplatform.admin` and `roles/storage.admin`. -**Usage Requirements**: -- Install ZenML's GCP integration: - ```shell - zenml integration install gcp - ``` +#### Usage Requirements +- Install ZenML GCP integration: + ```shell + zenml integration install gcp + ``` - Ensure Docker is installed and running. - Enable Vertex AI and have a service account file. - Set up a GCR container registry. -- Optionally, specify a machine type (default is `n1-standard-4`). -- Configure a remote artifact store for artifact management. +- Optionally specify a machine type (default: `n1-standard-4`). +- Configure a remote artifact store for shared access to step artifacts. -**Authentication Options**: -1. **Using `gcloud` CLI**: - ```shell - gcloud auth login - zenml step-operator register --flavor=vertex --project= --region= - ``` -2. **Using a service account key file**: - ```shell - zenml step-operator register --flavor=vertex --project= --region= --service_account_path= - ``` -3. **Using a GCP Service Connector** (recommended): - ```shell - zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@ - zenml step-operator register --flavor=vertex --region= - zenml step-operator connect --connector - ``` +#### Authentication Methods +1. **Using gcloud CLI**: + ```shell + gcloud auth login + zenml step-operator register --flavor=vertex --project= --region= + ``` +2. **Service Account Key File**: + ```shell + zenml step-operator register --flavor=vertex --project= --region= --service_account_path= + ``` +3. **GCP Service Connector (recommended)**: + ```shell + zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@ + zenml step-operator register --flavor=vertex --region= + zenml step-operator connect --connector + ``` -**Updating the Active Stack**: +#### Update Active Stack +Add the step operator to your active stack: ```shell zenml stack update -s ``` -**Defining Steps**: -Use the registered step operator in your pipeline: +#### Define Steps +Use the registered step operator in the `@step` decorator: ```python from zenml import step @@ -4397,16 +4489,14 @@ def trainer(...) -> ...: """Train a model.""" ``` -**Docker Image**: ZenML builds a Docker image named `/zenml:` for running steps in Vertex AI. 
- -**Additional Configuration**: +#### Additional Configuration Specify service account, network, and reserved IP ranges: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account= --network= --reserved_ip_ranges= ``` -**VertexStepOperatorSettings**: -Customize settings for the step operator: +#### Custom Settings +Pass `VertexStepOperatorSettings` for further customization: ```python from zenml import step from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings @@ -4422,21 +4512,22 @@ def trainer(...) -> ...: """Train a model.""" ``` -**CUDA for GPU**: Follow specific instructions to enable CUDA for GPU acceleration when using the step operator. +#### GPU Configuration +For GPU usage, follow the instructions to enable CUDA for full acceleration. -For further details, refer to the SDK documentation for available attributes and configuration options. +For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.flavors.vertex_step_operator_flavor.VertexStepOperatorSettings). ================================================== === File: docs/book/component-guide/step-operators/custom.md === -### Summary: Developing a Custom Step Operator in ZenML +# Custom Step Operator Development in ZenML -#### Overview -To develop a custom step operator in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +## Overview +This documentation provides guidance on developing a custom step operator in ZenML. It is recommended to first review the general guide on writing custom component flavors in ZenML for foundational knowledge. -#### Base Abstraction -The `BaseStepOperator` is the abstract class for running pipeline steps in separate environments. It provides a basic interface: +## Base Abstraction +The `BaseStepOperator` is the abstract class for implementing step operators, which run pipeline steps in separate environments. Key components include: ```python from abc import ABC, abstractmethod @@ -4453,59 +4544,46 @@ class BaseStepOperator(StackComponent, ABC): @abstractmethod def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None: - """Execute a step synchronously.""" - -class BaseStepOperatorFlavor(Flavor): - """Base class for all ZenML step operator flavors.""" - - @property - @abstractmethod - def name(self) -> str: - """Returns the name of the flavor.""" - - @property - def type(self) -> StackComponentType: - return StackComponentType.STEP_OPERATOR + """Executes a step with the given command.""" +``` - @property - def config_class(self) -> Type[BaseStepOperatorConfig]: - return BaseStepOperatorConfig +## Creating a Custom Step Operator +To create a custom flavor for a step operator, follow these steps: - @property - @abstractmethod - def implementation_class(self) -> Type[BaseStepOperator]: - """Returns the implementation class for this flavor.""" -``` +1. **Subclass `BaseStepOperator`**: Implement the `launch` method, which prepares the execution environment and runs the entrypoint command. +2. **Handle Resources**: If applicable, manage resources defined in `info.config.resource_settings`. +3. **Configuration Class**: Create a class inheriting from `BaseStepOperatorConfig` for any custom parameters. +4. **Flavor Class**: Inherit from `BaseStepOperatorFlavor`, providing a name for the flavor. 
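Putting the four steps together, a minimal skeleton — class and flavor names are illustrative, and the base classes are assumed importable from `zenml.step_operators` as in the abstraction above:

```python
from typing import List, Type

from zenml.config.step_run_info import StepRunInfo
from zenml.step_operators import (
    BaseStepOperator,
    BaseStepOperatorConfig,
    BaseStepOperatorFlavor,
)

class MyStepOperatorConfig(BaseStepOperatorConfig):
    """Custom parameters; `queue_name` is a hypothetical example."""
    queue_name: str = "default"

class MyStepOperator(BaseStepOperator):
    def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None:
        # Prepare the target environment and synchronously run the entrypoint
        # command there; handling of info.config.resource_settings goes here.
        ...

class MyStepOperatorFlavor(BaseStepOperatorFlavor):
    @property
    def name(self) -> str:
        return "my_step_operator"

    @property
    def config_class(self) -> Type[BaseStepOperatorConfig]:
        return MyStepOperatorConfig

    @property
    def implementation_class(self) -> Type[BaseStepOperator]:
        return MyStepOperator
```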
-#### Steps to Create a Custom Step Operator -1. **Subclass `BaseStepOperator`**: Implement the `launch` method to prepare the execution environment (e.g., Docker) and run the `entrypoint_command`. -2. **Handle Resources**: Manage resources specified in `info.config.resource_settings`. -3. **Configuration Class**: Create a class inheriting from `BaseStepOperatorConfig` for custom parameters. -4. **Flavor Class**: Inherit from `BaseStepOperatorFlavor`, providing a name for your flavor. +### Registering the Flavor +Register your custom flavor using the CLI: -**Registering the Flavor**: ```shell zenml step-operator flavor register ``` -Example: + +For example: + ```shell zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor ``` -#### Important Notes -- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. -- After registration, list available flavors: +### Listing Available Flavors +After registration, verify the new flavor: + ```shell zenml step-operator flavor list ``` -#### Interaction in ZenML Workflow -- `CustomStepOperatorFlavor` is used during flavor creation. -- `CustomStepOperatorConfig` validates user inputs during registration. -- `CustomStepOperator` is utilized when the component is in use, allowing separation of configuration from implementation. +## Important Considerations +- The `CustomStepOperatorFlavor` is used during flavor creation. +- The `CustomStepOperatorConfig` is utilized for validating user input during registration. +- The `CustomStepOperator` is engaged when the component is executed, allowing for separation of configuration and implementation. + +## Enabling GPU Support +To run steps on GPU, follow the instructions to enable CUDA for full acceleration. This involves additional settings customization. -#### Enabling GPU Support -For GPU execution, follow the instructions to enable CUDA for full acceleration. Refer to [GPU training documentation](../../how-to/pipeline-development/training-with-gpus/README.md). +For further details, refer to the complete SDK documentation and relevant guides. ================================================== @@ -4513,19 +4591,19 @@ For GPU execution, follow the instructions to enable CUDA for full acceleration. ### Slack Alerter Documentation Summary -**Overview**: The `SlackAlerter` allows sending messages and questions to a specified Slack channel from ZenML pipelines. +The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. #### Setup Instructions 1. **Create a Slack App**: - Set up a Slack workspace and create a Slack App with a bot. - - Assign the following permissions in the `OAuth & Permissions` tab: + - Grant the following permissions in the `OAuth & Permissions` tab: - `chat:write` - `channels:read` - `channels:history` - - Invite the app to your desired channel using `/invite` or through channel settings. + - Invite the app to your desired channel using `/invite` or channel settings. -2. **Registering Slack Alerter in ZenML**: +2. **Register Slack Alerter in ZenML**: - Install the Slack integration: ```shell zenml integration install slack -y @@ -4538,169 +4616,165 @@ For GPU execution, follow the instructions to enable CUDA for full acceleration. --slack_token={{slack_token.oauth_token}} \ --slack_channel_id= ``` - - Find `` in channel details (starts with `C....`) and `` in app settings. - -3. **Add Alerter to Stack**: - ```shell - zenml stack register ... -al slack_alerter --set - ``` #### Usage -1. 
**Direct Methods**: - - Use `post()` and `ask()` methods: - ```python - from zenml import pipeline, step - from zenml.client import Client +- **Direct Methods**: + Use `post()` and `ask()` methods from the active alerter: + ```python + from zenml import pipeline, step + from zenml.client import Client - @step - def post_statement() -> None: - Client().active_stack.alerter.post("Step finished!") + @step + def post_statement() -> None: + Client().active_stack.alerter.post("Step finished!") - @step - def ask_question() -> bool: - return Client().active_stack.alerter.ask("Should I continue?") + @step + def ask_question() -> bool: + return Client().active_stack.alerter.ask("Should I continue?") - @pipeline(enable_cache=False) - def my_pipeline(): - post_statement() - ask_question() + @pipeline(enable_cache=False) + def my_pipeline(): + post_statement() + ask_question() - if __name__ == "__main__": - my_pipeline() - ``` + if __name__ == "__main__": + my_pipeline() + ``` -2. **Custom Settings**: - - Use different channel IDs: - ```python - @step(settings={"alerter": {"slack_channel_id": }}) - def post_statement() -> None: - Client().active_stack.alerter.post("Posting to another channel!") - ``` +- **Custom Settings**: + You can specify a different channel ID during runtime: + ```python + @step(settings={"alerter": {"slack_channel_id": }}) + def post_statement() -> None: + Client().active_stack.alerter.post("Posting to another channel!") + ``` -3. **Using `SlackAlerterParameters` and `SlackAlerterPayload`**: - - Customize messages: - ```python - from zenml import pipeline, step, get_step_context - from zenml.client import Client - from zenml.integrations.slack.alerters.slack_alerter import ( - SlackAlerterParameters, SlackAlerterPayload - ) - - @step - def post_statement() -> None: - params = SlackAlerterParameters( - payload=SlackAlerterPayload( - pipeline_name=get_step_context().pipeline.name, - step_name=get_step_context().step_run.name, - stack_name=Client().active_stack.name, - ), - ) - Client().active_stack.alerter.post( - message="This is a message with additional information.", - params=params - ) - ``` +- **Using `SlackAlerterParameters` and `SlackAlerterPayload`**: + Customize messages with additional information: + ```python + from zenml import pipeline, step, get_step_context + from zenml.client import Client + from zenml.integrations.slack.alerters.slack_alerter import ( + SlackAlerterParameters, SlackAlerterPayload + ) -4. **Predefined Steps**: - - Use built-in steps for simplicity: - ```python - from zenml import pipeline - from zenml.integrations.slack.steps import ( - slack_alerter_post_step, - slack_alerter_ask_step - ) + @step + def post_statement() -> None: + params = SlackAlerterParameters( + payload=SlackAlerterPayload( + pipeline_name=get_step_context().pipeline.name, + step_name=get_step_context().step_run.name, + stack_name=Client().active_stack.name, + ), + ) + Client().active_stack.alerter.post( + message="This is a message with additional information about your pipeline.", + params=params + ) + ``` - @pipeline(enable_cache=False) - def my_pipeline(): - slack_alerter_post_step("Posting a statement.") - slack_alerter_ask_step("Asking a question. 
Should I continue?") +- **Predefined Steps**: + Use built-in steps for simplicity: + ```python + from zenml import pipeline + from zenml.integrations.slack.steps.slack_alerter_post_step import slack_alerter_post_step + from zenml.integrations.slack.steps.slack_alerter_ask_step import slack_alerter_ask_step - if __name__ == "__main__": - my_pipeline() - ``` + @pipeline(enable_cache=False) + def my_pipeline(): + slack_alerter_post_step("Posting a statement.") + slack_alerter_ask_step("Asking a question. Should I continue?") -For further details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). + if __name__ == "__main__": + my_pipeline() + ``` + +For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). ================================================== === File: docs/book/component-guide/alerters/alerters.md === -### Alerters +# Alerters in ZenML -**Alerters** enable sending messages to chat services (e.g., Slack, Discord) from pipelines, facilitating notifications for failures, monitoring, and human-in-the-loop ML. +**Alerters** enable automated message sending to chat services (e.g., Slack, Discord) from ZenML pipelines, facilitating immediate notifications for failures and monitoring. -#### Alerter Flavors -Currently available integrations include: -- **SlackAlerter**: Interacts with a Slack channel. -- **DiscordAlerter**: Interacts with a Discord channel. -- **Custom Implementation**: Extend the alerter abstraction for other services. +## Available Alerter Flavors + +Currently supported alerters: +- **SlackAlerter**: Integrates with Slack channels. +- **DiscordAlerter**: Integrates with Discord channels. +- **Custom Implementation**: Allows building custom alerters for other chat services. + +| Alerter | Flavor | Integration | Notes | +|---------|---------|-------------|-------------------------------------------| +| Slack | `slack` | `slack` | Interacts with a Slack channel | +| Discord | `discord`| `discord` | Interacts with a Discord channel | +| Custom | _custom_| | Extend the alerter abstraction | To view available alerter flavors, use: ```shell zenml alerter flavor list ``` -#### Usage +## Usage + 1. **Register an Alerter**: ```shell zenml alerter register ... ``` + 2. **Add to Stack**: ```shell zenml stack register ... -al ``` -3. **Import and Use**: Import the standard steps from the respective integration for use in pipelines. + +3. **Import and Use**: Import standard steps from the alerter integration and utilize them in your pipelines. + +For more details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/component-guide/alerters/discord.md === -### Discord Alerter Overview +### Discord Alerter Documentation Summary -The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. It includes two key steps: +The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. It includes two main steps: -1. **`discord_alerter_post_step`**: Sends a message to a Discord channel and returns success status. -2. **`discord_alerter_ask_step`**: Sends a message and waits for user feedback, returning `True` only if the user approves the action. +1. 
**`discord_alerter_post_step`**: Posts a message and returns success status. +2. **`discord_alerter_ask_step`**: Posts a message and waits for user feedback, returning `True` only if the user approves. #### Use Cases - Immediate notifications for failures (e.g., model performance issues). -- Human-in-the-loop integration before executing critical steps (e.g., model deployment). +- Human-in-the-loop integration for critical steps (e.g., model deployments). ### Requirements -Install the Discord integration: +To use the `DiscordAlerter`, install the Discord integration: ```shell zenml integration install discord -y ``` ### Setting Up a Discord Bot 1. Create a Discord workspace and channel. -2. Create a Discord App with a bot. Ensure the bot has permissions to send and receive messages. +2. Create a Discord App with a bot and obtain the ``. +3. Ensure the bot has permissions to send and receive messages. ### Registering a Discord Alerter -Register the `discord` alerter in ZenML: +Register the `discord` alerter with the following command: ```shell zenml alerter register discord_alerter \ --flavor=discord \ --discord_token= \ --default_discord_channel_id= ``` -Add the alerter to your stack: +Add it to your stack: ```shell zenml stack register ... -al discord_alerter ``` -#### Parameters -- **DISCORD_CHANNEL_ID**: Copy from the channel settings (enable Developer Mode if not visible). -- **DISCORD_TOKEN**: Obtain from the bot setup instructions. - -**Permissions Required**: -- Read Messages/View Channels -- Send Messages -- Send Messages in Threads - ### Using the Discord Alerter -Import the steps in your pipeline: +Import the steps and use them in your pipeline. A formatter step is typically needed to generate the message. Example usage: ```python from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step from zenml import step, pipeline @@ -4726,12 +4800,12 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr === File: docs/book/component-guide/alerters/custom.md === -### Develop a Custom Alerter +### Custom Alerter Development in ZenML -Before creating a custom alerter, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +This documentation outlines the process for creating a custom alerter in ZenML, which involves implementing specific methods and configuring the alerter. #### Base Abstraction -The base abstraction for alerters includes two abstract methods: +The base class for alerters, `BaseAlerter`, defines two abstract methods: - `post(message: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message to a chat service, returning `True` if successful. - `ask(question: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message and waits for approval, returning `True` only if approved. @@ -4744,15 +4818,10 @@ class BaseAlerter(StackComponent, ABC): return True ``` -#### Building Your Own Custom Alerter -Creating a custom alerter involves three steps: - -1. **Inherit from `BaseAlerter`** and implement `post()` and `ask()` methods: +#### Steps to Create a Custom Alerter +1. **Inherit from BaseAlerter**: Implement the `post()` and `ask()` methods in your custom class. 
```python -from typing import Optional -from zenml.alerter import BaseAlerter, BaseAlerterStepParameters - class MyAlerter(BaseAlerter): def post(self, message: str, config: Optional[BaseAlerterStepParameters]) -> bool: ... @@ -4763,24 +4832,16 @@ class MyAlerter(BaseAlerter): return True ``` -2. **Implement a configuration object** if needed: +2. **Create a Configuration Class** (optional): Define parameters for your alerter. ```python -from zenml.alerter.base_alerter import BaseAlerterConfig - class MyAlerterConfig(BaseAlerterConfig): my_param: str ``` -3. **Create a flavor object** that combines implementation and configuration: +3. **Define a Flavor Class**: Combine the implementation and configuration. ```python -from typing import Type, TYPE_CHECKING -from zenml.alerter import BaseAlerterFlavor - -if TYPE_CHECKING: - from zenml.stack import StackComponent, StackComponentConfig - class MyAlerterFlavor(BaseAlerterFlavor): @property def name(self) -> str: @@ -4798,48 +4859,48 @@ class MyAlerterFlavor(BaseAlerterFlavor): ``` #### Registering the Custom Alerter -Register your new flavor via the CLI: +Register your new flavor using the CLI: ```shell zenml alerter flavor register ``` -Example registration: +For example: ```shell zenml alerter flavor register flavors.my_flavor.MyAlerterFlavor ``` -**Important Note**: Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - -After registration, list available alerter flavors: +#### Important Notes +- Ensure ZenML is initialized at the root of your repository to avoid resolution issues. +- After registration, list available alerter flavors: ```shell zenml alerter flavor list ``` -#### Key Points -- **MyAlerterFlavor** is used during flavor creation. -- **MyAlerterConfig** is used for validating values during stack component registration. -- **MyAlerter** is utilized when the component is in use, allowing separation of configuration from implementation. +#### Workflow Integration +- The `MyAlerterFlavor` is used during flavor creation. +- The `MyAlerterConfig` is utilized for validating user input during stack component registration. +- The `MyAlerter` class is invoked when the component is in use, allowing for separation of configuration and implementation. -This design enables registration of flavors and components even if their dependencies are not installed locally. +This structure supports modular development and enables the registration of flavors and components independently of their dependencies. ================================================== === File: docs/book/component-guide/artifact-stores/azure.md === -### Azure Blob Storage Artifact Store +# Azure Blob Storage with ZenML -The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store ZenML artifacts. It is ideal for scenarios where local storage is insufficient, such as when sharing results, using remote components, or scaling production-grade MLOps. +The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store artifacts. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. -#### When to Use -- **Collaboration**: Share pipeline results with team members. -- **Remote Components**: Integrate with cloud-based orchestrators (e.g., Kubeflow). +## When to Use Azure Artifact Store +- **Team Collaboration**: Share pipeline results with team members or stakeholders. 
+- **Remote Components**: Integrate with remote orchestrators (e.g., Kubeflow, Kubernetes). - **Storage Limitations**: Overcome local storage constraints. -- **Scalability**: Handle production-scale demands. +- **Production Needs**: Handle large-scale pipelines. -#### Deployment Steps +## Deployment Steps 1. **Install Azure Integration**: ```shell zenml integration install azure -y @@ -4847,174 +4908,186 @@ The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage 2. **Register Azure Artifact Store**: - The root path URI must point to an Azure Blob Storage container in the format `az://container-name` or `abfs://container-name`. + - Example registration: ```shell zenml artifact-store register az_store -f azure --path=az://container-name zenml stack register custom_stack -a az_store ... --set ``` -#### Authentication Methods -- **Implicit Authentication**: Quick local setup without explicit credentials. Set environment variables for Azure account key, connection string, or service principal credentials. -- **Azure Service Connector**: Recommended for better security and integration with remote components. Register using: - ```shell - zenml service-connector register --type azure -i - ``` - Or configure with service principal: - ```shell - zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type blob-container --resource-id - ``` +## Authentication Methods +- **Implicit Authentication**: Quick local setup using environment variables. +- **Azure Service Connector**: Recommended for better security and integration with other Azure components. + +### Implicit Authentication Setup +Set environment variables: +- For account key: + ```shell + export AZURE_STORAGE_ACCOUNT_NAME= + export AZURE_STORAGE_ACCOUNT_KEY= + ``` +- For service principal: + ```shell + export AZURE_STORAGE_CLIENT_ID= + export AZURE_STORAGE_CLIENT_SECRET= + export AZURE_STORAGE_TENANT_ID= + ``` + +### Azure Service Connector Setup +Register a service connector: +```shell +zenml service-connector register --type azure -i +``` +Non-interactive example: +```shell +zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type blob-container --resource-id +``` -#### Connecting the Artifact Store -After setting up the Azure Service Connector, connect it to the Azure Artifact Store: +### Connect Artifact Store to Service Connector ```shell zenml artifact-store connect -i ``` -For non-interactive connection: +Non-interactive version: ```shell -zenml artifact-store connect --connector +zenml artifact-store connect --connector ``` -#### Using ZenML Secrets -You can store Azure credentials in a ZenML Secret for better management: +## Using ZenML Secrets for Authentication +Create a ZenML secret to store Azure credentials: ```shell zenml secret create az_secret --account_name='' --account_key='' ``` -Register the Artifact Store with the secret: +Register the artifact store using the secret: ```shell zenml artifact-store register az_store -f azure --path='az://your-container' --authentication_secret=az_secret ``` -#### Usage -Using the Azure Artifact Store is similar to other Artifact Store types in ZenML. For detailed configuration and usage, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). 
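Once the store is registered and set on the active stack, persistence is transparent: step outputs are written to the configured container without code changes. A minimal sketch, assuming the `az_store` stack from above is active:

```python
from zenml import pipeline, step

@step
def produce_message() -> str:
    return "stored in Azure Blob Storage"

@step
def consume_message(message: str) -> None:
    print(message)

@pipeline
def blob_store_demo():
    # Both step outputs are persisted under az://your-container automatically.
    consume_message(produce_message())

if __name__ == "__main__":
    blob_store_demo()
```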
+## Usage +Once set up, the Azure Artifact Store functions like any other ZenML artifact store. For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). ================================================== === File: docs/book/component-guide/artifact-stores/s3.md === -### Summary of AWS S3 Artifact Store Documentation +### Summary of Storing Artifacts in AWS S3 Bucket with ZenML #### Overview -The S3 Artifact Store is a ZenML integration that utilizes AWS S3 or compatible services (like MinIO or Ceph RGW) for artifact storage. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. +The S3 Artifact Store is an integration in ZenML that utilizes AWS S3 or compatible services (like MinIO or Ceph RGW) for artifact storage. It is ideal for projects requiring shared access, remote components, or scalable storage solutions. #### Use Cases -Consider using the S3 Artifact Store when: -- You need to share pipeline results. -- Components are running in the cloud. -- Local storage is insufficient. -- Running pipelines at scale. +Consider the S3 Artifact Store when: +- Sharing pipeline results with team members. +- Integrating with remote orchestration tools (e.g., Kubeflow). +- Needing more storage than local machines can provide. +- Running production-grade MLOps pipelines. #### Deployment Steps -1. **Install the S3 Integration**: +1. **Install S3 Integration**: ```shell zenml integration install s3 -y ``` -2. **Register the S3 Artifact Store**: - - Mandatory parameter: `--path=s3://bucket-name`. - - Example: +2. **Register S3 Artifact Store**: + - The mandatory configuration is the S3 bucket URI: `s3://bucket-name`. + - Example registration: ```shell zenml artifact-store register s3_store -f s3 --path=s3://bucket-name - ``` - -3. **Set Up a Stack**: - ```shell zenml stack register custom_stack -a s3_store ... --set ``` -#### Authentication Methods -- **Implicit Authentication**: Quick local setup using AWS CLI credentials. - - Limitations: Some dashboard functionalities may not work, and remote components may face access issues. +3. **Authentication**: + - **Implicit Authentication**: Quick setup using local AWS CLI credentials. Requires AWS CLI installed. + - **AWS Service Connector** (recommended): Provides better security and access management. + ```shell + zenml service-connector register --type aws -i + zenml service-connector register --type aws --resource-type s3-bucket --resource-name --auto-configure + ``` -- **AWS Service Connector (Recommended)**: Provides better security and access management. - - Register using: - ```shell - zenml service-connector register --type aws -i - ``` - - Connect to a bucket: - ```shell - zenml artifact-store connect -i - ``` +4. **Connect Artifact Store to AWS Service Connector**: + ```shell + zenml artifact-store connect -i + ``` -#### ZenML Secret Management -You can store AWS access keys in ZenML secrets for enhanced security: -```shell -zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' -``` -Register the artifact store with the secret: -```shell -zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret -``` +5. 
**Using ZenML Secrets**: + - Store AWS credentials in a ZenML secret for better management: + ```shell + zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' + zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret + ``` #### Advanced Configuration -You can customize connections using: -- `client_kwargs`: For parameters like `endpoint_url`. -- `config_kwargs`: For advanced botocore client settings. -- `s3_additional_kwargs`: For S3 API-specific parameters. +You can customize the S3 Artifact Store with advanced options: +- `client_kwargs`: Pass parameters like `endpoint_url` and `region_name`. +- `config_kwargs`: Advanced parameters for client configuration. +- `s3_additional_kwargs`: Parameters for S3 API calls (e.g., `ServerSideEncryption`). -Example: +Example of advanced registration: ```shell zenml artifact-store register minio_store -f s3 --path='s3://minio_bucket' --authentication_secret=s3_secret --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}' ``` #### Usage -Using the S3 Artifact Store is similar to other artifact stores in ZenML. For detailed usage, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store). +Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. For further details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/local.md === -### Local Artifact Store +# Local Artifact Store in ZenML -The Local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) that utilizes a local filesystem folder for artifact storage. +The Local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) that utilizes a folder on your local filesystem for storing artifacts. -#### Use Cases -- Ideal for beginners or evaluations of ZenML without needing additional resources or managed object-store services (e.g., Amazon S3, Google Cloud Storage). -- Not suitable for production due to lack of sharing capabilities, access from other machines, and essential features like high-availability and scalability. +### Use Cases +- Ideal for beginners or evaluations, as it requires no additional resources or managed services (e.g., Amazon S3, Google Cloud Storage). +- Suitable for experimental phases where sharing artifacts is unnecessary. -#### Limitations -- Only compatible with local Orchestrators (e.g., local, local Kubeflow, local Kubernetes) and local Model Deployers (e.g., MLflow). -- Does not support Step Operators that run in remote environments. +### Limitations +- Not intended for production use; artifacts cannot be shared across teams or accessed from other machines. +- Lacks features like high availability, scalability, and backup. +- Compatible only with local components: + - **Orchestrators**: Local Orchestrator, Local Kubeflow, Local Kubernetes. + - **Model Deployers**: Local Model Deployers (e.g., MLflow). + - **Step Operators**: Not compatible due to their remote execution nature. -Transitioning to a team or production setting requires replacing the Local Artifact Store with a more suitable option without code changes. 
+Transitioning to a team or production setting requires replacing the Local Artifact Store with a more suitable flavor without code changes. -#### Deployment -The default stack in ZenML includes a Local Artifact Store: +### Deployment +The default ZenML stack includes a Local Artifact Store: ```shell $ zenml stack list $ zenml artifact-store describe ``` -Artifacts are stored in a specified local path. You can create additional instances: +Artifacts are stored in a local folder, as indicated by the `PATH` in the output. You can create additional Local Artifact Stores: ```shell -# Register a local artifact store +# Register the local artifact store zenml artifact-store register custom_local --flavor local # Register and set a stack with the new artifact store zenml stack register custom_stack -o default -a custom_local --set ``` -**Note:** The Local Artifact Store accepts a `path` parameter during registration, but using the default path is recommended to avoid issues. +**Note**: The Local Artifact Store accepts a `path` parameter during registration, but using the default path is recommended to avoid issues with local stack components. -For detailed implementation and configuration, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). +For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). -#### Usage -Using the Local Artifact Store is similar to other Artifact Store flavors, with artifacts stored locally. +### Usage +Using the Local Artifact Store is similar to other Artifact Store flavors, with the main difference being local storage. ================================================== === File: docs/book/component-guide/artifact-stores/gcp.md === -### Google Cloud Storage (GCS) Artifact Store +### Google Cloud Storage (GCS) Artifact Store in ZenML -The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage (GCS) to store artifacts. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. +The GCS Artifact Store is a component of the GCP ZenML integration that uses Google Cloud Storage to store ZenML artifacts. It is suitable for projects needing shared storage, remote components, or production-grade MLOps. -#### Use Cases -Consider using GCS when: -- You need to share pipeline results with team members or stakeholders. -- Your stack includes remote components (e.g., Kubeflow). -- Local storage is insufficient. -- You require scalable storage for production pipelines. +#### When to Use GCS Artifact Store +- **Team Collaboration**: Share pipeline results with team members or stakeholders. +- **Remote Components**: Integrate with remote orchestrators like Kubeflow or Kubernetes. +- **Storage Limitations**: Overcome local storage constraints. +- **Scalability**: Handle production-scale pipeline demands. #### Deployment Steps 1. **Install GCP Integration**: @@ -5023,115 +5096,115 @@ Consider using GCS when: ``` 2. **Register GCS Artifact Store**: - The mandatory parameter is the root path URI in the format `gs://bucket-name`. + - **URI Format**: `gs://bucket-name` + - **Command**: ```shell zenml artifact-store register gs_store -f gcp --path=gs://bucket-name zenml stack register custom_stack -a gs_store ... --set ``` -#### Authentication Methods -Authentication is necessary for using GCS. 
Options include:
+#### Authentication
+Authentication is necessary for GCS Artifact Store integration:
 
-- **Implicit Authentication**: Quick local setup using Google Cloud CLI. Requires local credentials but may limit functionality with remote components.
-
-- **GCP Service Connector (Recommended)**: Provides better security and configuration. Register a service connector:
-  ```sh
+- **Implicit Authentication**: Quick setup using local GCP CLI credentials. Requires Google Cloud CLI installed.
+- **GCP Service Connector (Recommended)**: Provides better security and configuration management. Register using:
+  ```shell
   zenml service-connector register <CONNECTOR_NAME> --type gcp -i
   ```
   Or for a specific bucket:
-  ```sh
+  ```shell
   zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcs-bucket --resource-name <GCS_BUCKET_NAME> --auto-configure
   ```

#### Connecting GCS Artifact Store
-After setting up the service connector, connect the GCS Artifact Store:
-```sh
-zenml artifact-store register -f gcp --path='gs://your-bucket'
+After setting up authentication, connect the GCS Artifact Store:
+```shell
zenml artifact-store connect <ARTIFACT_STORE_NAME> -i
```
-For non-interactive connection:
-```sh
+Or non-interactively:
+```shell
zenml artifact-store connect <ARTIFACT_STORE_NAME> --connector <CONNECTOR_NAME>
```

-#### Using GCP Credentials
-You can also use a GCP Service Account Key stored in a ZenML Secret:
-1. Create a GCP service account with necessary permissions.
-2. Store the key:
-   ```shell
-   zenml secret create gcp_secret --token=@path/to/service_account_key.json
-   ```
-3. Register the GCS Artifact Store with the secret:
-   ```shell
-   zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret
-   ```
+#### Using GCS Artifact Store
+Once registered and connected, use the GCS Artifact Store in your ZenML Stack:
+```shell
+zenml stack register <STACK_NAME> -a <ARTIFACT_STORE_NAME> ... --set
+```

-#### Usage
-Once set up, using the GCS Artifact Store is similar to any other Artifact Store in ZenML.
+#### GCP Credentials
+For enhanced security, create a GCP Service Account Key and store it in a ZenML Secret:
+```shell
+zenml secret create gcp_secret --token=@path/to/service_account_key.json
+zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret
+```
+
+#### Additional Resources
+For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store).

-For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store).
+Using the GCS Artifact Store is similar to other Artifact Store flavors, enabling seamless integration into ZenML pipelines.

==================================================

=== File: docs/book/component-guide/artifact-stores/artifact-stores.md ===

-# Artifact Stores
+# Artifact Stores in ZenML

## Overview
-The Artifact Store is a crucial component of the MLOps stack, serving as a persistence layer for artifacts (datasets, models) generated by machine learning pipelines. ZenML automatically serializes and saves these artifacts, enabling features like caching, lineage tracking, and reproducibility.
+The Artifact Store is a critical component of the ZenML MLOps stack, serving as a data persistence layer for artifacts generated by machine learning pipelines, such as datasets and models.
ZenML automatically serializes and saves these artifacts, enabling features like caching, provenance tracking, and reproducibility. ## Key Points -- **Materializers**: Determine how artifacts are serialized and stored. Most default Materializers use the active Stack's Artifact Store. Custom Materializers can be created for specific storage needs. -- **Storage Options**: The Artifact Store can be extended to support different storage backends beyond the default options. - -## When to Use -The Artifact Store is mandatory in ZenML stacks for storing all artifacts produced by pipeline runs. +- **Artifact Storage**: Artifacts are stored based on the implementation of the associated **Materializer**, which handles serialization and deserialization. +- **Custom Storage**: Users can create custom Materializers or extend the Artifact Store abstraction to support different storage backends. +- **Stack Component**: The Artifact Store must be registered as part of your ZenML stack. ## Artifact Store Flavors -ZenML includes various Artifact Store flavors: +ZenML provides several built-in Artifact Store flavors: | Artifact Store | Flavor | Integration | URI Schema(s) | Notes | |----------------|--------|-------------|----------------|-------| -| Local | local | built-in | None | Default store for local filesystem. | -| Amazon S3 | s3 | s3 | s3:// | Uses AWS S3 for storage. | -| Google Cloud | gcp | gcp | gs:// | Uses Google Cloud Storage. | -| Azure | azure | azure | abfs://, az:// | Uses Azure Blob Storage. | -| Custom | custom | | custom | Extend the Artifact Store abstraction. | +| Local | `local`| _built-in_ | None | Default store for local filesystem. | +| Amazon S3 | `s3` | `s3` | `s3://` | Uses AWS S3 for storage. | +| Google Cloud | `gcp` | `gcp` | `gs://` | Uses Google Cloud Storage. | +| Azure | `azure`| `azure` | `abfs://`, `az://` | Uses Azure Blob Storage. | +| Custom | _custom_| | _custom_ | User-defined implementation. | To list available flavors: ```shell zenml artifact-store flavor list ``` -### Configuration -Each Artifact Store requires a `path` attribute, a URI pointing to the storage root. For example, to register an S3 store: +## Configuration +Each Artifact Store requires a `path` attribute, which is a URI pointing to the root storage location. For example, to register an S3 store: ```shell zenml artifact-store register s3_store -f s3 --path s3://my_bucket ``` ## Usage -Typically, users interact with higher-level APIs to store and retrieve artifacts: -- Return objects from pipeline steps to save them automatically. -- Retrieve artifacts post-pipeline run. +The Artifact Store provides low-level object storage services but can often be used indirectly through higher-level APIs. Key functionalities include: +- Automatically saving pipeline artifacts by returning objects from pipeline steps. +- Retrieving artifacts after pipeline runs. ### Low-Level API -The Artifact Store API resembles a file system, allowing standard file operations. Access it via: -- `zenml.io.fileio`: Low-level utilities for object manipulation (e.g., `open`, `copy`, `remove`). -- `zenml.utils.io_utils`: Higher-level utilities for transferring objects between the Artifact Store and local storage. +The Artifact Store API mimics standard file system operations. Access can be done through: +- `zenml.io.fileio`: For operations like `open`, `copy`, `rename`, etc. +- `zenml.utils.io_utils`: For higher-level utilities to transfer objects between the Artifact Store and local storage. 
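+
+Complementing the read/write examples below, the same API can check for and enumerate stored objects — a minimal sketch (the `artifacts/examples` prefix matches the write example that follows):
+```python
+import os
+
+from zenml.client import Client
+from zenml.io import fileio
+
+# Resolve the active stack's artifact store root and list previously written artifacts.
+root_path = Client().active_stack.artifact_store.path
+examples_dir = os.path.join(root_path, "artifacts", "examples")
+if fileio.exists(examples_dir):
+    print(fileio.listdir(examples_dir))
+```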
-#### Example: Writing to Artifact Store +### Example Code +**Writing to the Artifact Store:** ```python import os from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path +artifact_contents = "example artifact" artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") fileio.makedirs(os.path.dirname(artifact_uri)) with fileio.open(artifact_uri, "w") as f: - f.write("example artifact") + f.write(artifact_contents) ``` -#### Example: Reading from Artifact Store +**Reading from the Artifact Store:** ```python from zenml.client import Client from zenml.utils import io_utils @@ -5141,8 +5214,7 @@ artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") artifact_contents = io_utils.read_file_contents_as_string(artifact_uri) ``` -#### Temporary File Operations -For serialization with external libraries: +**Using Temporary Files:** ```python import os import tempfile @@ -5153,12 +5225,12 @@ root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json") with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=True) as f: - # Save to temporary file - # Copy it into artifact store + # Save to temporary file and copy to artifact store fileio.copy(f.name, artifact_uri) ``` -This summary captures the essential details about Artifact Stores in ZenML, including their purpose, configuration, usage, and examples for clarity. +## Conclusion +The Artifact Store is essential for managing artifacts in ZenML, providing flexibility for various storage solutions and seamless integration with pipeline operations. ================================================== @@ -5166,98 +5238,62 @@ This summary captures the essential details about Artifact Stores in ZenML, incl ### Summary: Developing a Custom Artifact Store in ZenML -#### Overview -ZenML provides built-in Artifact Store implementations for local and cloud storage. If you need a different object storage service, you can create a custom Artifact Store by extending ZenML. - -#### Base Abstraction -The `BaseArtifactStore` class is central to ZenML's stack architecture. Key components include: +ZenML provides built-in Artifact Store implementations for local and cloud storage. To create a custom Artifact Store, follow these steps: -1. **Configuration Parameter**: The `path` parameter specifies the root path of the artifact store. -2. **Supported Schemes**: The `SUPPORTED_SCHEMES` class variable must be defined in subclasses to indicate supported file path schemes (e.g., `{"abfs://", "az://"}` for Azure). -3. **Abstract Methods**: Subclasses must implement the following methods: - - `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`. +1. **Familiarize with ZenML Components**: Review the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) to understand component flavor concepts. -#### Example Implementation -```python -from zenml.enums import StackComponentType -from zenml.stack import StackComponent, StackComponentConfig -from typing import Any, List, Tuple, Type, Union, Iterable, Callable, Optional +2. **Base Abstraction**: The `BaseArtifactStore` class is central to the ZenML stack. Key points include: + - **Configuration**: Requires a `path` parameter for the artifact store's root directory. 
+ - **Supported Schemes**: Each subclass must define `SUPPORTED_SCHEMES` for file path schemes (e.g., `{"abfs://", "az://"}` for Azure). + - **Abstract Methods**: Implement the following methods in subclasses: `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`. -PathType = Union[bytes, str] + Example implementation: + ```python + from zenml.enums import StackComponentType + from zenml.stack import StackComponent, StackComponentConfig + from typing import Any, List, Set, Type, Union -class BaseArtifactStoreConfig(StackComponentConfig): - path: str - SUPPORTED_SCHEMES: ClassVar[Set[str]] + PathType = Union[bytes, str] -class BaseArtifactStore(StackComponent): - @abstractmethod - def open(self, name: PathType, mode: str = "r") -> Any: ... - @abstractmethod - def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ... - @abstractmethod - def exists(self, path: PathType) -> bool: ... - @abstractmethod - def glob(self, pattern: PathType) -> List[PathType]: ... - @abstractmethod - def isdir(self, path: PathType) -> bool: ... - @abstractmethod - def listdir(self, path: PathType) -> List[PathType]: ... - @abstractmethod - def makedirs(self, path: PathType) -> None: ... - @abstractmethod - def mkdir(self, path: PathType) -> None: ... - @abstractmethod - def remove(self, path: PathType) -> None: ... - @abstractmethod - def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ... - @abstractmethod - def rmtree(self, path: PathType) -> None: ... - @abstractmethod - def stat(self, path: PathType) -> Any: ... - @abstractmethod - def walk(self, top: PathType, topdown: bool = True, onerror: Optional[Callable[..., None]] = None) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]: ... + class BaseArtifactStoreConfig(StackComponentConfig): + path: str + SUPPORTED_SCHEMES: Set[str] -class BaseArtifactStoreFlavor(Flavor): - @property - @abstractmethod - def name(self) -> Type["BaseArtifactStore"]: ... - @property - def type(self) -> StackComponentType: - return StackComponentType.ARTIFACT_STORE - @property - def config_class(self) -> Type[StackComponentConfig]: - return BaseArtifactStoreConfig - @property - @abstractmethod - def implementation_class(self) -> Type["BaseArtifactStore"]: ... -``` + class BaseArtifactStore(StackComponent): + @abstractmethod + def open(self, name: PathType, mode: str = "r") -> Any: pass + @abstractmethod + def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: pass + # Other abstract methods... + ``` -#### Registering Your Custom Artifact Store -To register your custom Artifact Store, follow these steps: -1. Create a class inheriting from `BaseArtifactStore` and implement the abstract methods. -2. Create a class inheriting from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`. -3. Inherit from `BaseArtifactStoreFlavor` to combine both classes. +3. **Custom Implementation**: + - Inherit from `BaseArtifactStore` and implement the abstract methods. + - Inherit from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`. + - Combine both by inheriting from `BaseArtifactStoreFlavor`. -Register via CLI: -```shell -zenml artifact-store flavor register -``` -Example: -```shell -zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor -``` +4. 
**Registering the Custom Store**: Use the CLI to register your custom flavor: + ```shell + zenml artifact-store flavor register + ``` + Example: + ```shell + zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor + ``` -#### Important Considerations -- Ensure ZenML is initialized at the root of your repository for proper resolution. -- After registration, list available flavors with: -```shell -zenml artifact-store flavor list -``` +5. **Using the Custom Store**: Once registered, it will be available in the list of flavors: + ```shell + zenml artifact-store flavor list + ``` -#### Enabling Visualizations -For visualizations to work with your custom Artifact Store, ensure it can authenticate to the backend without relying on local environment settings. Install necessary dependencies in the deployed environment. +6. **Workflow Integration**: + - The `CustomArtifactStoreFlavor` is used during flavor creation. + - The `CustomArtifactStoreConfig` validates user inputs during stack registration. + - The `CustomArtifactStore` is utilized when the component is in use. -For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). +7. **Artifact Visualizations**: Ensure your custom store can authenticate to the backend without local dependencies. Install necessary package dependencies in the deployment environment for visualization support. + +For complete implementation details and additional documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). ================================================== @@ -5265,75 +5301,71 @@ For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/late ### Feature Stores -Feature stores enable data teams to manage data through both offline and online low-latency stores, ensuring synchronization between them. They provide a centralized registry for features and their schemas, catering to different access needs for batch and real-time data. Feast addresses the issue of train-serve skew, where training and serving data diverge. - -### When to Use It +Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between the two. They provide a centralized registry for features and feature schemas, catering to different access needs for batch and real-time data, thereby addressing the issue of train-serve skew. 
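+
+To make the offline/online split concrete, here is a minimal sketch using the plain Feast SDK (entity and feature names are illustrative and assume an initialized Feast repository; ZenML's own Feast integration is covered below):
+```python
+from datetime import datetime
+
+import pandas as pd
+from feast import FeatureStore
+
+store = FeatureStore(repo_path=".")
+features = ["driver_hourly_stats:conv_rate"]
+
+# Offline store: point-in-time correct batch retrieval for training.
+entity_df = pd.DataFrame({
+    "driver_id": [1001],
+    "event_timestamp": [datetime(2021, 4, 12, 10, 59, 42)],
+})
+training_df = store.get_historical_features(entity_df=entity_df, features=features).to_df()
+
+# Online store: low-latency lookup of the same feature definition at serving time,
+# which is what keeps training and serving data consistent.
+online_features = store.get_online_features(
+    features=features, entity_rows=[{"driver_id": 1001}]
+).to_dict()
+```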
-Feature stores are optional components in the ZenML Stack, primarily used for: - -- Productionalizing new features -- Reusing existing features across pipelines and models -- Ensuring consistency between training and serving data -- Providing a central registry of features and schemas +#### When to Use -### Available Feature Stores +Feature stores are optional in the ZenML Stack and are used to: +- Productionalize new features +- Reuse existing features across pipelines and models +- Ensure consistency between training and serving data +- Maintain a central registry of features and schemas -ZenML integrates with various feature stores, including: +#### Available Feature Stores -| Feature Store | Flavor | Integration | Notes | -|----------------------------|---------|-------------|--------------------------------------------| +ZenML integrates with various feature stores, notably: +| Feature Store | Flavor | Integration | Notes | +|-----------------------------|---------|-------------|--------------------------------------------| | [FeastFeatureStore](feast.md) | `feast` | `feast` | Connects ZenML with existing Feast | | [Custom Implementation](custom.md) | _custom_ | | Allows for custom feature store implementations | To view available feature store flavors, use: - ```shell zenml feature-store flavor list ``` -### How to Use It +#### How to Use -The feature store implementation is based on the Feast integration. For usage details, refer to the [Feast documentation](feast.md#how-do-you-use-it). +The feature store implementation is based on the Feast integration. For detailed usage, refer to the [Feast documentation](feast.md#how-do-you-use-it). ================================================== === File: docs/book/component-guide/feature-stores/feast.md === -### Summary: Managing Data in Feast Feature Stores +### Summary of Managing Data in Feast Feature Stores **Feast Overview** -Feast (Feature Store) is an operational data system designed for managing and serving machine learning features to production models. It supports both low-latency online stores for real-time predictions and offline stores for batch scoring or model training. +Feast (Feature Store) is designed for managing and serving machine learning features to production models, supporting both low-latency online and offline batch data access. **Use Cases** -Feast enables: -- Access to offline/batch data for model training. -- Access to online data during inference. +- **Training:** Access offline/batch data for model training. +- **Inference:** Access online data for real-time predictions. **Deployment** -To deploy Feast with ZenML: -1. Ensure you have a Feast feature store. If not, refer to the [Feast Documentation](https://docs.feast.dev/how-to-guides/feast-snowflake-gcp-aws/deploy-a-feature-store). -2. Install the Feast integration in ZenML: - ```shell - zenml integration install feast - ``` -3. Register the feature store as a ZenML stack component: - ```shell - zenml feature-store register feast_store --flavor=feast --feast_repo="" - zenml stack register ... -f feast_store - ``` +To integrate Feast with ZenML, ensure you have a Feast feature store set up. Install the Feast integration with: + +```shell +zenml integration install feast +``` + +Register the feature store as a ZenML stack component: + +```shell +zenml feature-store register feast_store --flavor=feast --feast_repo="" +zenml stack register ... 
-f feast_store +``` **Usage** -To retrieve features from a registered feature store, create a step that interfaces with it: +Currently, online data retrieval is supported in local settings but not in production deployments. To get historical features from a registered feature store, create a step as follows: + ```python from datetime import datetime -from typing import Any, Dict, List, Union import pandas as pd from zenml import step from zenml.client import Client @step -def get_historical_features(entity_dict: Union[Dict[str, Any], str], features: List[str], full_feature_names: bool = False) -> pd.DataFrame: - """Fetch historical features from Feast.""" +def get_historical_features(entity_dict, features, full_feature_names=False) -> pd.DataFrame: feature_store = Client().active_stack.feature_store if not feature_store: raise DoesNotExistException("Feast feature store component is not available.") @@ -5345,7 +5377,6 @@ def get_historical_features(entity_dict: Union[Dict[str, Any], str], features: L entity_dict = { "driver_id": [1001, 1002, 1003], - "label_driver_reported_satisfaction": [1, 5, 3], "event_timestamp": [ datetime(2021, 4, 12, 10, 59, 42).isoformat(), datetime(2021, 4, 12, 8, 12, 10).isoformat(), @@ -5356,7 +5387,6 @@ entity_dict = { features = [ "driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", - "driver_hourly_stats:avg_daily_trips", ] @pipeline @@ -5366,85 +5396,81 @@ def my_pipeline(): ``` **Important Notes** -- Online data retrieval is currently unsupported in deployed models. -- ZenML's use of Pydantic limits input types to basic data types; complex types like `DataFrame` or `datetime` require conversion. - -For more details on configurable attributes of the Feast feature store, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). +- ZenML uses Pydantic for input serialization, limiting it to basic data types. DataFrames and datetime values require conversion. +- For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). ================================================== === File: docs/book/component-guide/feature-stores/custom.md === -### Develop a Custom Feature Store - -**Overview**: Feature stores enable data teams to provide data through both an offline store and an online low-latency store, ensuring synchronization between them. They also serve as a centralized registry for features and feature schemas within a team or organization. - -**Prerequisites**: Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts. +### Summary: Developing a Custom Feature Store in ZenML -**Current Status**: The base abstraction for feature stores is under development and not yet available for extension. For immediate use, refer to the list of existing feature stores. +**Overview**: Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between them. They also provide a centralized registry for features and feature schemas for team or organizational use. 
-**Important Note**: -- **Base Abstraction in Progress**: Extension of feature stores is currently not possible. +**Important Notes**: +- Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge. +- The base abstraction for feature stores is currently under development, limiting the ability to extend them. Check the list of available feature stores for immediate use. -![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) +**Warning**: This documentation is based on an older version of ZenML. For the latest information, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/component-guide/annotators/annotators.md === -### Annotators in ZenML +# Annotators in ZenML -**Overview**: -Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They enable users to launch annotation processes, configure datasets, and track labeled tasks. Data annotation is essential in MLOps, and ZenML aims to support iterative annotation workflows that integrate labeling into the ML lifecycle. +## Overview +Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They enable users to launch annotation tasks, configure datasets, and track labeled tasks via CLI commands. Data annotation is essential in MLOps, and ZenML aims to support iterative workflows that integrate annotators into the ML process. -**Key Use Cases**: -1. **Initial Labeling**: Start labeling data to bootstrap models, iterating between labeling and model training to refine definitions and standards. -2. **Ongoing Data**: Regularly check and label new incoming data, while considering automation for data drift detection. -3. **Inference Samples**: Store and label data from model predictions to compare with actual labels, aiding in model retraining. -4. **Ad Hoc Annotation**: Identify and annotate challenging examples or correct bad labels, especially in cases of class imbalance. +## Annotation Lifecycle +Data annotation can occur at various stages in the ML lifecycle: +- **At the Start**: Begin labeling data to bootstrap models, iterating by using model predictions to suggest labels. +- **As New Data Arrives**: Regularly check and label new data to maintain model accuracy and address data drift. +- **Inference Samples**: Store and label predictions from the model for comparison and potential retraining. +- **Ad Hoc Interventions**: Identify and correct bad labels or address class imbalances through targeted annotation. -**Core Features**: +## Usage +The annotator is an optional component in the ZenML stack, designed to integrate with training and deployment phases. Key features include: - Seamless integration of labels in training steps. - Versioning of annotation data. - Conversion of annotation data to/from custom formats. -- Generation of UI config files for tools like Label Studio. +- Generation of UI config files for annotation interfaces. 
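+
+As a sketch of how these features surface in code, a step can pull labeled tasks from the active stack's annotator — assuming the `get_labeled_data` method of the annotator interface described in the integrations below:
+```python
+from typing import Any
+
+from zenml import step
+from zenml.client import Client
+
+
+@step
+def fetch_labeled_data(dataset_name: str) -> Any:
+    # Grab the annotator registered in the active stack, if one is configured.
+    annotator = Client().active_stack.annotator
+    if annotator is None:
+        raise RuntimeError("No annotator is configured in the active stack.")
+    # Labeled tasks come back in the tool's native format (e.g., Label Studio JSON).
+    return annotator.get_labeled_data(dataset_name=dataset_name)
+```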
-**Available Annotators**: +## Available Annotators ZenML supports various annotators through integrations: -| Annotator | Flavor | Integration | Notes | -|-------------------------|---------------|------------------|-----------------------------------------| -| ArgillaAnnotator | `argilla` | `argilla` | Connects ZenML with Argilla | -| LabelStudioAnnotator | `label_studio`| `label_studio` | Connects ZenML with Label Studio | -| PigeonAnnotator | `pigeon` | `pigeon` | Limited to Jupyter notebooks for image/text classification | -| ProdigyAnnotator | `prodigy` | `prodigy` | Connects ZenML with Prodigy | -| Custom Implementation | _custom_ | | Extend the annotator abstraction | - -**Command to List Annotator Flavors**: +| Annotator | Flavor | Integration | Notes | +|---------------------------|----------------|-------------------|--------------------------------------------| +| [ArgillaAnnotator](argilla.md) | `argilla` | `argilla` | Connect ZenML with Argilla | +| [LabelStudioAnnotator](label-studio.md) | `label_studio` | `label_studio` | Connect ZenML with Label Studio | +| [PigeonAnnotator](pigeon.md) | `pigeon` | `pigeon` | Notebook only; for image/text classification | +| [ProdigyAnnotator](prodigy.md) | `prodigy` | `prodigy` | Connect ZenML with [Prodigy](https://prodi.gy/) | +| [Custom Implementation](custom.md) | _custom_ | | Extend the annotator abstraction | + +To view available annotator flavors, use: ```shell zenml annotator flavor list ``` -**Usage**: -The annotator implementation is primarily based on the Label Studio integration. For detailed usage, refer to the [Label Studio documentation](label-studio.md#how-do-you-use-it). Note that Pigeon has limited functionality. +## Implementation +The annotator implementation is primarily based on the Label Studio integration. For usage details, refer to the [Label Studio page](label-studio.md#how-do-you-use-it). Note that Pigeon is limited to Jupyter notebooks. -**Terminology**: -- ZenML uses "Dataset" to refer to a grouping of annotations/tasks, aligning with most tools, while Label Studio uses "Project." -- The unit of "an annotation + source data" is termed "tasks" in ZenML, consistent with Label Studio. +## Naming Conventions +ZenML standardizes terminology for its components: +- **Project vs. Dataset**: Label Studio uses 'Project'; ZenML uses 'Dataset'. +- **Tasks**: The combination of an annotation and source data is referred to as 'tasks' in ZenML. -This concise overview captures the essential information about annotators in ZenML, ensuring clarity on their purpose, use cases, features, and available integrations. +This documentation provides a concise overview of the annotators in ZenML, their lifecycle, usage, available integrations, and naming conventions. ================================================== === File: docs/book/component-guide/annotators/prodigy.md === -### Prodigy Annotation Tool Overview +### Prodigy Integration with ZenML -**Prodigy** is a paid annotation tool designed for creating training and evaluation data for machine learning models. It aids in data inspection, cleaning, error analysis, and developing rule-based systems. +**Prodigy** is a paid annotation tool for creating training and evaluation data for machine learning models. It allows for data inspection, cleaning, error analysis, and developing rule-based systems. The Prodigy Python library offers pre-built workflows and customizable scripts for data loading, annotation interface questions, and front-end behavior. 
-#### Key Features -- **Integration with ZenML**: Prodigy can be integrated into the ZenML stack for ML workflows. -- **Custom Workflows**: Users can create custom scripts to load/save data, modify annotation questions, and customize the front-end using HTML/JavaScript. -- **Fast Annotation**: The web application is optimized for efficient data annotation. +#### When to Use Prodigy +Consider using Prodigy when you need to label data as part of your ML workflow by adding it as an optional annotator stack component in ZenML. #### Deployment Steps 1. **Install Prodigy**: Requires a license. Follow the [Prodigy installation guide](https://prodi.gy/docs/install). Ensure `urllib3<2` is installed. @@ -5453,30 +5479,31 @@ This concise overview captures the essential information about annotators in Zen zenml integration export-requirements --output-file prodigy-requirements.txt prodigy zenml annotator register prodigy --flavor prodigy ``` - Optionally, specify a custom config path: - ```shell - # --custom_config_path="" - ``` + Optionally, use `--custom_config_path=""` to override default settings. -3. **Update ZenML Stack**: +3. **Set Up the Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -an prodigy zenml stack set annotation ``` - -#### Usage -- **Accessing Datasets**: + Verify with: ```shell zenml annotator dataset list ``` -- **Annotating a Dataset**: - ```shell - zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" - ``` -#### Importing Annotations -To import annotations into a ZenML step: +#### Usage +Prodigy does not require pre-starting the annotator. Use it as per the [Prodigy documentation](https://prodi.gy). Access and annotate datasets with: +```shell +zenml annotator dataset annotate +``` +Example command: +```shell +zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" +``` + +#### Importing Annotations in ZenML +To import annotations within a ZenML step: ```python from typing import List, Dict, Any from zenml import step @@ -5489,40 +5516,42 @@ def import_annotations() -> List[Dict[str, Any]]: return annotations ``` -#### Prodigy Annotator Component -The Prodigy annotator component extends the `BaseAnnotator` class, requiring methods for dataset registration and annotation export. It supports core Prodigy functionalities, including dataset registration and annotation export for use in ZenML steps. +For cloud environments, manually export annotations and store them for later use in ZenML. -For further details, refer to the [Prodigy documentation](https://prodi.gy/docs). +#### Prodigy Annotator Stack Component +The Prodigy annotator component extends the `BaseAnnotator` class, implementing core methods for dataset registration and annotation export. It includes additional methods specific to Prodigy for enhanced functionality. 
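+
+For the cloud-environment workflow mentioned above, a minimal sketch of loading annotations that were exported beforehand with Prodigy's `db-out` command (the file path is illustrative):
+```python
+import json
+from typing import Any, Dict, List
+
+from zenml import step
+
+
+@step
+def load_exported_annotations(path: str = "./annotations.jsonl") -> List[Dict[str, Any]]:
+    # Each line of a `prodigy db-out <dataset>` export is one JSON-encoded task.
+    with open(path) as f:
+        return [json.loads(line) for line in f]
+```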
================================================== === File: docs/book/component-guide/annotators/label-studio.md === -### Label Studio Overview -Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types: +### Summary of Label Studio Integration with ZenML + +**Label Studio Overview** +Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types including: - **Computer Vision**: image classification, object detection, semantic segmentation - **Audio & Speech**: classification, speaker diarization, emotion recognition, transcription - **Text/NLP**: classification, NER, question answering, sentiment analysis - **Time Series**: classification, segmentation, event recognition -- **Multi-Modal/Domain**: dialogue processing, OCR, time series with reference +- **Multi-Modal**: dialogue processing, OCR, time series with reference -### Use Case -Label Studio can be integrated into ML workflows for data labeling. It is compatible with cloud artifact stores like AWS S3, GCP/GCS, and Azure Blob Storage. Local stacks are not supported for the annotation component. +**Usage Context** +Integrate Label Studio into your ZenML stack for data labeling during ML workflows. It supports AWS S3, GCP/GCS, and Azure Blob Storage, but not purely local stacks. -### Deployment Steps -1. **Install Label Studio Integration**: +**Deployment Steps** +1. **Install the Integration**: ```shell zenml integration install label_studio ``` - -2. **Obtain API Key**: - Clone the repository and start Label Studio: + +2. **Set Up Label Studio**: + - Clone and run Label Studio locally: ```shell git clone https://github.com/HumanSignal/label-studio.git cd label-studio docker-compose up -d ``` - Access the web interface at [http://localhost:8080/](http://localhost:8080/) to get your API key. + - Access the web interface at [http://localhost:8080/](http://localhost:8080/) to obtain your API key. 3. **Register API Key**: ```shell @@ -5534,7 +5563,7 @@ Label Studio can be integrated into ML workflows for data labeling. It is compat zenml annotator register label_studio --flavor label_studio --authentication_secret="label_studio_secrets" --port=8080 ``` -5. **Update Stack**: +5. **Configure Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -a @@ -5542,50 +5571,51 @@ Label Studio can be integrated into ML workflows for data labeling. It is compat zenml stack set annotation ``` -### Usage -After setup, use the CLI commands for dataset management: +**Usage** +Use CLI commands to interact with datasets: - List datasets: - ```shell - zenml annotator dataset list - ``` -- Annotate a dataset: - ```shell - zenml annotator dataset annotate - ``` + ```shell + zenml annotator dataset list + ``` +- Annotate a dataset: + ```shell + zenml annotator dataset annotate + ``` -### Key Components -- **Label Studio Annotator**: Inherits from `BaseAnnotator`, with methods for dataset registration, annotation export, and starting the annotator daemon. - +**Key Components** +- **Label Studio Annotator**: Inherits from `BaseAnnotator`, includes methods for dataset registration, annotation export, and daemon process management. - **Standard Steps**: - - `LabelStudioDatasetRegistrationConfig`: Config for dataset registration. - - `LabelStudioDatasetSyncConfig`: Config for syncing new data. + - `LabelStudioDatasetRegistrationConfig`: For dataset registration. 
+ - `LabelStudioDatasetSyncConfig`: For syncing new data. - `get_or_create_dataset`: Registers or retrieves a dataset. - - `get_labeled_data`: Retrieves labeled data in Label Studio format. - - `sync_new_data_to_label_studio`: Ensures data and annotations are synced. + - `get_labeled_data`: Retrieves labeled data. + - `sync_new_data_to_label_studio`: Ensures data synchronization. -### Helper Functions -Label Studio requires 'label config' for dataset registration, which can be generated using ZenML's helper functions for object detection, image classification, and OCR. +**Helper Functions** +ZenML provides functions to generate 'label config' strings for object detection, image classification, and OCR. Refer to the `label_config_generators` module for implementation details. -For more details, refer to the [Label Studio documentation](https://labelstud.io/guide/tasks.html) and the [ZenML GitHub repository](https://github.com/zenml-io/zenml). +For more information, refer to the [Hugging Face deployment documentation](https://huggingface.co/docs/hub/spaces-sdks-docker-label-studio) and the [Label Studio guide](https://labelstud.io/guide/tasks.html). ================================================== === File: docs/book/component-guide/annotators/argilla.md === -### Argilla Overview -Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets for machine learning projects. It enhances data curation through human and machine feedback, supporting the entire MLOps cycle from data labeling to model monitoring. Its unique focus on human-in-the-loop approaches sets it apart from competitors. +### Summary: Annotating Data Using Argilla -### Use Cases -Argilla is ideal for labeling textual data in ML workflows. It can be integrated into a ZenML stack, supporting annotation at various stages. +**Argilla Overview** +Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets for machine learning projects. It facilitates robust language model development through efficient data curation, leveraging both human and machine feedback throughout the MLOps cycle, from data labeling to model monitoring. -### Deployment -To deploy Argilla, install the ZenML integration: +**Use Cases** +Argilla is beneficial when labeling textual data in your ML workflow. It can be integrated into a ZenML stack for annotation at various stages. + +**Deployment** +To deploy Argilla, install the ZenML Argilla integration: ```shell zenml integration install argilla ``` -You can register your API key directly or as a secret for security. For secret registration: +You can register the API key directly or as a secret for security. To register as a secret: ```shell zenml secret create argilla_secrets --api_key="" @@ -5597,7 +5627,7 @@ Then, register the annotator: zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --port=6900 ``` -For a deployed instance, specify the instance URL without a trailing `/` and include headers for private Hugging Face Spaces: +For a deployed instance, specify the instance URL without a trailing `/`. 
If using a private Hugging Face Spaces instance, include the `headers` parameter with your token: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --headers='{"Authorization": "Bearer {[your_hugging_face_token]}"}' @@ -5611,23 +5641,23 @@ zenml stack update annotation -an zenml stack set annotation ``` -Verify with: +Verify the setup with: ```shell zenml annotator dataset list ``` -### Usage -Access data and annotations using the CLI: +**Usage** +Access data and annotations via the CLI: - List datasets: `zenml annotator dataset list` - Annotate a dataset: `zenml annotator dataset annotate ` -### Argilla Annotator Component -The Argilla annotator extends the `BaseAnnotator` class, implementing core methods for dataset registration and state management. It supports dataset registration, annotation export, and requires a running server for the web interface. +**Argilla Annotator Component** +The Argilla annotator inherits from `BaseAnnotator`, requiring core methods for dataset registration and retrieval. It supports dataset registration, annotation export, and starting the annotator daemon process. -### Argilla Annotator SDK -For SDK usage in Python: +**Argilla Annotator SDK** +To use the SDK in Python: ```python from zenml.client import Client @@ -5651,114 +5681,121 @@ For more details, refer to the [Argilla documentation](https://docs.argilla.io/e === File: docs/book/component-guide/annotators/pigeon.md === -### Pigeon Annotation Tool +# Pigeon: Data Annotation Tool + +Pigeon is an open-source annotation tool for labeling data within Jupyter notebooks, supporting: -**Overview**: -Pigeon is a lightweight, open-source annotation tool for labeling data directly within Jupyter notebooks. It supports: - Text Classification - Image Classification - Text Captioning -**Use Cases**: -Ideal for small to medium-sized datasets in ML workflows, Pigeon is useful for: -- Quick labeling tasks -- Iterative labeling during exploratory phases -- Collaborative labeling in Jupyter notebooks +## Use Cases +Pigeon is ideal for: +- Labeling small to medium datasets in ML workflows. +- Quick labeling tasks without a full annotation platform. +- Iterative and collaborative labeling during the exploratory phase. -**Deployment Steps**: -1. Install the ZenML Pigeon integration: +## Deployment Steps +1. **Install Pigeon Integration**: ```shell zenml integration install pigeon ``` -2. Register the Pigeon annotator, specifying the output directory: + +2. **Register the Annotator**: ```shell zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir" ``` -3. Update your stack to include the Pigeon annotator: + +3. 
**Update Your Stack**: ```shell zenml stack update --annotator pigeon ``` -**Usage**: -- **Text Classification**: - ```python - from zenml.client import Client +## Usage +Access the Pigeon annotator in your Jupyter notebook: - annotator = Client().active_stack.annotator - annotations = annotator.annotate( - data=['I love this movie', 'I was really disappointed by the book'], - options=['positive', 'negative'] - ) - ``` +### For Text Classification: +```python +from zenml.client import Client -- **Image Classification**: - ```python - from zenml.client import Client - from IPython.display import display, Image +annotator = Client().active_stack.annotator +annotations = annotator.annotate( + data=['I love this movie', 'I was really disappointed by the book'], + options=['positive', 'negative'] +) +``` - annotator = Client().active_stack.annotator - annotations = annotator.annotate( - data=['/path/to/image1.png', '/path/to/image2.png'], - options=['cat', 'dog'], - display_fn=lambda filename: display(Image(filename)) - ) - ``` +### For Image Classification: +```python +from zenml.client import Client +from IPython.display import display, Image + +annotator = Client().active_stack.annotator +annotations = annotator.annotate( + data=['/path/to/image1.png', '/path/to/image2.png'], + options=['cat', 'dog'], + display_fn=lambda filename: display(Image(filename)) +) +``` -**Annotation Management**: +### Dataset Management Commands: - List datasets: `zenml annotator dataset list` - Delete a dataset: `zenml annotator dataset delete ` - Get dataset stats: `zenml annotator dataset stats ` -**Output**: -Annotations are saved as JSON files in the specified output directory, with filenames corresponding to dataset names. +Annotations are saved as JSON files in the specified output directory, with filenames as dataset names. -**Acknowledgements**: -Pigeon was created by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License. +## Acknowledgements +Pigeon was developed by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License and has been updated for compatibility with recent `ipywidgets` versions. ================================================== === File: docs/book/component-guide/annotators/custom.md === -### Develop a Custom Annotator +# Develop a Custom Annotator -**Overview**: Custom annotators are stack components in ZenML that facilitate data annotation within your pipelines. Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts. +**Warning:** This is an older version of the ZenML documentation. For the latest version, visit [this up-to-date URL](https://docs.zenml.io). -**Functionality**: Annotators allow you to launch annotation tasks via CLI, configure datasets, and retrieve statistics on labeled tasks. +## Overview +Custom annotators are stack components in ZenML that facilitate data annotation within your pipelines. You can use the CLI to launch annotation, configure datasets, and retrieve statistics on labeled tasks. 
-**Current Status**: The base abstraction for annotators is under development, and extension is currently not supported. Users should refer to the list of existing feature stores for available annotators. +**Note:** The base abstraction for annotators is currently in development, and extension is not yet possible. For immediate use, refer to the list of available feature stores. -**Note**: Keep an eye out for updates on the base abstraction. +## Additional Resources +Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts. ================================================== === File: docs/book/component-guide/model-deployers/vllm.md === -### vLLM: Deploying Your LLM Locally +### Deploying LLM Locally with vLLM -**Overview**: vLLM is a library designed for efficient LLM inference and serving, offering features like continuous batching, quantization (GPTQ, AWQ, INT4, INT8, FP8), PagedAttention, speculative decoding, and chunked pre-fill. +**vLLM Overview** +[vLLM](https://docs.vllm.ai/en/latest/) is a library designed for efficient LLM inference and serving, offering features such as: +- High throughput with OpenAI-compatible API server +- Continuous request batching +- Quantization options: GPTQ, AWQ, INT4, INT8, FP8 +- Advanced features: PagedAttention, Speculative decoding, Chunked pre-fill -#### When to Use vLLM -- Deploy large language models with high throughput. -- Create an OpenAI-compatible API server. - -#### Deployment Steps -1. **Install vLLM Integration**: +**Deployment Steps** +1. **Install vLLM Integration** + Run the following command to install the vLLM integration for ZenML: ```bash zenml integration install vllm -y ``` -2. **Register vLLM Model Deployer**: +2. **Register the Model Deployer** + Register the vLLM model deployer with ZenML: ```bash zenml model-deployer register vllm_deployer --flavor=vllm ``` + This sets up a local vLLM server as a daemon process for serving models. -This sets up a local vLLM deployment server as a background daemon. +**Usage Example** +To see vLLM in action, refer to the [deployment pipeline example](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25). -#### Usage Example -To see a deployment pipeline in action, refer to the [deployment pipeline example](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25). - -**Deploy an LLM**: -Use the `vllm_model_deployer_step` to create a `VLLMDeploymentService`. Here’s a concise example: +**Deploying an LLM** +Use the `vllm_model_deployer_step` to deploy a model in your pipeline. Here’s a concise example: ```python from zenml import pipeline @@ -5771,82 +5808,87 @@ def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeplo return vllm_model_deployer_step(model=model, timeout=timeout) ``` -Refer to this [example](https://github.com/zenml-io/zenml-projects/tree/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer) for running a GPT-2 model with vLLM. 
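+
+Once deployed, the daemon can be queried like any OpenAI-compatible endpoint — a minimal sketch, assuming a local default address (read the real URL from the returned `VLLMDeploymentService`, e.g. its prediction URL) and the GPT-2 model from the linked example:
+```python
+from openai import OpenAI
+
+# Base URL is an assumption; take the actual address from the deployment service.
+client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
+completion = client.completions.create(model="gpt2", prompt="ZenML pipelines are", max_tokens=32)
+print(completion.choices[0].text)
+```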
+**Configuration Options** +Within the `VLLMDeploymentService`, you can configure: +- `model`: Hugging Face model name or path +- `tokenizer`: Hugging Face tokenizer name or path (defaults to model name) +- `served_model_name`: API model name (defaults to model name) +- `trust_remote_code`: Trust remote code from Hugging Face +- `tokenizer_mode`: Options: ['auto', 'slow', 'mistral'] +- `dtype`: Data type for model weights (options: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']) +- `revision`: Specific model version (branch name, tag, or commit ID; defaults to latest) -#### Configuration Options -Within `VLLMDeploymentService`, you can configure: -- `model`: Hugging Face model name/path. -- `tokenizer`: Hugging Face tokenizer name/path (defaults to model if unspecified). -- `served_model_name`: API model name (defaults to `model`). -- `trust_remote_code`: Trust remote code from Hugging Face. -- `tokenizer_mode`: Options: ['auto', 'slow', 'mistral']. -- `dtype`: Data type for weights/activations (options: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']). -- `revision`: Specific model version (branch/tag/commit ID; defaults to latest). +For further details, refer to the [vLLM GitHub repository](https://github.com/zenml-io/zenml-projects/tree/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer). ================================================== === File: docs/book/component-guide/model-deployers/huggingface.md === -### Summary of Hugging Face Inference Endpoints Documentation +### Summary of Hugging Face Inference Endpoints Deployment Documentation -**Overview:** -Hugging Face Inference Endpoints enable secure and scalable deployment of `transformers`, `sentence-transformers`, and `diffusers` models on managed infrastructure, eliminating the need for container and GPU management. +**Overview**: Hugging Face Inference Endpoints allow for secure, production-ready deployment of `transformers`, `sentence-transformers`, and `diffusers` models on managed infrastructure, eliminating the need for containers and GPUs. -**When to Use:** +**When to Use**: - Deploy models on dedicated, secure infrastructure. -- Prefer a fully-managed production solution for inference. -- Aim to create production-ready APIs with minimal MLOps involvement. -- Seek cost-effective solutions, paying only for raw compute resources. -- Require enterprise security with offline endpoints connected to Virtual Private Clouds (VPCs). +- Require a fully-managed production solution with minimal MLOps involvement. +- Need cost-effective deployment, paying only for raw compute resources. +- Prioritize enterprise security with offline endpoints connected to Virtual Private Clouds (VPCs). -**Deployment Steps:** -1. **Install Hugging Face ZenML Integration:** - ```bash - zenml integration install huggingface -y - ``` +**Installation**: +To deploy models, install the Hugging Face ZenML integration: +```bash +zenml integration install huggingface -y +``` -2. **Register the Model Deployer:** - ```bash - zenml model-deployer register --flavor=huggingface --token= --namespace= - ``` - - `token`: Hugging Face authentication token. - - `namespace`: Username or organization name for inference endpoints. +**Registering the Model Deployer**: +Register the Hugging Face model deployer: +```bash +zenml model-deployer register --flavor=huggingface --token= --namespace= +``` +- `token`: Hugging Face authentication token. +- `namespace`: User or organization name for inference endpoints. -3. 
**Update Stack:**
-   ```bash
-   zenml stack update --model-deployer=
-   ```
+**Updating the Stack**:
+Integrate the model deployer into your ZenML stack:
+```bash
+zenml stack update <stack-name> --model-deployer=<model-deployer-name>
+```

-**Using the Model Deployer:**
-- Deploy models using the pre-built `huggingface_model_deployer_step`.
-- Run batch inference with `HuggingFaceDeploymentService`.
+**Usage**:
+Two main methods to utilize the Hugging Face model deployer:
+1. **Deploying a Model**: Use the `huggingface_model_deployer_step` in your pipeline.
+2. **Running Inference**: Utilize `HuggingFaceDeploymentService` for batch inference.

-**Example Deployment Pipeline:**
+**Example of Model Deployment**:
```python
from zenml import pipeline
from zenml.config import DockerSettings
+from zenml.integrations.constants import HUGGINGFACE
from zenml.integrations.huggingface.services import HuggingFaceServiceConfig
from zenml.integrations.huggingface.steps import huggingface_model_deployer_step

-@pipeline(enable_cache=True)
+docker_settings = DockerSettings(required_integrations=[HUGGINGFACE])
+
+@pipeline(enable_cache=True, settings={"docker": docker_settings})
def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200):
    service_config = HuggingFaceServiceConfig(model_name=model_name)
    huggingface_model_deployer_step(service_config=service_config, timeout=timeout)
```

-**Configurable Attributes in `HuggingFaceServiceConfig`:**
+
+**Configurable Attributes**:
- `model_name`, `endpoint_name`, `repository`, `framework`, `accelerator`, `instance_size`, `instance_type`, `region`, `vendor`, `token`, `account_id`, `min_replica`, `max_replica`, `revision`, `task`, `custom_image`, `namespace`, `endpoint_type`.

-**Running Inference Example:**
+**Running Inference Example**:
```python
from zenml import step, pipeline
+from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer
from zenml.integrations.huggingface.services import HuggingFaceDeploymentService

-@step
-def prediction_service_loader(pipeline_name: str, pipeline_step_name: str) -> HuggingFaceDeploymentService:
+@step(enable_cache=False)
+def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> HuggingFaceDeploymentService:
    model_deployer = HuggingFaceModelDeployer.get_active_model_deployer()
-    existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
+    existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running)
    if not existing_services:
-        raise RuntimeError("No running service found.")
+        raise RuntimeError(f"No inference endpoint found for model '{model_name}'.")
    return existing_services[0]

@step
@@ -5854,66 +5896,60 @@ def predictor(service: HuggingFaceDeploymentService, data: str) -> str:
    return service.predict(data)

@pipeline
-def huggingface_deployment_inference_pipeline(pipeline_name: str):
-    model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name)
+def huggingface_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "huggingface_model_deployer_step"):
+    inference_data = ... 
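+    # NOTE: `inference_data` is deliberately elided in this summary; supply a
+    # real payload (e.g. a prompt string) before running the pipeline.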
+    model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
    predictions = predictor(model_deployment_service, inference_data)
```

-For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957).
+For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957).

==================================================

=== File: docs/book/component-guide/model-deployers/databricks.md ===

-### Summary: Deploying Models to Databricks Inference Endpoints
+### Summary: Deploying Models to Databricks Inference Endpoints with ZenML

**Overview:**
-Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs. It offers managed infrastructure, allowing users to deploy models without managing containers or GPUs.
+Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs, without managing containers or GPUs. It offers dedicated, autoscaling infrastructure managed by Databricks.

**When to Use Databricks Model Deployer:**
-- You are utilizing Databricks for data and ML workloads.
-- You prefer not to manage containers and GPUs.
-- You need dedicated, autoscaling infrastructure.
-- Enterprise security is a priority, requiring secure offline endpoints.
+- You are using Databricks for data and ML workloads.
+- You want to deploy models without container management.
+- You need enterprise security for offline endpoints.
- You aim to create production-ready APIs with minimal MLOps involvement.

**Installation:**
-To deploy models, install the Databricks ZenML integration:
-
+To use the Databricks Model Deployer, install the ZenML Databricks integration:
```bash
zenml integration install databricks -y
```

**Registering the Model Deployer:**
-Register the Databricks model deployer with ZenML:
-
+Register the Databricks model deployer:
```bash
zenml model-deployer register <name> --flavor=databricks --host=<host> --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}}
```
+*Note: Create a Databricks service account for permissions and generate `client_id` and `client_secret` for authentication.*

-**Service Account Recommendation:**
-Create a Databricks service account with necessary permissions for job creation and execution. Generate `client_id` and `client_secret` for authentication.
-
-**Updating Stack:**
-To use the model deployer in your stack:
-
+**Update Stack:**
+Update your ZenML stack to include the model deployer:
```bash
zenml stack update <stack-name> --model-deployer=<model-deployer-name>
```

**Configuration Options:**
-Within `DatabricksServiceConfig`, configure:
-- `model_name`: Name of the model in the Databricks Model Registry.
+In `DatabricksServiceConfig`, configure:
+- `model_name`: Name of the model in Databricks Model Registry.
- `model_version`: Version of the model.
-- `workload_size`: Size of the workload (`Small`, `Medium`, `Large`).
-- `scale_to_zero_enabled`: Enable/disable scale to zero feature.
-- `env_vars`: Environment variables for the model serving container. 
-- `workload_type`: Type of workload (`CPU`, `GPU_LARGE`, etc.).
-- `endpoint_secret_name`: Secret name for securing the endpoint.
+- `workload_size`: Size options: `Small`, `Medium`, `Large`.
+- `scale_to_zero_enabled`: Enable/disable scale to zero.
+- `env_vars`: Environment variables for the model serving container.
+- `workload_type`: Options: `CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, `MULTIGPU_MEDIUM`.
+- `endpoint_secret_name`: Secret for securing the endpoint.

**Running Inference:**
Example code to run inference on a provisioned endpoint:
-
```python
from zenml import step, pipeline
from zenml.integrations.databricks.model_deployers import DatabricksModelDeployer
@@ -5922,9 +5958,9 @@ from zenml.integrations.databricks.services import DatabricksDeploymentService

@step(enable_cache=False)
def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> DatabricksDeploymentService:
    model_deployer = DatabricksModelDeployer.get_active_model_deployer()
-    existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running)
+    existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running)
    if not existing_services:
-        raise RuntimeError(f"No running Databricks inference endpoint found.")
+        raise RuntimeError(f"No running inference endpoint found for '{model_name}'.")
    return existing_services[0]

@step
@@ -5934,38 +5970,39 @@ def predictor(service: DatabricksDeploymentService, data: str) -> str:
    return service.predict(data)

@pipeline
def databricks_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "databricks_model_deployer_step"):
    inference_data = ...
-    model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
+    model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
    predictions = predictor(model_deployment_service, inference_data)
```

-For more details on configuration and usage, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers).
+For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers).

==================================================

=== File: docs/book/component-guide/model-deployers/model-deployers.md ===

-### Model Deployers Overview
+# Model Deployers

-**Model Deployment** is the process of making machine learning models available for predictions on real-world data. There are two primary types of predictions:
-- **Batch Prediction**: Generates predictions for large datasets at once.
-- **Real-Time Prediction**: Generates predictions for individual data points.
+Model deployment involves making machine learning models available for predictions on real-world data. Predictions can be made in two ways: batch (for large datasets) and real-time (for single data points). Model deployers serve models either in real-time or batch mode through managed web services accessible via APIs (HTTP or GRPC).

-**Model Deployers** are components in the ZenML stack responsible for serving models in real-time or batch modes. They facilitate online serving via managed web services with API endpoints (HTTP or GRPC) and enable batch inference for large data sets, typically storing predictions in files or databases. 
+## Use Cases
+Model deployers are optional components in the ZenML stack, primarily used for deploying models in development or production environments (local, Kubernetes, or cloud). They facilitate continuous training and deployment pipelines.

-### Use Cases
-Model deployers are optional in the ZenML stack and can be used for deploying models in local or production environments (Kubernetes or cloud). They are primarily utilized for real-time inference, allowing the construction of pipelines for continuous training and deployment.
+## Architecture
+Model deployers are registered as part of a ZenML stack and standardize how trained models reach their serving environment, whether local, Kubernetes-based, or cloud-hosted.

-### Model Deployer Flavors
-ZenML supports various model deployers:
+### Available Model Deployer Flavors
+ZenML provides several model deployers:
- **MLflow**: Local deployment.
- **BentoML**: Local or production-grade deployment.
- **Seldon Core**: Kubernetes-based production deployment.
- **Hugging Face**: Deployment on Hugging Face Inference Endpoints.
-- **Databricks**: Deployment to Databricks Inference Endpoints.
-- **vLLM**: Local deployment of LLMs.
-- **Custom Implementation**: User-defined deployment solutions.
+- **Databricks**: Deployment on Databricks Inference Endpoints.
+- **vLLM**: Local LLM deployment.
+- **Custom Implementation**: Extendable for custom deployments.
+
+### Configuration Example
+Model deployers require specific attributes for configuration. Here’s how to configure MLflow and Seldon Core:

-**Configuration Example**:
```shell
# Configure MLflow model deployer
zenml model-deployer register mlflow --flavor=mlflow
@@ -5973,22 +6010,45 @@ zenml model-deployer register mlflow --flavor=mlflow
# Configure Seldon Core model deployer
zenml model-deployer register seldon --flavor=seldon \
--kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \
---base_url=http://example-url.com
+--base_url=http://<ingress-host>
```

### Role in ZenML Stack
-- **Seamless Deployment**: Facilitates model deployment across various environments, managing configuration attributes for interaction with serving tools.
-- **Lifecycle Management**: Offers methods for managing model servers, including starting, stopping, and deleting servers, as well as updating models.
+- **Seamless Deployment**: Deploy models to various environments while managing configuration attributes.
+- **Lifecycle Management**: Manage model servers (start, stop, delete, update) efficiently.

-**Core Methods**:
+### Core Methods
- `deploy_model`: Deploys a model and returns a Service object.
- `find_model_server`: Lists deployed model servers.
- `stop_model_server`, `start_model_server`, `delete_model_server`: Manage server states.

-**Service Object**: Represents a deployed model server, containing `config` (deployment attributes) and `status` (operational status).
+### Service Object
+Represents a deployed model server, containing:
+- `config`: Deployment configuration.
+- `status`: Operational status (e.g., prediction URL). 
+
+### Interaction Example
+To interact with the model deployer:
+
+```python
+from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer
+
+model_deployer = HuggingFaceModelDeployer.get_active_model_deployer()
+services = model_deployer.find_model_server(pipeline_name="LLM_pipeline", pipeline_step_name="huggingface_model_deployer_step", model_name="LLAMA-7B")
+
+if services:
+    if services[0].is_running:
+        print(f"Model server {services[0].config['model_name']} is running at {services[0].status['prediction_url']}")
+    else:
+        model_deployer.start_model_server(services[0])
+else:
+    service = model_deployer.deploy_model(pipeline_name="LLM_pipeline", pipeline_step_name="huggingface_model_deployer_step", model_name="LLAMA-7B", model_uri="s3://<bucket>/<path-to-model>", ...)
+    print(f"Model server {service.config['model_name']} is deployed at {service.status['prediction_url']}")
+```
+
+### CLI Interaction
+Use the CLI to manage model servers:

-### Interacting with Model Deployer
-After deployment, model deployers can be managed via CLI:
```shell
$ zenml model-deployer models list
$ zenml model-deployer models describe <uuid>
@@ -5996,7 +6056,9 @@ $ zenml model-deployer models get-url
$ zenml model-deployer models delete <uuid>
```

-In Python, retrieve the prediction URL:
+### Python Metadata Access
+Access the prediction URL via Python:
+
```python
from zenml.client import Client

@@ -6005,8 +6067,7 @@ deployer_step = pipeline_run.steps[""]
deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value
```

-### Continuous Deployment Workflow
-ZenML integrations provide standard pipeline steps for continuous model deployment, ensuring that model configurations are saved in the Artifact Store for future use.
+ZenML integrations also include standard pipeline steps for continuous model deployment, managing the deployment workflow and storing Service configurations in the Artifact Store for later use.

==================================================

=== File: docs/book/component-guide/model-deployers/bentoml.md ===

### Summary of Deploying Models Locally with BentoML

**BentoML Overview**
-BentoML is an open-source framework for serving machine learning models, enabling deployment in local, cloud, or Kubernetes environments. The BentoML Model Deployer allows for managing BentoML models on a local HTTP server.
+BentoML is an open-source framework for serving machine learning models, enabling deployment locally, in the cloud, or on Kubernetes. The BentoML Model Deployer, part of the ZenML stack, allows for the management of BentoML models on a local HTTP server.
+
+**Deployment Paths**
+1. **Local HTTP Server**: For development and production use.
+2. **Containerized Service**: For more complex production settings.

-**Deployment Options**
-- **Local Development**: Use the Model Deployer for easy local deployment.
-- **Production Use**: Transition to production-ready solutions with tools like Yatai or `bentoctl`, though `bentoctl` is deprecated.
+**Tools**
+- **Yatai**: For deploying Bentos to Kubernetes and cloud platforms.
+- **bentoctl**: Deprecated, previously used for cloud deployments.

-**When to Use**
-- Standardize model deployment within an organization.
-- Simplify initial deployment while preparing for production readiness.
+**When to Use BentoML Model Deployer**
+- To standardize model deployment within an organization.
+- For simple model deployment that can evolve into a production-ready solution.

-**Getting Started with Deployment**
-1. 
**Install Required Packages**:
+**Deployment Steps**
+1. **Install BentoML Integration**:
   ```bash
   zenml integration install bentoml -y
   ```
-
-2. **Register Model Deployer**:
+2. **Register the Model Deployer**:
   ```bash
   zenml model-deployer register bentoml_deployer --flavor=bentoml
   ```
+   This sets up a local HTTP server to serve models.

-3. **Run Local HTTP Server**: The integration provisions a local HTTP server to serve models.
-
-**Using the Model Deployer**
-1. **Create a BentoML Service**: Define how your model will be served.
-   ```python
-   import bentoml
-   from bentoml.validators import DType, Shape
-   import numpy as np
-   import torch
-
-   @bentoml.service(name=SERVICE_NAME)
-   class MNISTService:
-       def __init__(self):
-           self.model = bentoml.pytorch.load_model(MODEL_NAME)
-           self.model.eval()
+**Creating a BentoML Service**
+Define a BentoML service to serve your model. Example for a PyTorch model:
+```python
+from typing import Annotated
+
+import bentoml
+from bentoml.validators import DType, Shape
+import numpy as np
+import torch

-       @bentoml.api()
-       async def predict_ndarray(self, inp: Annotated[np.ndarray, DType("float32"), Shape((28, 28))]) -> np.ndarray:
-           inp = np.expand_dims(inp, (0, 1))
-           output_tensor = await self.model(torch.tensor(inp))
-           return to_numpy(output_tensor)
-   ```
+@bentoml.service(name="MNISTService")
+class MNISTService:
+    def __init__(self):
+        self.model = bentoml.pytorch.load_model("MODEL_NAME")
+        self.model.eval()

-2. **Build Your Own Bento**: Use the `bento_builder_step` or create a custom builder.
-   ```python
-   from zenml import step
+    @bentoml.api()
+    async def predict_ndarray(self, inp: Annotated[np.ndarray, DType("float32"), Shape((28, 28))]) -> np.ndarray:
+        inp = np.expand_dims(inp, (0, 1))
+        output_tensor = await self.model(torch.tensor(inp))
+        return to_numpy(output_tensor)  # to_numpy: user-defined helper converting a tensor to a NumPy array
+```

-   @step
-   def my_bento_builder(model) -> bento.Bento:
-       ...
-       bentoml.pytorch.save_model(model_name, model)
-       bento = bentos.build(service=service, models=[model_name], ...)
-       return bento
-   ```
+**Building a Bento**
+You can build a Bento manually or use the `bento_builder_step`. Example of a custom bento builder:
+```python
+import bentoml
+from bentoml import bentos
+from zenml import step

-3. **Bento Builder Step**: Integrate the built-in step within a ZenML pipeline.
-   ```python
-   from zenml import pipeline
-   from zenml.integrations.bentoml.steps import bento_builder_step
+@step
+def my_bento_builder(model) -> bentoml.Bento:
+    # `load_artifact_from_response` and `service` are assumed from the surrounding
+    # example; `service` points at your service definition, e.g. "service.py:MNISTService".
+    model = load_artifact_from_response(model)
+    bentoml.pytorch.save_model("model_name", model)
+    bento = bentos.build(service=service, models=["model_name"])
+    return bento
+```

-   @pipeline
-   def bento_builder_pipeline():
-       bento = bento_builder_step(model=model, ...)
-   ```
+**Using the Bento Builder Step**
+Integrate the built-in bento builder step in a ZenML pipeline:
+```python
+from zenml import pipeline
+from zenml.integrations.bentoml.steps import bento_builder_step

-4. **BentoML Deployer Step**: Deploy the bento bundle locally or as a container. 
-   ```python
-   from zenml.integrations.bentoml.steps import bentoml_model_deployer_step
+@pipeline
+def bento_builder_pipeline():
+    bento = bento_builder_step(model=model, model_name="pytorch_mnist", service="service.py:CLASS_NAME")
+```

-   @pipeline
-   def bento_deployer_pipeline():
-       deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001)
-   ```
+**Deploying the Bento**
+Use the `bentoml_model_deployer_step` to deploy the bento bundle:
+- **Local Deployment**:
+```python
+@pipeline
+def bento_deployer_pipeline():
+    deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001)
+```
+- **Containerized Deployment**:
+```python
+@pipeline
+def bento_deployer_pipeline():
+    deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", deployment_type="container", image="my-custom-image")
+```

**Predicting with Deployed Model**
Use the BentoML client to send requests to the deployed model:
@@ -6101,117 +6172,172 @@ def predictor(inference_data: Dict[str, List], service: BentoMLDeploymentService
```

**From Local to Cloud with `bentoctl`**
-`bentoctl` (deprecated) allows deployment to cloud services like AWS Lambda, Google Cloud Run, etc. For more details, refer to the [BentoML documentation](https://docs.bentoml.org).
+Though deprecated, `bentoctl` was used for deploying models to cloud environments like AWS, Google Cloud, and Azure. For more details, refer to the [BentoML documentation](https://docs.bentoml.org).

-**Conclusion**
-BentoML provides a streamlined approach to model deployment, from local testing to production environments, with flexibility for various deployment scenarios. For further details, consult the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-bentoml/#zenml.integrations.bentoml.model_deployers.bentoml_model_deployer).
+For further details, consult the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-bentoml/#zenml.integrations.bentoml.model_deployers.bentoml_model_deployer).

==================================================

=== File: docs/book/component-guide/model-deployers/custom.md ===

-### Summary: Developing a Custom Model Deployer in ZenML
+# Custom Model Deployer in ZenML
+
+ZenML provides a `Model Deployer` component for deploying and managing machine learning models, allowing interaction with various deployment tools, frameworks, or platforms. It serves as a registry for models and supports operations like listing, suspending, resuming, or deleting models.

-ZenML provides a `Model Deployer` stack component for deploying and managing machine learning models. This component interacts with various deployment tools and can serve as a model registry, allowing users to list, suspend, resume, or delete deployed models.
+## Base Abstraction

-#### Key Criteria for Model Deployer:
-1. **Efficient Deployment**: Must handle stack-related configurations for remote model serving tools.
-2. **Continuous Deployment**: Implements logic to update existing model servers rather than creating new ones for each version (via `deploy_model`).
-3. **BaseService Registry**: Acts as a registry for remote model servers, allowing re-creation of `BaseService` instances from persisted configurations.
+The model deployer is built on three main criteria:

-#### Interface Overview:
-The `BaseModelDeployer` class defines essential abstract methods for model deployment and lifecycle management:

1. 
**Deployment Efficiency**: Manages model deployment according to the serving infrastructure's requirements, holding necessary configuration attributes.
+2. **Continuous Deployment**: Implements logic to update existing model servers instead of creating new ones for each model version (via `deploy_model` method).
+3. **BaseService Registry**: Acts as a registry for remote model servers, enabling recreation of `BaseService` instances from persisted configurations, such as Kubernetes resource annotations.
+
+The model deployer also includes lifecycle management methods for remote servers: `stop_model_server`, `start_model_server`, and `delete_model_server`.
+
+### Interface

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional, Type
from uuid import UUID

+from zenml.enums import StackComponentType
from zenml.services import BaseService, ServiceConfig
from zenml.stack import StackComponent, StackComponentConfig, Flavor

+DEFAULT_TIMEOUT = 300
+
class BaseModelDeployerConfig(StackComponentConfig):
    """Base class for model deployer configurations."""

class BaseModelDeployer(StackComponent, ABC):
    @abstractmethod
-    def perform_deploy_model(self, id: UUID, config: ServiceConfig) -> BaseService:
+    def perform_deploy_model(self, id: UUID, config: ServiceConfig, timeout: int = DEFAULT_TIMEOUT) -> BaseService:
        """Deploy a model."""

+    @staticmethod
    @abstractmethod
-    def perform_stop_model(self, service: BaseService) -> BaseService:
+    def get_model_server_info(service: BaseService) -> Dict[str, Optional[str]]:
+        """Extract model server properties."""
+
+    @abstractmethod
+    def perform_stop_model(self, service: BaseService, timeout: int = DEFAULT_TIMEOUT, force: bool = False) -> BaseService:
        """Stop a model server."""

    @abstractmethod
-    def perform_start_model(self, service: BaseService) -> BaseService:
+    def perform_start_model(self, service: BaseService, timeout: int = DEFAULT_TIMEOUT) -> BaseService:
        """Start a model server."""

    @abstractmethod
-    def perform_delete_model(self, service: BaseService) -> None:
+    def perform_delete_model(self, service: BaseService, timeout: int = DEFAULT_TIMEOUT, force: bool = False) -> None:
        """Delete a model server."""
+
+class BaseModelDeployerFlavor(Flavor):
+    @property
+    @abstractmethod
+    def name(self):
+        """Flavor name."""
+
+    @property
+    def type(self) -> StackComponentType:
+        return StackComponentType.MODEL_DEPLOYER
+
+    @property
+    def config_class(self) -> Type[BaseModelDeployerConfig]:
+        return BaseModelDeployerConfig
+
+    @property
+    @abstractmethod
+    def implementation_class(self) -> Type[BaseModelDeployer]:
+        """Implementing class."""
```

-#### Building Custom Model Deployers:
-To create a custom model deployer:
+### Building Custom Model Deployers
+
+To create a custom model deployer flavor:
+
1. Inherit from `BaseModelDeployer` and implement the abstract methods.
2. Create a configuration class inheriting from `BaseModelDeployerConfig`.
-3. Combine both in a class inheriting from `BaseModelDeployerFlavor`, providing a `name`.
+3. Combine both by inheriting from `BaseModelDeployerFlavor`, providing a `name`.
4. Implement a service class inheriting from `BaseService`.

-Register the custom flavor using:
+Register the flavor via CLI:
+
```shell
zenml model-deployer flavor register <path.to.MyModelDeployerFlavor>
```

Example registration:
+
```shell
zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor
```

-#### Important Notes:
-- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. 
-- The `CustomModelDeployerFlavor` is used during flavor creation, while `CustomModelDeployerConfig` validates user inputs during registration.
-- The actual `CustomModelDeployer` is utilized when the component is in action, allowing separation of configuration and implementation.
+### Important Notes

-This structure allows for flexible and efficient model deployment management within ZenML workflows.
+- The custom flavor is utilized upon creation via CLI.
+- The configuration class is used during stack component registration for validation.
+- The implementation class is used when the component is in operation, allowing separation of configuration and implementation.
+
+Ensure ZenML is initialized at the root of your repository for proper flavor resolution. After registration, list available flavors:
+
+```shell
+zenml model-deployer flavor list
+```
+
+This separation of flavor, configuration, and implementation allows for flexible and efficient model deployment management within ZenML workflows.

==================================================

=== File: docs/book/component-guide/model-deployers/seldon.md ===

-### Summary of Deploying Models to Kubernetes with Seldon Core
+### Summary: Deploying Models to Kubernetes with Seldon Core

-**Overview:**
-Seldon Core is a production-grade model serving platform that enables deployment of machine learning models as REST/GRPC microservices. Key features include monitoring, logging, model explainers, outlier detection, and advanced deployment strategies like A/B testing and canary deployments. It simplifies serving models for real-time inference with built-in support for standard ML model packaging formats.
+**Seldon Core Overview**
+Seldon Core is a production-grade model serving platform that facilitates deploying machine learning models as REST/GRPC microservices. It includes features such as monitoring, logging, model explainers, outlier detectors, and advanced deployment strategies like A/B testing and canary deployments. It supports standard formats for packaging ML models, simplifying real-time inference.

-**Important Notes:**
-- **MacOS Support:** The Seldon Core model deployer is not supported on MacOS.
+**Usage Scenarios**
+Use Seldon Core when:
+- Deploying on advanced infrastructures like Kubernetes.
+- Managing model lifecycle with no downtime.
+- Requiring advanced API endpoints (REST/GRPC).
+- Needing complex deployment processes with custom transformers and routers.
+
+For simpler local deployments, consider using the MLflow Model Deployer.

-**When to Use Seldon Core:**
-- For deploying models on Kubernetes.
-- To manage model lifecycle with zero downtime.
-- For advanced API endpoints and deployment strategies.
-- When needing a customizable deployment process with advanced inference graphs.
+**Deployment Steps**
+1. **Install Seldon Core Integration**:
+   ```bash
+   zenml integration install seldon -y
+   ```

-**Deployment Prerequisites:**
-1. Access to a Kubernetes cluster (recommended to use a Service Connector).
-2. Seldon Core must be preinstalled in the target Kubernetes cluster.
-3. Models must be stored in persistent shared storage accessible from the Kubernetes cluster.
+2. **Prerequisites**:
+   - Access to a Kubernetes cluster (configured via `kubernetes_context`).
+   - Seldon Core pre-installed in the cluster.
+   - Models stored in persistent shared storage accessible from the Kubernetes cluster (e.g., AWS S3, GCS).

-**Installation Steps for Seldon Core on EKS:**
-1. Configure EKS cluster access:
+3. 
**Configuration Parameters**:
+   - `kubernetes_context`: Kubernetes context used to contact the remote Seldon Core installation.
+   - `kubernetes_namespace`: Namespace for Seldon Core deployment.
+   - `base_url`: Base URL for the Kubernetes ingress.
+
+**Installation Example on EKS**:
1. Configure EKS access:
   ```bash
   aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks
   ```

2. Install Istio:
   ```bash
   curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
   cd istio-1.5.0/
   bin/istioctl manifest apply --set profile=demo
   ```

3. Set up Istio gateway:
   ```bash
   curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f -
   ```

4. Install Seldon Core:
   ```bash
   helm install seldon-core seldon-core-operator \
@@ -6220,91 +6346,108 @@ Seldon Core is a production-grade model serving platform that enables deployment
       --set istio.enabled=true \
       --namespace seldon-system
   ```

5. Test installation:
   ```bash
   kubectl apply -f iris.yaml
   ```

-**Service Connector Setup:**
-To authenticate to a remote Kubernetes cluster, use Service Connectors for secure access management. Depending on your cloud provider, register the appropriate Service Connector:
+   Example `iris.yaml`:
+   ```yaml
+   apiVersion: machinelearning.seldon.io/v1
+   kind: SeldonDeployment
+   metadata:
+     name: iris-model
+     namespace: default
+   spec:
+     name: iris
+     predictors:
+       - graph:
+           implementation: SKLEARN_SERVER
+           modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris
+           name: classifier
+         name: default
+         replicas: 1
+   ```

+6. Extract prediction API URL:
+   ```bash
+   export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+   ```
+
+7. Send a test prediction request:
+   ```bash
+   curl -X POST http://$INGRESS_HOST/seldon/default/iris-model/api/v1.0/predictions \
+        -H 'Content-Type: application/json' \
+        -d '{ "data": { "ndarray": [[1,2,3,4]] } }'
+   ```
+
+**Service Connector Setup**
+To authenticate to a remote Kubernetes cluster, use Service Connectors for auto-configuration and security. Register a Service Connector:

```bash
-zenml service-connector register --type --resource-type kubernetes-cluster --resource-name --auto-configure
+zenml service-connector register <connector-name> --type aws --resource-type kubernetes-cluster --resource-name <cluster-name> --auto-configure
```

-**Model Deployer Registration:**
-Register the Seldon Core Model Deployer:
+**Model Deployer Registration**:
```bash
zenml model-deployer register <name> --flavor=seldon \
  --kubernetes_namespace=<namespace> \
  --base_url=http://$INGRESS_HOST
```

-**Configuration Options:**
-Within `SeldonDeploymentConfig`, configure:
-- `model_name`: Name of the model.
-- `replicas`: Number of replicas.
-- `implementation`: Type of Seldon server (e.g., `SKLEARN_SERVER`).
-- `parameters`: Optional parameters for deployment.
-- `resources`: Resource allocation (CPU and memory).
-- `serviceAccount`: Name of the Service Account for deployment.
-
-**Custom Code Deployment:**
-Define a custom prediction function and use `seldon_custom_model_deployer_step` to deploy:
-```python
-def custom_predict(model, request):
-    # Custom prediction logic
-    return predictions
+**Managing Authentication**
+Ensure the Seldon Core Model Deployer has access to the persistent storage where models are located. Explicit credentials may be necessary if Seldon Core runs in a different cloud or if implicit authentication is not enabled. 
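+For example, a minimal sketch of wiring this up for models stored in S3, assuming an AWS-backed artifact store and illustrative component names (the commands follow the same Service Connector pattern shown above):
+
+```bash
+# Register a connector that can read the S3 bucket holding the models
+zenml service-connector register s3-models --type aws --resource-type s3-bucket --auto-configure
+
+# Attach it to the artifact store so served models can resolve their URIs
+zenml artifact-store connect <artifact-store-name> --connector s3-models
+```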
-

+**Custom Code Deployment**
+Define a custom prediction function and use `seldon_custom_model_deployer_step` to deploy it:
+```python
+@pipeline
+def seldon_deployment_pipeline():
+    model = ...
+    seldon_custom_model_deployer_step(
+        model=model,
+        predict_function="<path.to.custom_predict_function>",
+        service_config=SeldonDeploymentConfig(
+            model_name="<model-name>",
+            replicas=1,
+            implementation="custom",
+            resources=SeldonResourceRequirements(
+                limits={"cpu": "200m", "memory": "250Mi"}
+            ),
+            serviceAccountName="kubernetes-service-account",
+        ),
-    ),
-)
+    )
```

-**Advanced Custom Code Deployment:**
-For more complex deployments, create a custom class and step. Refer to Seldon Core documentation for details on custom Python models.
-
-This summary provides a concise overview of deploying models using Seldon Core, including installation, configuration, and deployment strategies, ensuring that critical information is preserved.
+For more complex scenarios, such as serving custom Python model classes, create a custom class and step as described in the Seldon Core documentation on custom Python models.

==================================================

=== File: docs/book/component-guide/model-deployers/mlflow.md ===

-### Summary: Deploying Models Locally with MLflow
+### Summary of MLflow Model Deployer Documentation

-**MLflow Model Deployer Overview**
-- The MLflow Model Deployer is part of the ZenML integration for deploying and managing MLflow models on a local MLflow server.
-- Currently, it is intended for local development and is not production-ready.
-
-**Use Cases**
-- Ideal for local model deployment and real-time predictions without complex infrastructure (e.g., Kubernetes).
-- Not suitable for complex deployment scenarios; consider other Model Deployer flavors for such cases.
+**Overview:**
+The MLflow Model Deployer, part of ZenML's stack components, allows for local deployment and management of MLflow models on a local MLflow server. It is currently intended for development environments and is not yet production-ready.

-**Deployment Steps**
-1. **Install MLflow Integration:**
-   ```bash
-   zenml integration install mlflow -y
-   ```
-
-2. **Register MLflow Model Deployer:**
-   ```bash
-   zenml model-deployer register mlflow_deployer --flavor=mlflow
-   ```
+**When to Use:**
+- For easy local model deployment and real-time predictions.
+- When a simple deployment setup is preferred over complex environments like Kubernetes.

-**Deploying a Logged Model**
-- Ensure the model is logged in the MLflow experiment tracker.
-- Use the model URI from the artifact path or model registry.
+**Installation and Setup:**
+To use the MLflow Model Deployer, install the MLflow integration with:
+```bash
+zenml integration install mlflow -y
+```
+Register the model deployer:
+```bash
+zenml model-deployer register mlflow_deployer --flavor=mlflow
+```
+This sets up a local MLflow server to serve the latest model.

-**Example Code for Deployment:**
-1. **Known Model URI:**
+**Deployment Process:**
+1. 
**Deploying a Logged Model:**
+   Use the model URI from the MLflow experiment tracker:
   ```python
   from zenml import step, get_step_context
   from zenml.client import Client
@@ -6315,10 +6458,10 @@ This summary provides a concise overview of deploying models using Seldon Core,
       model_deployer = zenml_client.active_stack.model_deployer
       mlflow_deployment_config = MLFlowDeploymentConfig(
           name="mlflow-model-deployment-example",
-          description="Deploying a model using MLflow",
+          description="An example of deploying a model using the MLflow Model Deployer",
          pipeline_name=get_step_context().pipeline_name,
          pipeline_step_name=get_step_context().step_name,
-          model_uri="runs://model",
+          model_uri="runs:/<run-id>/model",  # or "models:/<model-name>/<model-version>"
          model_name="model",
          workers=1,
          mlserver=False,
@@ -6328,7 +6471,8 @@ This summary provides a concise overview of deploying models using Seldon Core,
      return service
  ```

-2. **Unknown Model URI:**
+2. **Deploying a Model Without Known URI:**
+   Retrieve the model URI from the current run:
   ```python
   from zenml import step, get_step_context
   from zenml.client import Client
@@ -6345,10 +6489,12 @@ This summary provides a concise overview of deploying models using Seldon Core,
      )
      experiment_tracker.configure_mlflow()
      client = MlflowClient()
-      model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path="model")
+      model_uri = artifact_utils.get_artifact_uri(
+          run_id=mlflow_run_id, artifact_path="model"
+      )
      mlflow_deployment_config = MLFlowDeploymentConfig(
          name="mlflow-model-deployment-example",
-          description="Deploying a model using MLflow",
+          description="An example of deploying a model using the MLflow Model Deployer",
          pipeline_name=get_step_context().pipeline_name,
          pipeline_step_name=get_step_context().step_name,
          model_uri=model_uri,
@@ -6361,11 +6507,15 @@ This summary provides a concise overview of deploying models using Seldon Core,
          return service
  ```

-**Configuration Options for `MLFlowDeploymentService`:**
-- `name`, `description`, `pipeline_name`, `pipeline_step_name`, `model_name`, `model_uri`, `workers`, `mlserver`, `timeout`.
+**Configuration Options:**
+- `name`, `description`, `pipeline_name`, `pipeline_step_name`: Metadata for the deployment.
+- `model_uri`: URI of the model (local path, run ID, or model name/version).
+- `workers`: Number of workers for the MLflow server.
+- `mlserver`: If True, starts the server as an MLServer instance.
+- `timeout`: Time to wait for the server to start/stop.

-**Running Inference on Deployed Model**
-1. **Load Prediction Service:**
+**Running Inference:**
+1. 
**Load a Deployed Service:**
   ```python
   import json
   import requests
@@ -6373,13 +6523,13 @@ This summary provides a concise overview of deploying models using Seldon Core,
   from zenml.integrations.mlflow.services import MLFlowDeploymentService

   @step(enable_cache=False)
-   def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, model_name: str = "model") -> None:
+   def prediction_service_loader(pipeline_name: str, pipeline_step_name: str) -> dict:
       model_deployer = MLFlowModelDeployer.get_active_model_deployer()
-       existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name)
+       existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
       if not existing_services:
           raise RuntimeError("No running service found.")
       service = existing_services[0]
-       payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}, "params": {"temperature": 0.5, "max_tokens": 20}})
+       payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}})
       response = requests.post(url=service.get_prediction_url(), data=payload, headers={"Content-Type": "application/json"})
       return response.json()
   ```
@@ -6403,87 +6553,73 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr

==================================================

=== File: docs/book/component-guide/container-registries/azure.md ===

-### Azure Container Registry Overview
-
-The Azure Container Registry (ACR) is a built-in container registry for ZenML, allowing storage of container images on Azure.
+### Azure Container Registry with ZenML

+**Overview**: The Azure Container Registry (ACR) is integrated with ZenML for storing container images. It's suitable for users with Azure access who need to pull or push container images.

-#### When to Use
-Utilize ACR if:
-- Your stack components require pulling or pushing container images.
-- You have access to Azure.

#### Deployment Steps
-1. Go to the [Azure portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry).
-2. Select a subscription, resource group, location, and registry name.
-3. Click `Review + Create`.
+1. **Create ACR**:
+   - Go to [Azure Portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry).
+   - Select subscription, resource group, location, and registry name, then click `Review + Create`.

-#### Finding the Registry URI
-The URI format is:
-```shell
-.azurecr.io
-```
-To find your registry URI:
-- Search for `container registries` in the Azure portal.
-- Use the registry name to construct the URI.
+2. **Find Registry URI**:
+   - Format: `<REGISTRY_NAME>.azurecr.io`
+   - Access via Azure Portal: Search for `container registries`, select your registry, and derive the URI.

-#### Usage
-Prerequisites:
-- Docker installed and running.
-- Registry URI obtained from the previous section.
+#### Usage Requirements
+- **Docker**: Must be installed and running.
+- **Registry URI**: Obtain from the previous section.

-Register the container registry:
+#### Registering the Container Registry
```shell
zenml container-registry register <NAME> --flavor=azure --uri=<REGISTRY_URI>
zenml stack update -c <NAME>
```

#### Authentication Methods
-Authentication is required to use ACR:
-
-**Local Authentication** (quick setup):
-- Uses local Docker client authentication.
-- Requires Azure CLI installation.
-- Log in to the registry:
-```shell
-az acr login --name=
-```
-*Note: Local authentication is not portable across environments.*
+1. 
**Local Authentication**:
+   - Quick setup using local Docker client credentials.
+   - Requires Azure CLI installed.
+   - Login command:
+     ```shell
+     az acr login --name=<registry-name>
+     ```
+   - **Note**: Not portable across environments.

-**Azure Service Connector** (recommended):
-- Provides auto-configuration and security.
-- Register a service connector:
-```sh
-zenml service-connector register --type azure -i
-```
-- Non-interactive registration using Service Principal:
-```sh
-zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type docker-registry --resource-id
-```
+2. **Azure Service Connector (Recommended)**:
+   - Provides auto-configuration and better security.
+   - Register using:
+     ```sh
+     zenml service-connector register --type azure -i
+     ```
+   - Non-interactive example:
+     ```sh
+     zenml service-connector register <name> --type azure --auth-method service-principal --tenant_id=<tenant-id> --client_id=<client-id> --client_secret=<client-secret> --resource-type docker-registry --resource-id <registry-uri>
+     ```

-#### Connecting ACR to Service Connector
-After setting up the service connector, register and connect the ACR:
+#### Connecting to ACR
+- Register and connect the Azure Container Registry:
```sh
zenml container-registry register <name> -f azure --uri=<registry-uri>
zenml container-registry connect <name> -i
```
-*Non-interactive connection:*
+- Non-interactive connection:
```sh
zenml container-registry connect <name> --connector <connector-name>
```

-#### Final Steps
-To use the Azure Container Registry in a ZenML Stack:
+#### Using ACR in ZenML Stack
```sh
zenml stack register <stack-name> -c <registry-name> ... --set
```

-For local Docker CLI access to the remote registry:
+#### Local Login for Docker CLI
+To temporarily authenticate your local Docker client:
```sh
zenml service-connector login <connector-name> --resource-type docker-registry --resource-id <registry-uri>
```

-### Additional Resources
-For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry).
+For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/github.md ===

### GitHub Container Registry Overview

-The GitHub Container Registry, integrated with ZenML, is used for storing container images.
+The GitHub Container Registry, integrated with ZenML, allows for the storage of container images. It is suitable for projects using GitHub, especially when components need to pull or push images.

#### When to Use
-- Required when components of your stack need to pull or push container images.
-- Ideal for projects hosted on GitHub.
+- If your stack components require image interactions.
+- If you are using GitHub for your projects.

#### Deployment
-- Automatically enabled upon creating a GitHub account.
+- The registry is enabled by default upon creating a GitHub account.

#### Registry URI Format
The URI follows this format:
```shell
ghcr.io/<USERNAME_OR_ORGANIZATION>
```
- `ghcr.io/my-username`
- `ghcr.io/my-organization`

-To find your registry URI, replace `` with your GitHub username or organization name.
-
#### Usage Requirements
-- **Docker**: Must be installed and running.
-- **Registry URI**: Refer to the URI format above. 
-- **Docker Client Configuration**: Follow [this guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry) to create a personal access token and log in.
+1. **Docker**: Must be installed and running.
+2. **Registry URI**: Obtainable using the format above.
+3. **Docker Client Configuration**: Follow [this guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry) to create a personal access token and authenticate.

#### Registering the Container Registry
-To register and update your active stack, use:
+To register and use the GitHub container registry in your active stack:
```shell
zenml container-registry register <NAME> \
    --flavor=github \
    --uri=<REGISTRY_URI>

zenml stack update -c <NAME>
```

-For additional details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry).
+For further details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/gcp.md ===

-### Google Cloud Container Registry Overview
+### Summary: Storing Container Images in GCP

-The Google Cloud Container Registry, integrated with ZenML, utilizes the Google Artifact Registry. **Important:** Google Container Registry is being phased out in favor of Artifact Registry. After May 15, 2024, Artifact Registry will host images for the gcr.io domain, and Container Registry will be shut down by March 18, 2025.
+#### Google Cloud Container Registry
+- GCP's container registry is integrated with ZenML and utilizes the Google Artifact Registry.
+- **Important Notice**: Google Container Registry is being replaced by Artifact Registry. Transition to Artifact Registry is required by May 15, 2024, with shutdown scheduled for March 18, 2025.

-### When to Use
+#### When to Use
Use the GCP container registry if:
-- Your stack components require pulling or pushing container images.
+- Your stack components need to pull/push container images.
- You have access to GCP.

-### Deployment Steps
-1. Enable Google Artifact Registry [here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com).
-2. Create a Docker repository [here](https://console.cloud.google.com/artifacts).
+#### Deployment Steps
+1. **Enable Google Artifact Registry**: [Enable here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com).
+2. **Create a Docker Repository**: [Create here](https://console.cloud.google.com/artifacts).

-### Registry URI Format
-The GCP container registry URI format is:
+#### Registry URI Format
+The URI format is:
```shell
<REGION>-docker.pkg.dev/<PROJECT_ID>/<REPOSITORY_NAME>
```
-**Examples:**
-```
+Examples:
+```shell
europe-west1-docker.pkg.dev/zenml/my-repo
southamerica-east1-docker.pkg.dev/zenml/zenml-test
```

-### Usage
-To use the GCP container registry:
-1. Ensure Docker is installed and running.
-2. Register the container registry:
+#### Using the GCP Container Registry
+Prerequisites:
+- Install and run Docker.
+- Obtain the registry URI.
+
+Register the container registry:
```shell
zenml container-registry register <NAME> --flavor=gcp --uri=<REGISTRY_URI>
zenml stack update -c <NAME>
```
-3. Set up authentication. 
-### Authentication Methods
-Authentication is required to use the GCP Container Registry. Two methods are available:
-
-#### Local Authentication
-- Quick setup using local Docker client credentials.
-- Requires GCP CLI installation.
-- Configure Docker for Google Container Registry:
-```shell
-gcloud auth configure-docker
-```
-- For Google Artifact Registry:
+#### Authentication Methods
+Authentication is necessary to use the GCP Container Registry:
+- **Local Authentication**: Quick setup using local Docker client credentials.
+  - Configure Docker for Google Container Registry:
+    ```shell
+    gcloud auth configure-docker
+    ```
+  - For Google Artifact Registry:
+    ```shell
+    gcloud auth configure-docker <REGION>-docker.pkg.dev
+    ```
+- **GCP Service Connector (Recommended)**: Provides better security and management for credentials. Register a connector:
```shell
-gcloud auth configure-docker -docker.pkg.dev
-```
-**Note:** Local authentication is not portable across environments.
-
-#### GCP Service Connector (Recommended)
-- Provides auto-configuration and security best practices.
-- Register a GCP Service Connector:
-```sh
zenml service-connector register --type gcp -i
```
-- Non-interactive registration:
-```sh
-zenml service-connector register --type gcp --resource-type docker-registry --auto-configure
-```
-
-### Connecting GCP Container Registry
-To connect the GCP Container Registry to a GCR registry:
-```sh
+Connect to a GCR registry:
+```shell
zenml container-registry connect <name> -i
```
-For non-interactive connection:
-```sh
-zenml container-registry connect --connector 
-```

-### Final Steps
+#### Final Steps
To use the GCP Container Registry in a ZenML Stack:
-```sh
+```shell
zenml stack register <stack-name> -c <registry-name> ... --set
```

-For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry).
+For detailed configuration attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/dockerhub.md ===

### DockerHub Container Registry in ZenML

-**Overview**: DockerHub is a built-in container registry in ZenML for storing container images.
+**Overview**: The DockerHub container registry is integrated with ZenML for storing container images.

**When to Use**:
-- If components of your stack need to pull/push images.
-- You have a DockerHub account.
+- If your stack components need to pull/push container images.
+- If you have a DockerHub account.

**Deployment**:
1. Create a DockerHub account.
-2. Images are published in a **public** repository by default. For a **private** repository, create one on DockerHub before running the pipeline.
+2. By default, images are published in a **public** repository. For a **private** repository, create one on DockerHub before running the pipeline.
3. The repository name depends on the orchestrator or step operator used in your stack. 
**Registry URI Format**:
-The DockerHub registry URI can be:
+The DockerHub registry URI can be in one of these formats:
```shell
<ACCOUNT_NAME>
# or
docker.io/<ACCOUNT_NAME>
```

**Examples**:
-- zenml
-- my-username
-- docker.io/zenml
-- docker.io/my-username
+- `zenml`
+- `my-username`
+- `docker.io/zenml`
+- `docker.io/my-username`

-**Finding the URI**:
-- Use your DockerHub account name to construct the URI.
+**Finding the Registry URI**:
+- Use your DockerHub account name to construct the URI using the format `docker.io/<ACCOUNT_NAME>`.

**Usage**:
1. Ensure Docker is installed and running.
-2. Register the container registry:
+2. Register the container registry in your active stack:
```shell
zenml container-registry register <NAME> \
    --flavor=dockerhub \
    --uri=<REGISTRY_URI>

-# Update the active stack
zenml stack update -c <NAME>
```
-3. Log in to DockerHub:
+3. Log in to DockerHub for image operations:
```shell
docker login
```
-Use your DockerHub account name and password or a personal access token.
+You will need your DockerHub account name and either your password or a personal access token.

-**Additional Information**: For configurable attributes of the DockerHub container registry, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry).
+For detailed configuration options, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/container-registries.md ===

### Container Registries

-Container registries are crucial for storing Docker images used in machine learning pipelines within remote MLOps stacks. They enable the containerization of pipeline code, ensuring a portable and isolated execution environment.
+Container registries are crucial for storing Docker images used in remote MLOps stacks, enabling the containerization of machine learning pipeline code for isolated execution.

#### When to Use
-A container registry is necessary when components of your stack need to push or pull container images. This applies to most of ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation for specific components to determine if a container registry is required.
+A container registry is necessary when components of your stack need to push or pull container images. This applies to most of ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation of the specific component to determine if a container registry is required.

#### Container Registry Flavors
ZenML supports several container registry flavors:
-- **Default Flavor**: Accepts any URI without validation, suitable for local or unsupported remote registries.
-- **Specific Flavors**: Validates the URI and performs checks to ensure push capabilities.
+- **Default Flavor**: Accepts any URI without validation; suitable for local or unsupported remote registries.
+- **Specific Flavors**: Validate URIs and ensure push capability.

-**Recommendation**: Use specific container registry flavors for additional URI validation.
+**Recommendation**: Use specific container registry flavors for additional URI validations. 
-| Container Registry | Flavor | Integration | URI Example |
-|--------------------|---------|-------------|-------------------------------------------|
-| DefaultContainerRegistry | `default` | _built-in_ | - |
-| DockerHubContainerRegistry | `dockerhub` | _built-in_ | docker.io/zenml |
-| GCPContainerRegistry | `gcp` | _built-in_ | gcr.io/zenml |
-| AzureContainerRegistry | `azure` | _built-in_ | zenml.azurecr.io |
-| GitHubContainerRegistry | `github` | _built-in_ | ghcr.io/zenml |
-| AWSContainerRegistry | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com |
+| Container Registry | Flavor | Integration | URI Example |
+|--------------------|---------|-------------|-----------------------------------------|
+| DefaultContainerRegistry | `default` | _built-in_ | - |
+| DockerHubContainerRegistry | `dockerhub` | _built-in_ | docker.io/zenml |
+| GCPContainerRegistry | `gcp` | _built-in_ | gcr.io/zenml |
+| AzureContainerRegistry | `azure` | _built-in_ | zenml.azurecr.io |
+| GitHubContainerRegistry | `github` | _built-in_ | ghcr.io/zenml |
+| AWSContainerRegistry | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com |

To view available container registry flavors, use the command:

@@ -6704,23 +6823,19 @@ zenml container-registry flavor list

==================================================

=== File: docs/book/component-guide/container-registries/aws.md ===

-### Amazon Elastic Container Registry (ECR) Overview
+### Summary: Storing Container Images in Amazon ECR

-Amazon ECR is a container registry integrated with ZenML's AWS support, allowing storage of container images.
-
-#### When to Use
-- If components of your stack require pulling or pushing container images.
-- If you have access to AWS ECR.
+**Amazon Elastic Container Registry (ECR)** is integrated with ZenML for storing container images. Use it when your stack components require pulling or pushing images and you have AWS ECR access.

#### Deployment Steps
-1. **Create an AWS Account**: ECR is activated automatically.
+1. **Create an AWS Account**: ECR is activated upon account creation.
2. **Create a Repository**:
   - Visit the [ECR website](https://console.aws.amazon.com/ecr).
-   - Select the correct region.
-   - Click on `Create repository` and choose a private repository.
+   - Select the region.
+   - Click on `Create repository` and create a private repository.

#### URI Format
-The URI format for AWS ECR is:
+The ECR URI format is:
```
<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
```
Example URIs:
```
123456789.dkr.ecr.us-east-1.amazonaws.com
```
To find your URI:
- Get your `Account ID` from the AWS console.
-- Select a region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints).
-
-#### Usage Requirements
-- Install ZenML AWS integration:
-  ```shell
-  zenml integration install aws
-  ```
-- Ensure Docker is installed and running.
-- Obtain the registry URI.
+- Choose the region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints).

-#### Registering the Container Registry
-To register and update the active stack:
-```shell
-zenml container-registry register --flavor=aws --uri=
-zenml stack update -c 
-```
+#### Using AWS Container Registry
+1. **Install ZenML AWS Integration**:
+   ```shell
+   zenml integration install aws
+   ```
+2. **Install Docker**.
+3. **Register the Container Registry**:
+   ```shell
+   zenml container-registry register <NAME> --flavor=aws --uri=<REGISTRY_URI>
+   zenml stack update -c <NAME>
+   ```

#### Authentication Methods
-Authentication is necessary to use AWS ECR:
-
-1. 
**Local Authentication** (quick setup):
-   - Requires AWS CLI installed and configured.
-   - Log in to the container registry:
-     ```shell
-     aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <REGISTRY_URI>
-     ```
-   - Note: This method is not portable across environments.
-
-2. **AWS Service Connector** (recommended):
-   - Provides auto-configuration and better security.
-   - Register a service connector:
-     ```sh
-     zenml service-connector register <CONNECTOR_NAME> --type aws -i
-     ```
-   - Non-interactive registration:
-     ```sh
-     zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type docker-registry --auto-configure
-     ```
+- **Local Authentication**: Quick setup using local AWS CLI credentials.
+  ```shell
+  aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <REGISTRY_URI>
+  ```
+- **AWS Service Connector** (recommended): Provides better security and management.
+  ```sh
+  zenml service-connector register <CONNECTOR_NAME> --type aws -i
+  ```
  Non-interactive version:
+  ```sh
+  zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type docker-registry --auto-configure
+  ```

-#### Connecting the Container Registry
-To connect the AWS container registry to the ECR:
+#### Connecting AWS Container Registry
+To connect the container registry to an ECR registry:
```sh
zenml container-registry connect <NAME> -i
-# or non-interactive
+```
+Non-interactive:
+```sh
zenml container-registry connect <NAME> --connector <CONNECTOR_NAME>
```

-#### Using the Container Registry in a ZenML Stack
-To register and set a stack:
+#### Final Steps
+Register a stack with the new container registry:
```sh
zenml stack register <STACK_NAME> -c <NAME> ... --set
```
-
-#### Local Login for Docker CLI
-To temporarily authenticate your local Docker client:
+For local Docker client access to the remote registry:
```sh
zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry
```

-#### Additional Resources
-For more details on configurable attributes of the AWS container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry).
+For detailed attributes of the AWS container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/custom.md ===

-### Develop a Custom Container Registry
+### Developing a Custom Container Registry in ZenML

#### Overview
-To create a custom container registry in ZenML, first familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md).
+This documentation outlines how to create a custom container registry in ZenML, emphasizing the base abstractions and implementation steps.

#### Base Abstraction
-ZenML's container registries have a basic abstraction with a configuration containing a `uri` and a non-abstract `prepare_image_push` method for validation.
+ZenML's container registries have a simple base structure, requiring only a `uri`. The `BaseContainerRegistry` class includes a non-abstract `prepare_image_push` method for validation.

+**Key Classes:**
+- **BaseContainerRegistryConfig**: Holds the configuration with a `uri`.
+- **BaseContainerRegistry**: Implements methods for preparing and pushing images.
+- **BaseContainerRegistryFlavor**: Defines the flavor structure, including properties for name, type, and associated classes.
+
+**Code Snippet:**
```python
from abc import abstractmethod
from typing import Type
@@ -6844,52 +6953,50 @@ class BaseContainerRegistryFlavor(Flavor):
        return BaseContainerRegistry
```

-#### Steps to Build a Custom Container Registry
-1. **Create a Class**: Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any pre-push checks.
-2. **Configuration Class**: Inherit from `BaseContainerRegistryConfig` for additional configuration.
-3. **Flavor Class**: Inherit from `BaseContainerRegistryFlavor` to combine implementation and configuration.
+#### Building Your Own Container Registry
+To create a custom flavor:
+1. Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any pre-push validations.
+2. Create a configuration class inheriting from `BaseContainerRegistryConfig`.
+3. Combine both by inheriting from `BaseContainerRegistryFlavor`.

-**Register the Flavor**:
+**Registering the Flavor:**
+Use the CLI to register your flavor:
```shell
zenml container-registry flavor register <path.to.MyContainerRegistryFlavor>
```
-
For example:
```shell
zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor
```

#### Important Notes
-- Initialize ZenML at the root of your repository to ensure proper resolution of the flavor class.
-- List available flavors with:
+- Ensure ZenML is initialized at the root of your repository for proper flavor resolution.
+- After registration, list available flavors with:
```shell
zenml container-registry flavor list
```

#### Workflow Integration
-- **CustomContainerRegistryFlavor** is used during flavor creation.
-- **CustomContainerRegistryConfig** validates values during registration.
-- **CustomContainerRegistry** is utilized when the component is in use, allowing separation of configuration from implementation.
+- **CustomContainerRegistryFlavor**: Used during flavor creation.
+- **CustomContainerRegistryConfig**: Validates user inputs during stack component registration.
+- **CustomContainerRegistry**: Engaged when the component is in use, allowing separation of configuration and implementation.

-This design enables registration of flavors and components without needing all dependencies installed locally.
+This structure supports registering flavors even if their dependencies are not installed locally.

==================================================

=== File: docs/book/component-guide/container-registries/default.md ===

-### Default Container Registry Overview
+### Summary: Storing Container Images Locally with ZenML
+
+**Default Container Registry**: ZenML provides a built-in Default container registry that supports various URI formats for local and remote registries not covered by other flavors.

-The Default Container Registry in ZenML is a built-in option that supports local and certain remote container registries. It is ideal for local setups or remote registries not covered by other flavors.
+#### When to Use
+- Use for a **local container registry** or unsupported remote registries.

#### Local Registry URI Format
-For a local container registry, use the following URI format:
-```shell
-localhost:<PORT>
-# Examples:
-localhost:5000
-localhost:8000
-localhost:9999
-```
+- Format: `localhost:<PORT>`
  - Examples: `localhost:5000`, `localhost:8000`, `localhost:9999`

#### Usage Steps
1. Ensure **Docker** is installed and running.
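As an illustrative sketch (the registry name and port are placeholders), registering a local registry with the default flavor and adding it to the active stack typically looks like:

```shell
zenml container-registry register local_registry --flavor=default --uri=localhost:5000
zenml stack update -c local_registry
```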
@@ -6903,41 +7010,45 @@ localhost:9999
```

#### Authentication Methods
-- **Local Authentication**: Quick setup using local Docker client credentials. Log in with:
+- **Private Registries**: Configure authentication; for local setups, use Local Authentication.
+- **Local Authentication**: Leverages Docker client credentials:
  ```shell
  docker login --username <USERNAME> --password-stdin <REGISTRY_URI>
  ```
-  *Note: This method is not portable across environments.*
+  *Note: Not portable across environments; use a Docker Service Connector for portability.*

-- **Docker Service Connector (Recommended)**: Use for private registries. Register with:
-  ```shell
+- **Docker Service Connector**: Recommended for accessing private registries. Register via:
+  ```sh
  zenml service-connector register <CONNECTOR_NAME> --type docker -i
  ```
-  Or non-interactively:
-  ```shell
+  Non-interactive:
+  ```sh
  zenml service-connector register <CONNECTOR_NAME> --type docker --username=<USERNAME> --password=<PASSWORD_OR_TOKEN>
  ```

-#### Connecting to a Registry
-After setting up a Docker Service Connector, register the container registry:
-```shell
-zenml container-registry register <NAME> -f default --uri=<REGISTRY_URI>
-zenml container-registry connect <NAME> -i
-```
-For non-interactive connection:
-```shell
-zenml container-registry connect <NAME> --connector <CONNECTOR_NAME>
-```
+#### Connecting to a Container Registry
+1. Register the container registry:
+   ```sh
+   zenml container-registry register <NAME> -f default --uri=<REGISTRY_URI>
+   ```
+2. Connect via Docker Service Connector:
+   ```sh
+   zenml container-registry connect <NAME> -i
+   ```
+   Non-interactive:
+   ```sh
+   zenml container-registry connect <NAME> --connector <CONNECTOR_NAME>
+   ```

-#### Using the Registry in a ZenML Stack
-To register and set a stack with the new container registry:
-```shell
-zenml stack register <STACK_NAME> -c <NAME> ... --set
-```
+#### Final Steps
+- Use the Default Container Registry in a ZenML Stack:
+  ```sh
+  zenml stack register <STACK_NAME> -c <NAME> ... --set
+  ```

#### Local Client Authentication
-If you need to interact with the remote registry via Docker CLI, temporarily authenticate using:
-```shell
+To temporarily authenticate your local Docker client:
```sh
zenml service-connector login <CONNECTOR_NAME>
```

@@ -6949,67 +7060,64 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr

### Local Image Builder Overview

-The Local Image Builder is a built-in feature of ZenML that utilizes the local Docker installation on your machine to create container images. It employs the official Docker Python library for building and pushing images, which retrieves authentication credentials from `$HOME/.docker/config.json`. To use a different configuration directory, set the `DOCKER_CONFIG` environment variable:
+The Local Image Builder in ZenML utilizes the local Docker installation on your machine to build container images. It employs the official Docker Python library, which accesses authentication credentials from `$HOME/.docker/config.json`. To specify a different configuration directory, set the `DOCKER_CONFIG` environment variable:

```shell
export DOCKER_CONFIG=/path/to/config_dir
```
+
Ensure the specified directory contains a `config.json` file.

### When to Use
Use the Local Image Builder if:
-- You can install and run Docker on your client machine.
-- You want to use remote components requiring containerization without additional infrastructure setup.
+- You can install and use Docker on your client machine.
+- You want to utilize remote components requiring containerization without additional infrastructure setup.

### Deployment and Usage
-The Local Image Builder is included with ZenML and requires no extra setup. 
To use it, ensure:
- Docker is installed and running.
-- The Docker client is authenticated to push to your desired container registry.
+- The Docker client is authenticated to push to your chosen container registry.

-To register the image builder and create a new stack, use the following commands:
+To register the image builder and create a new stack, use:
```shell
zenml image-builder register <NAME> --flavor=local
zenml stack register <STACK_NAME> -i <NAME> ... --set
```

-For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder).
+For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder).

==================================================

=== File: docs/book/component-guide/image-builders/gcp.md ===

-### Google Cloud Image Builder Overview
+### Google Cloud Image Builder with ZenML

-The Google Cloud Image Builder is a component of the ZenML `gcp` integration that utilizes [Google Cloud Build](https://cloud.google.com/build) for building container images.
+The Google Cloud Image Builder is a component of the ZenML `gcp` integration that utilizes [Google Cloud Build](https://cloud.google.com/build) for building container images.

#### When to Use
-Utilize the Google Cloud Image Builder if:
-- You cannot install or use [Docker](https://www.docker.com) locally.
-- You are already using Google Cloud Platform (GCP).
-- Your stack primarily consists of GCP components like [GCS Artifact Store](../artifact-stores/gcp.md) or [Vertex Orchestrator](../orchestrators/vertex.md).
+- If you cannot install or use [Docker](https://www.docker.com) locally.
+- If you are already using Google Cloud Platform (GCP).
+- If your stack includes other GCP components (e.g., [GCS Artifact Store](../artifact-stores/gcp.md), [Vertex Orchestrator](../orchestrators/vertex.md)).

-#### Deployment Steps
-1. **Enable Google Cloud Build APIs** on your GCP project.
-2. **Install ZenML GCP Integration**:
+#### Deployment Requirements
+1. Enable Google Cloud Build APIs in your GCP project.
+2. Install the ZenML `gcp` integration:
   ```shell
   zenml integration install gcp
   ```
-3. **Set Up Required Resources**:
+3. Set up:
   - A [GCP Artifact Store](../artifact-stores/gcp.md) for build context.
   - A [GCP container registry](../container-registries/gcp.md) for the built image.
-   - Optionally, specify GCP project ID and service account credentials.
+   - Optionally, specify a GCP project ID and service account with necessary permissions.

#### Configuration Options
-You can customize:
-- Docker image for build steps (default: `'gcr.io/cloud-builders/docker'`).
-- Network for the build container.
-- Build timeout settings.
+- Change the Docker image used for building (default: `'gcr.io/cloud-builders/docker'`).
+- Specify the Docker network and build timeout.

#### Registering the Image Builder
-To register and use the image builder:
```shell
zenml image-builder register <IMAGE_BUILDER_NAME> \
    --flavor=gcp \
@@ -7021,47 +7129,44 @@ zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
```

#### Authentication Methods
-Authentication is required to access GCP services:
-- **Local Authentication**: Quick setup using local Google Cloud CLI credentials.
-- **GCP Service Connector** (recommended): Provides auto-configuration and better security practices. 
Register with:
-  ```shell
-  zenml service-connector register <CONNECTOR_NAME> --type gcp -i
-  ```
-
-#### Connecting the Image Builder
-After setting up authentication, connect the image builder:
-```shell
-zenml image-builder connect <IMAGE_BUILDER_NAME> -i
-```
-For a non-interactive version:
-```shell
-zenml image-builder connect <IMAGE_BUILDER_NAME> --connector <CONNECTOR_NAME>
-```
+1. **Local Authentication**: Quick setup using local GCP CLI credentials.
+   - Requires Google Cloud CLI installation.
+   - Not portable across environments.

-#### Using GCP Credentials
-Alternatively, use a GCP Service Account Key:
-```shell
-zenml image-builder register <IMAGE_BUILDER_NAME> \
-    --flavor=gcp \
-    --project=<GCP_PROJECT_ID> \
-    --service_account_path=<PATH_TO_SERVICE_ACCOUNT_KEY> \
-    --cloud_builder_image=<BUILDER_IMAGE_NAME> \
-    --network=<DOCKER_NETWORK> \
-    --build_timeout=<BUILD_TIMEOUT_IN_SECONDS>
+2. **GCP Service Connector (Recommended)**:
+   - Provides auto-configuration and better security.
+   - Register using:
+     ```sh
+     zenml service-connector register <CONNECTOR_NAME> --type gcp -i
+     ```
+   - For auto-configuration:
+     ```sh
+     zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --resource-name <GCP_PROJECT_ID> --auto-configure
+     ```

-zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
-```
+3. **GCP Credentials**:
+   - Generate a GCP Service Account Key and reference it in the Image Builder configuration.
+   - Example registration:
+     ```shell
+     zenml image-builder register <IMAGE_BUILDER_NAME> \
+         --flavor=gcp \
+         --project=<GCP_PROJECT_ID> \
+         --service_account_path=<PATH_TO_SERVICE_ACCOUNT_KEY> \
+         --cloud_builder_image=<BUILDER_IMAGE_NAME> \
+         --network=<DOCKER_NETWORK> \
+         --build_timeout=<BUILD_TIMEOUT_IN_SECONDS>
+     ```

#### Caveats
-- Google Cloud Build uses a default network (`cloudbuild`) for builds, which allows access to GCP services.
-- To install private dependencies from GCP Artifact Registry, use a custom base image with `keyrings.google-artifactregistry-auth`:
-  ```dockerfile
-  FROM zenmldocker/zenml:latest
-  RUN pip install keyrings.google-artifactregistry-auth
-  ```
-- Specify the ZenML version in the base image tag for better version control.
+- Google Cloud Build uses a `cloudbuild` network for builds, allowing access to GCP services with Application Default Credentials (ADC).
+- For private dependencies in GCP Artifact Registry, use a custom base image with `keyrings.google-artifactregistry-auth`:
+  ```dockerfile
+  FROM zenmldocker/zenml:latest
+  RUN pip install keyrings.google-artifactregistry-auth
+  ```
+- Specify the ZenML version in the base image tag for consistency.

-This summary provides essential details for deploying and using the Google Cloud Image Builder within ZenML, including setup, authentication, and configuration options.
+This summary provides essential details for using the Google Cloud Image Builder with ZenML, including setup, registration, authentication, and caveats.

==================================================

=== File: docs/book/component-guide/image-builders/kaniko.md ===

### Kaniko Image Builder Overview

-The Kaniko image builder is part of the ZenML `kaniko` integration, utilizing [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images.
+The Kaniko image builder, part of ZenML's `kaniko` integration, utilizes [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images. It is ideal for users who cannot install Docker locally and are familiar with Kubernetes.

-#### When to Use Kaniko
-- If you cannot install or use [Docker](https://www.docker.com) on your client machine.
-- If you are familiar with or already using Kubernetes.
+### Prerequisites

-#### Deployment Requirements
-- A deployed Kubernetes cluster. 
-- ZenML `kaniko` integration installed:
-  ```shell
-  zenml integration install kaniko
-  ```
-- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) installed.
-- A remote container registry as part of your stack.
-- Optionally, configure the build context storage in the artifact store by setting `store_context_in_artifact_store=True`.
-- Optionally, adjust the pod running timeout with `pod_running_timeout`.
+1. **Kubernetes Cluster**: A deployed Kubernetes cluster is required.
+2. **ZenML Integration**: Install the Kaniko integration:
+   ```shell
+   zenml integration install kaniko
+   ```
+3. **kubectl**: Must be installed for Kubernetes management.
+4. **Container Registry**: A remote container registry must be part of your stack.

-#### Registering the Image Builder
-To register and use the Kaniko image builder in your active stack:
+### Configuration
+
+- **Build Context**: By default, Kaniko uses the Kubernetes API to transfer the build context. To store it in an artifact store instead, set `store_context_in_artifact_store=True` and ensure a remote artifact store is configured.
+- **Pod Timeout**: Optionally adjust the timeout for the Kaniko pod using `pod_running_timeout`.
+
+### Registering the Image Builder
+
+To register the Kaniko image builder:
```shell
zenml image-builder register <NAME> \
    --flavor=kaniko \
@@ -7097,51 +7203,53 @@ zenml image-builder register \

zenml stack register <STACK_NAME> -i <NAME> ... --set
```

-#### Authentication for Container Registry and Artifact Store
-The Kaniko build pod requires authentication to:
+### Authentication
+
+The Kaniko build pod must authenticate to:
- Push to the container registry.
-- Pull from a private parent image registry.
+- Pull from private registries for parent images.
- Read from the artifact store if configured.

-**Setup Instructions by Cloud Provider:**
+#### Cloud Provider Configurations

-- **AWS:**
-  - Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to EKS node IAM role.
-  - Register the image builder with necessary environment variables:
-    ```shell
-    zenml image-builder register <NAME> \
-        --flavor=kaniko \
-        --kubernetes_context=<KUBERNETES_CONTEXT> \
-        --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
-    ```
+1. **AWS**:
+   - Attach the `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to the EKS node IAM role.
+   - Register the image builder with the required environment variables:
+     ```shell
+     zenml image-builder register <NAME> \
+         --flavor=kaniko \
+         --kubernetes_context=<KUBERNETES_CONTEXT> \
+         --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
+     ```

-- **GCP:**
-  - Enable workload identity and create necessary service accounts.
-  - Grant permissions and register the image builder:
-    ```shell
-    zenml image-builder register <NAME> \
-        --flavor=kaniko \
-        --kubernetes_context=<KUBERNETES_CONTEXT> \
-        --kubernetes_namespace=<KANIKO_NAMESPACE> \
-        --service_account_name=<SERVICE_ACCOUNT_NAME>
-    ```
+2. **GCP**:
+   - Enable workload identity and configure service accounts. 
+   - Register the image builder with the correct namespace and service account:
+     ```shell
+     zenml image-builder register <NAME> \
+         --flavor=kaniko \
+         --kubernetes_context=<KUBERNETES_CONTEXT> \
+         --kubernetes_namespace=<KANIKO_NAMESPACE> \
+         --service_account_name=<SERVICE_ACCOUNT_NAME>
+     ```

-- **Azure:**
-  - Create a Kubernetes `configmap` for Docker config:
-    ```shell
-    kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }'
-    ```
-  - Register the image builder with the mounted configmap:
-    ```shell
-    zenml image-builder register <NAME> \
-        --flavor=kaniko \
-        --kubernetes_context=<KUBERNETES_CONTEXT> \
-        --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
-        --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
-    ```
+3. **Azure**:
+   - Create a Kubernetes `configmap` for Docker config:
+     ```shell
+     kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }'
+     ```
+   - Register the image builder to mount the configmap:
+     ```shell
+     zenml image-builder register <NAME> \
+         --flavor=kaniko \
+         --kubernetes_context=<KUBERNETES_CONTEXT> \
+         --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
+         --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
+     ```

-#### Additional Parameters for Kaniko Build
-You can pass additional parameters using the `executor_args` attribute:
+### Additional Parameters
+
+You can pass additional parameters to the Kaniko build using `executor_args`:
```shell
zenml image-builder register <NAME> \
    --flavor=kaniko \
@@ -7149,119 +7257,150 @@ zenml image-builder register \
    --flavor=kaniko \
-- If you are already using AWS services. -- If your stack includes AWS components like the [S3 Artifact Store](../artifact-stores/s3.md) or [SageMaker Orchestrator](../orchestrators/sagemaker.md). +## When to Use +Use the AWS Image Builder if: +- You cannot install or use [Docker](https://www.docker.com) locally. +- You are already using AWS. +- Your stack includes AWS components like the [S3 Artifact Store](../artifact-stores/s3.md) or [SageMaker Orchestrator](../orchestrators/sagemaker.md). -#### Deployment Options -- For a quick setup, use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). +## Deployment +For a quick setup, consider using the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). -#### Usage Requirements -1. Install the ZenML `aws` integration: +## Usage Requirements +To use the AWS Image Builder, ensure you have: +1. ZenML `aws` integration installed: ```shell zenml integration install aws ``` -2. Set up an [S3 Artifact Store](../artifact-stores/s3.md). -3. Optionally, create an [AWS container registry](../container-registries/aws.md). -4. Create an [AWS CodeBuild project](https://aws.amazon.com/codebuild) in the desired AWS region. Key configurations include: - - **Source Type**: `Amazon S3` - - **Bucket**: Same as the S3 Artifact Store - - **Environment Type**: `Linux Container` - - **Environment Image**: `bentolor/docker-dind-awscli` - - **Privileged Mode**: `false` +2. An [S3 Artifact Store](../artifact-stores/s3.md) for build context. +3. An optional [AWS container registry](../container-registries/aws.md) for pushing built images. +4. An [AWS CodeBuild project](https://aws.amazon.com/codebuild) set up in the appropriate region. + +### CodeBuild Project Configuration +Basic configuration values include: +- **Source Type**: `Amazon S3` +- **Bucket**: Same as the S3 Artifact Store. +- **Environment Image**: `bentolor/docker-dind-awscli` +- **Privileged Mode**: `false` + +Ensure the **Service Role** for CodeBuild has permissions for S3 and ECR (if applicable): +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:GetObject", + "ecr:BatchGetImage", + "ecr:PutImage" + ], + "Resource": "*" + } + ] +} +``` -5. Ensure the **Service Role** for CodeBuild has permissions for S3 and ECR (if applicable): - ```json - { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": ["s3:GetObject"], - "Resource": "arn:aws:s3:::/*" - }, - { - "Effect": "Allow", - "Action": ["ecr:*"], - "Resource": "arn:aws:ecr:::repository/" - }, - { - "Effect": "Allow", - "Action": ["ecr:GetAuthorizationToken"], - "Resource": "*" - } - ] - } - ``` +### Registering the Image Builder +To register the image builder: +```shell +zenml image-builder register \ + --flavor=aws \ + --code_build_project= -6. Optionally, register an [AWS Service Connector](../../how-to/infrastructure-deployment/auth-management/aws-service-connector.md) for build triggering. +zenml stack register -i ... --set +``` -#### Authentication Methods -- **Local Authentication**: Quick setup using local AWS CLI credentials (not portable). 
-- **AWS Service Connector (recommended)**: Use for better security and multi-component access. Register with:
-  ```shell
-  zenml service-connector register <CONNECTOR_NAME> --type aws -i
-  ```

-#### Registering the Image Builder
-To register the AWS Image Builder:
+## Authentication Methods
+Authentication is required to integrate the AWS Image Builder. Options include:

+### Implicit Authentication
+Uses local AWS CLI credentials. Quick but not portable across environments.

+### AWS Service Connector (Recommended)
+For better security and management, register an AWS Service Connector:
```shell
-zenml image-builder register <IMAGE_BUILDER_NAME> \
-    --flavor=aws \
-    --code_build_project=<CODEBUILD_PROJECT_NAME> \
-    --connector <CONNECTOR_NAME>
+zenml service-connector register <CONNECTOR_NAME> --type aws -i
+```
+Or auto-configure:
+```shell
+zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type aws-generic --auto-configure
+```
+
+Ensure the connector has permissions for CodeBuild:
+```json
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Action": [
+                "codebuild:StartBuild",
+                "codebuild:BatchGetBuilds"
+            ],
+            "Resource": "arn:aws:codebuild:<REGION>:<ACCOUNT_ID>:project/<CODEBUILD_PROJECT_NAME>"
+        }
+    ]
+}
```

-To connect an existing Image Builder to a Service Connector:
+After setting up the connector, register the image builder:
```shell
-zenml image-builder connect <IMAGE_BUILDER_NAME> --connector <CONNECTOR_NAME>
+zenml image-builder register <IMAGE_BUILDER_NAME> \
+    --flavor=aws \
+    --code_build_project=<CODEBUILD_PROJECT_NAME> \
+    --connector <CONNECTOR_NAME>
```

-#### Customizing AWS CodeBuild Builds
-You can customize builds by setting additional attributes during registration:
+## Customizing AWS CodeBuild Builds
+You can customize the image builder with:
- `build_image`: Default is `bentolor/docker-dind-awscli`.
- `compute_type`: Default is `BUILD_GENERAL1_SMALL`.
- `custom_env_vars`: Custom environment variables.
-- `implicit_container_registry_auth`: Use implicit (default) or explicit authentication for container registry access.
+- `implicit_container_registry_auth`: Controls the authentication method for the container registry.

-#### Final Steps
+For best practices, consider copying the default Docker image to your own registry to avoid rate limits.
+
+## Final Steps
Use the AWS Image Builder in a ZenML Stack:
```shell
zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
```

-This summary captures the essential details for using AWS Image Builder with ZenML, including setup, configuration, and customization options.
+This summary provides essential details for utilizing the AWS Image Builder with ZenML, including setup, registration, authentication, and customization options.

==================================================

=== File: docs/book/component-guide/image-builders/custom.md ===

-### Develop a Custom Image Builder
+# Custom Image Builder Development in ZenML

-#### Overview
-To create a custom image builder in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md).
+## Overview
+This documentation provides guidance on developing a custom image builder in ZenML, focusing on the `BaseImageBuilder` abstract class, which serves as the foundation for creating Docker image builders.

-#### Base Abstraction
-The `BaseImageBuilder` is an abstract class for building Docker images. It provides a basic interface:
+### Base Abstraction
+The `BaseImageBuilder` class must be subclassed to create custom image builders. It provides a basic interface for building Docker images.
```python
from abc import ABC, abstractmethod
@@ -7272,27 +7411,22 @@ from zenml.image_builders import BuildContext
from zenml.stack import StackComponent

class BaseImageBuilder(StackComponent, ABC):
+    """Base class for ZenML image builders."""
+
    @property
    def build_context_class(self) -> Type["BuildContext"]:
+        """Returns the build context class."""
        return BuildContext

    @abstractmethod
-    def build(
-        self,
-        image_name: str,
-        build_context: "BuildContext",
-        docker_build_options: Dict[str, Any],
-        container_registry: Optional["BaseContainerRegistry"] = None,
-    ) -> str:
+    def build(self, image_name: str, build_context: "BuildContext", docker_build_options: Dict[str, Any], container_registry: Optional["BaseContainerRegistry"] = None) -> str:
        """Builds a Docker image and optionally pushes it to a registry."""
```

-#### Steps to Create a Custom Image Builder
-1. **Subclass `BaseImageBuilder`:** Implement the abstract `build` method to create a Docker image.
-2. **Create Configuration Class:** Inherit from `BaseImageBuilderConfig` for any configuration parameters.
-3. **Combine Implementation and Configuration:** Inherit from `BaseImageBuilderFlavor` and define a `name` for the flavor.
-
-To register the flavor, use the CLI:
+### Steps to Create a Custom Image Builder
+1. **Subclass `BaseImageBuilder`:** Implement the `build` method to define how the Docker image is built.
+2. **Configuration Class:** Create a class inheriting from `BaseImageBuilderConfig` to add configuration parameters.
+3. **Flavor Registration:** Inherit from `BaseImageBuilderFlavor`, providing a `name` for the flavor. Register it via CLI:

```shell
zenml image-builder flavor register <path.to.MyImageBuilderFlavor>
@@ -7304,67 +7438,50 @@ Example registration:
zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor
```

-**Note:** Initialize ZenML at the root of your repository to avoid resolution issues.
-
-#### Listing Available Flavors
-To see your registered flavor:
+### Important Considerations
+- Ensure ZenML is initialized at the root of your repository for proper flavor resolution.
+- After registration, list available flavors:

```shell
zenml image-builder flavor list
```

-#### Important Considerations
-- The **CustomImageBuilderFlavor** is used during flavor creation.
-- The **CustomImageBuilderConfig** validates user input during registration.
-- The **CustomImageBuilder** is utilized when the component is in use.
-
-This design separates flavor configuration from implementation, allowing for registration even if dependencies are not installed locally.
-
-#### Custom Build Context
-If a different build context is needed, subclass `BuildContext` and override the `build_context_class` property in your image builder:
-
-```python
-class MyCustomBuildContext(BuildContext):
-    # Custom context implementation
+### Workflow Integration
+- **Flavor Class:** Used during flavor creation via CLI.
+- **Config Class:** Validates user input during stack component registration.
+- **Image Builder Class:** Engaged when the component is in use, allowing separation of flavor configuration from implementation.

-class MyImageBuilder(BaseImageBuilder):
-    @property
-    def build_context_class(self) -> Type["MyCustomBuildContext"]:
-        return MyCustomBuildContext
-```
+### Custom Build Context
+If a different build context is needed, subclass `BuildContext` and override the `build_context_class` property in your image builder.

-This allows for tailored build contexts beyond the default Docker context. 
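A minimal sketch of that override, with invented class names (`MyBuildContext`, `MyImageBuilder` are illustrative, not part of ZenML):

```python
from typing import Any, Dict, Optional, Type

from zenml.image_builders import BaseImageBuilder, BuildContext


class MyBuildContext(BuildContext):
    """Hypothetical build context with project-specific behavior."""


class MyImageBuilder(BaseImageBuilder):
    @property
    def build_context_class(self) -> Type["BuildContext"]:
        # Instruct ZenML to assemble builds with the custom context class.
        return MyBuildContext

    def build(
        self,
        image_name: str,
        build_context: "BuildContext",
        docker_build_options: Dict[str, Any],
        container_registry: Optional["BaseContainerRegistry"] = None,
    ) -> str:
        # The abstract `build` method must still be implemented for the
        # subclass to be usable; the actual build logic goes here.
        ...
```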
+This documentation provides a concise guide to creating and integrating custom image builders in ZenML, ensuring that critical technical details are preserved for effective implementation.

==================================================

=== File: docs/book/component-guide/image-builders/image-builders.md ===

-# Image Builders in ZenML
+### Image Builders in ZenML

-## Overview
-The image builder is crucial for building container images in remote MLOps environments, enabling the execution of machine-learning pipelines.
+**Overview**: The image builder is crucial for building container images in remote MLOps stacks, enabling the execution of machine-learning pipelines in various environments.

-## When to Use
-Use the image builder when components of your stack need to create container images, particularly for ZenML's remote orchestrators, step operators, and model deployers.
+**When to Use**: The image builder is necessary when components of your stack require container images, particularly for ZenML's remote orchestrators, step operators, and some model deployers.

-## Image Builder Flavors
-ZenML includes a `local` image builder by default, with additional options available through integrations:
+**Image Builder Flavors**: ZenML provides several image builder options:

-| Image Builder | Flavor | Integration | Notes |
-|-----------------------|----------|-------------|---------------------------------|
-| [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. |
-| [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. |
-| [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build. |
-| [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS CodeBuild. |
-| [Custom Implementation](custom.md) | _custom_ | | Create your own image builder. |
+| Image Builder | Flavor | Integration | Notes |
+|-----------------------|----------|-------------|-----------------------------------------|
+| [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. |
+| [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. |
+| [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build for images. |
+| [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS CodeBuild for images. |
+| [Custom Implementation](custom.md) | _custom_ | | Allows custom image builder implementations. |

To view available image builder flavors, use:
```shell
zenml image-builder flavor list
```

-## Usage
-Direct interaction with the image builder is unnecessary. The active ZenML stack automatically utilizes the appropriate image builder for any component requiring container image creation.
+**Usage**: You do not need to interact directly with the image builder in your code. As long as the desired image builder is part of your active ZenML stack, it will be automatically utilized by any component that requires container image building.

==================================================

=== File: docs/book/component-guide/experiment-trackers/wandb.md ===

# Weights & Biases Integration with ZenML

## Overview
-The Weights & Biases (W&B) Experiment Tracker is integrated with ZenML to log and visualize pipeline information such as models, parameters, and metrics. It is particularly useful for tracking experiments during the ML development phase and can also be used in production workflows. 
+The Weights & Biases (W&B) Experiment Tracker is a ZenML integration that allows logging and visualizing pipeline information (models, parameters, metrics) using the W&B platform. It is ideal for iterative ML experimentation and can also be used for automated pipeline runs.

## When to Use
-- If you are already using W&B for experiment tracking and want to integrate it with ZenML.
-- For visually navigating results from ZenML pipeline runs.
-- To share logged artifacts and metrics with teams or stakeholders.
+Use the W&B Experiment Tracker if:
+- You are already using W&B for tracking and want to integrate it into your ZenML MLOps workflows.
+- You prefer a visually interactive way to navigate results from ZenML pipelines.
+- You want to share logged artifacts and metrics with your team or stakeholders.

-Consider other experiment trackers if you are unfamiliar with W&B.
+Consider other Experiment Tracker flavors if you are unfamiliar with W&B.

## Deployment
-To use the W&B Experiment Tracker, install the integration:
+To deploy the W&B Experiment Tracker, install the integration:
```shell
zenml integration install wandb -y
```

-### Authentication
+### Authentication Methods
Configure the following credentials for W&B:
- `api_key`: Required API key for your W&B account.
-- `project_name`: Name of the project for your runs.
-- `entity`: Username or team name for sending runs.
+- `project_name`: Name of the project for the new run; defaults to "Uncategorized" if not specified.
+- `entity`: Username or team name for sending runs; defaults to your username if not specified.

-#### Authentication Methods
-1. **Basic Authentication** (not recommended for production):
-   ```shell
-   zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb --entity=<ENTITY> --project_name=<PROJECT_NAME> --api_key=<API_KEY>
-   zenml stack register custom_stack -e wandb_experiment_tracker ... --set
-   ```
+#### Basic Authentication (Not Recommended for Production)
+```shell
+zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \
+    --entity=<ENTITY> --project_name=<PROJECT_NAME> --api_key=<API_KEY>

-2. **ZenML Secret (Recommended)**:
-   Create a secret for secure storage:
-   ```shell
-   zenml secret create wandb_secret --entity=<ENTITY> --project_name=<PROJECT_NAME> --api_key=<API_KEY>
-   ```
-   Register the tracker using the secret:
-   ```shell
-   zenml experiment-tracker register wandb_tracker --flavor=wandb --entity={{wandb_secret.entity}} --project_name={{wandb_secret.project_name}} --api_key={{wandb_secret.api_key}}
-   ```
+zenml stack register custom_stack -e wandb_experiment_tracker ... --set
+```
+
+#### ZenML Secret (Recommended)
+Create a ZenML secret to store credentials securely:
+```shell
+zenml secret create wandb_secret \
+    --entity=<ENTITY> \
+    --project_name=<PROJECT_NAME> \
+    --api_key=<API_KEY>
+```
+Then register the tracker:
+```shell
+zenml experiment-tracker register wandb_tracker \
+    --flavor=wandb \
+    --entity={{wandb_secret.entity}} \
+    --project_name={{wandb_secret.project_name}} \
+    --api_key={{wandb_secret.api_key}}
+```

## Usage
-To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and use W&B logging:
+To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator:

```python
import wandb
from wandb.integration.keras import WandbCallback

@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>")
def tf_trainer(...):
+    ... 
model.fit(..., callbacks=[WandbCallback(log_evaluation=True)])
    wandb.log({"<METRIC_NAME>": metric})
```

-You can dynamically reference the active experiment tracker:
+Alternatively, use the Client to dynamically reference the active stack's experiment tracker:
+
```python
from zenml.client import Client
+
experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
@@ -7436,19 +7565,22 @@ def tf_trainer(...):
    ...
```

### W&B UI
-Each ZenML step using W&B creates a separate experiment run, accessible via the W&B UI. The URL for a specific run can be retrieved as follows:
+Each ZenML step using W&B creates a separate experiment run, viewable in the W&B UI. Access the tracking URL via the step's metadata:
+
```python
-last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
+from zenml.client import Client
+
+last_run = Client().get_pipeline("<PIPELINE_NAME>").last_run
tracking_url = last_run.get_step("<STEP_NAME>").run_metadata["experiment_tracker_url"].value
print(tracking_url)
```

### Additional Configuration
-You can customize the W&B experiment tracker with `WandbExperimentTrackerSettings`:
+You can customize the W&B experiment tracker by passing `WandbExperimentTrackerSettings`:
+
```python
from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings

-wandb_settings = WandbExperimentTrackerSettings(settings=wandb.Settings(...), tags=["some_tag"])
+wandb_settings = WandbExperimentTrackerSettings(tags=["some_tag"])

@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>", settings={"experiment_tracker": wandb_settings})
def my_step(...):
@@ -7456,11 +7588,11 @@ def my_step(...):
    ...
```

## Full Code Example
-A complete example demonstrating the integration:
+Here’s a complete example using the W&B integration with ZenML:
+
```python
from zenml import pipeline, step
from zenml.client import Client
-from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
import wandb
@@ -7470,12 +7602,13 @@ experiment_tracker = Client().active_stack.experiment_tracker

@step
def prepare_data():
    dataset = load_dataset("imdb")
-    return dataset["train"].shuffle(seed=42).select(range(1000)), dataset["test"].shuffle(seed=42).select(range(100))
+    ...
+    return train_dataset, eval_dataset

@step(experiment_tracker=experiment_tracker.name)
def train_model(train_dataset, eval_dataset):
    model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
-    training_args = TrainingArguments(output_dir="./results", num_train_epochs=3, per_device_train_batch_size=16)
+    training_args = TrainingArguments(...)
    trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    wandb.log({"final_evaluation": trainer.evaluate()})
@@ -7486,40 +7619,37 @@ def fine_tuning_pipeline():
    train_model(train_dataset, eval_dataset)

if __name__ == "__main__":
-    wandb_settings = WandbExperimentTrackerSettings(tags=["distilbert", "imdb"])
-    fine_tuning_pipeline.with_options(settings={"experiment_tracker": wandb_settings})()
+    fine_tuning_pipeline()
```

-For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor.WandbExperimentTrackerSettings). 
+For further details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor.WandbExperimentTrackerSettings). ================================================== === File: docs/book/component-guide/experiment-trackers/vertexai.md === -### Vertex AI Experiment Tracker Overview +# Vertex AI Experiment Tracker Overview -The Vertex AI Experiment Tracker is a component of the ZenML integration that utilizes the Vertex AI tracking service for logging and visualizing pipeline step information (models, parameters, metrics). It is ideal for iterative ML experimentation and can also be used for automated pipeline runs. +The Vertex AI Experiment Tracker is a component of the ZenML framework that integrates with Google Cloud's Vertex AI to log and visualize experiment data from machine learning pipelines. It is particularly useful during the iterative ML experimentation phase and can also track results from automated pipeline runs. -#### Use Cases -- **Continuity**: For users already employing Vertex AI for experiment tracking and transitioning to MLOps workflows with ZenML. -- **Visualization**: For those seeking an interactive way to navigate results from ZenML pipeline runs. -- **Integration**: For building ML workflows within the Google Cloud ecosystem. - -Consider other Experiment Tracker flavors if you are unfamiliar with Vertex AI or using different cloud providers. - -### Configuration +## Use Cases +- Continuation of experiment tracking within Vertex AI for existing projects transitioning to MLOps with ZenML. +- Enhanced visualization of ZenML pipeline results (models, metrics, datasets). +- Integration with Google Cloud services for those building ML workflows in the GCP ecosystem. -To set up the Vertex AI Experiment Tracker, install the GCP integration: +## Configuration +To use the Vertex AI Experiment Tracker, install the GCP integration: ```shell zenml integration install gcp -y ``` -#### Configuration Options -- **project**: GCP project name (optional, inferred if None). -- **location**: GCP location for experiments (defaults to us-central1). -- **staging_bucket**: GCS bucket for staging artifacts (format: gs://...). -- **service_account_path**: Path to the service account credential JSON file (optional). +### Configuration Options +Key configuration options for the tracker include: +- `project`: GCP project name (inferred if None). +- `location`: GCP location for experiments (default: us-central1). +- `staging_bucket`: GCS bucket for staging artifacts (format: gs://...). +- `service_account_path`: Path to service account JSON for authentication. Register the tracker: @@ -7534,49 +7664,46 @@ zenml stack register custom_stack -e vertex_experiment_tracker ... --set ``` ### Authentication Methods - -1. **Implicit Authentication**: Quick local setup using `gcloud auth login`. Not recommended for production. +1. **Implicit Authentication**: Quick local setup using `gcloud` CLI. Not recommended for production. -2. **GCP Service Connector (Recommended)**: Provides auto-configuration and security for long-lived credentials. Register using: +2. **GCP Service Connector** (recommended): Use for better security and configuration management. 
-```sh
-zenml service-connector register <CONNECTOR_NAME> --type gcp -i
-```
-
-After setting up the connector, register the tracker:
-
-```shell
-zenml experiment-tracker register <EXPERIMENT_TRACKER_NAME> \
-    --flavor=vertex \
-    --project=<GCP_PROJECT_ID> \
-    --location=<GCP_LOCATION> \
-    --staging_bucket=gs://<GCS_BUCKET_NAME>

-zenml experiment-tracker connect <EXPERIMENT_TRACKER_NAME> --connector <CONNECTOR_NAME>
-```

-3. **GCP Credentials**: Generate a GCP Service Account Key, store it in a ZenML Secret, and reference it:

-```shell
-zenml experiment-tracker register <EXPERIMENT_TRACKER_NAME> \
-    --flavor=vertex \
-    --project=<GCP_PROJECT_ID> \
-    --location=<GCP_LOCATION> \
-    --staging_bucket=gs://<GCS_BUCKET_NAME> \
-    --service_account_path=path/to/service_account_key.json
-```

-### Usage

-To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator. Use Vertex AI's logging capabilities as follows:

-#### Example 1: Logging Metrics

-Install the required library:

-```bash
-pip install google-cloud-aiplatform[autologging]
-```
+   Register a GCP Service Connector:
+
+   ```shell
+   zenml service-connector register <CONNECTOR_NAME> --type gcp -i
+   ```
+
+   Register the tracker with the connector:
+
+   ```shell
+   zenml experiment-tracker register <EXPERIMENT_TRACKER_NAME> \
+       --flavor=vertex \
+       --project=<GCP_PROJECT_ID> \
+       --location=<GCP_LOCATION> \
+       --staging_bucket=gs://<GCS_BUCKET_NAME>
+
+   zenml experiment-tracker connect <EXPERIMENT_TRACKER_NAME> --connector <CONNECTOR_NAME>
+   ```
+
+3. **GCP Credentials**: Use a service account key stored in a ZenML secret for authentication.
+
+   Register the tracker with the service account:
+
+   ```shell
+   zenml experiment-tracker register <EXPERIMENT_TRACKER_NAME> \
+       --flavor=vertex \
+       --project=<GCP_PROJECT_ID> \
+       --location=<GCP_LOCATION> \
+       --staging_bucket=gs://<GCS_BUCKET_NAME> \
+       --service_account_path=path/to/service_account_key.json
+   ```
+
+## Usage
+To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator.
+
+### Example 1: Logging Metrics
+Use built-in methods to log metrics:

```python
from google.cloud import aiplatform
@@ -7587,95 +7714,104 @@ class VertexAICallback(tf.keras.callbacks.Callback):
        aiplatform.log_time_series_metrics(metrics=metrics, step=epoch)

@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>")
-def train_model(config, x_train, y_train, x_val, y_val):
+def train_model(...):
    aiplatform.autolog()
-    model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=config.epochs, callbacks=[VertexAICallback()])
+    model.fit(..., callbacks=[VertexAICallback()])
    aiplatform.log_metrics(...)
    aiplatform.log_params(...)
```

-#### Example 2: Uploading TensorBoard Logs
-
-Install the required library:
-
-```bash
-pip install google-cloud-aiplatform[tensorboard]
-```
+### Example 2: Uploading TensorBoard Logs
+Integrate TensorBoard for detailed visualizations:

```python
@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>")
-def train_model(config, gcs_path, x_train, y_train, x_val, y_val):
-    tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=gcs_path, histogram_freq=1)
-    aiplatform.start_upload_tb_log(tensorboard_experiment_name="experiment_name", logdir=gcs_path)
-    model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=config.epochs, callbacks=[tensorboard_callback])
+def train_model(...):
+    aiplatform.start_upload_tb_log(...)
+    model.fit(...)
    aiplatform.end_upload_tb_log()
    aiplatform.log_metrics(...)
    aiplatform.log_params(...) 
```

-#### Experiment Tracker UI
-
-Retrieve the URL of the Vertex AI experiment linked to a ZenML run:
+### Dynamic Tracker Usage
+Instead of hardcoding the tracker name, use the ZenML Client:

```python
from zenml.client import Client

-client = Client()
-tracking_url = client.get_pipeline("<PIPELINE_NAME>").last_run.steps.get("<STEP_NAME>").run_metadata["experiment_tracker_url"].value
-print(tracking_url)
+experiment_tracker = Client().active_stack.experiment_tracker
+
+@step(experiment_tracker=experiment_tracker.name)
+def tf_trainer(...):
+    ...
```

-#### Additional Configuration
+### Accessing Experiment Tracker UI
+Retrieve the URL for the experiment linked to a ZenML run:
+
+```python
+from zenml.client import Client
+
+trainer_step = Client().get_pipeline("<PIPELINE_NAME>").last_run.steps["<STEP_NAME>"]
+tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
+print(tracking_url)
+```

-For further configuration, use `VertexExperimentTrackerSettings` to specify an experiment name or TensorBoard instance:
+### Additional Configuration
+Use `VertexExperimentTrackerSettings` for advanced configurations like specifying an experiment name or TensorBoard instance:

```python
from zenml.integrations.gcp.flavors.vertex_experiment_tracker_flavor import VertexExperimentTrackerSettings

-vertexai_settings = VertexExperimentTrackerSettings(experiment="<EXPERIMENT_NAME>", experiment_tensorboard="TENSORBOARD_RESOURCE_NAME")
+vertexai_settings = VertexExperimentTrackerSettings(
+    experiment="<EXPERIMENT_NAME>",
+    experiment_tensorboard="TENSORBOARD_RESOURCE_NAME"
+)

@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>", settings={"experiment_tracker": vertexai_settings})
-def step_one(data):
+def step_one(data: np.ndarray):
    ...
```

-For more details, refer to the ZenML documentation on runtime configuration.
+For further details on configuration, refer to the ZenML documentation.

==================================================

=== File: docs/book/component-guide/experiment-trackers/experiment-trackers.md ===

-### Experiment Trackers in ZenML
+# ZenML Experiment Trackers
+
+## Overview
+Experiment Trackers in ZenML allow users to log detailed information about ML experiments, including models, datasets, and metrics. Each pipeline run is treated as an experiment, and results are stored through Experiment Tracker stack components, linking pipeline runs to experiments.

-**Overview**: Experiment trackers log detailed information about ML experiments, including models, datasets, metrics, and parameters, enabling users to visualize and compare results across runs. In ZenML, each pipeline run is treated as an experiment, with results stored through Experiment Tracker components.
+### Key Concepts
+- **Experiment Tracker**: An optional stack component registered in your ZenML stack.
+- **Artifact Store**: Mandatory component that records artifact information circulated through pipelines.

-**Key Points**:
-- **Integration**: Experiment Trackers are optional stack components that must be registered as part of a ZenML stack. ZenML also tracks artifacts via the mandatory Artifact Store.
-- **Usability**: While ZenML records artifacts programmatically, Experiment Trackers provide user-friendly UIs for browsing and visualizing logged information, making them ideal for enhancing ZenML's capabilities.
-
-**Architecture**: Experiment Trackers fit into the ZenML stack architecture, allowing for integration with various tracking tools.
+### When to Use
+Experiment Trackers enhance usability by providing a visual interface for browsing and visualizing logged information, making them preferable when you need intuitive interaction with experiment data. 
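As a generic sketch of the pattern that the per-flavor pages make concrete (the tracker name is a placeholder), a tracker is attached to a step like this:

```python
from zenml import step


@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>")
def train_model() -> None:
    # Explicit logging calls for the configured tracker flavor go here,
    # e.g. wandb.log(...) or neptune_run["metrics/..."] = value.
    ...
```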
+
+### Architecture
+Experiment Trackers integrate into the ZenML stack, as shown in the architecture diagram.

-**Available Experiment Tracker Flavors**:
+### Available Flavors
+ZenML supports various Experiment Tracker integrations:

| Tracker | Flavor | Integration | Notes |
|---------|--------|-------------|-------|
| [Comet](comet.md) | `comet` | `comet` | Adds Comet tracking capabilities |
| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Adds MLflow tracking capabilities |
| [Neptune](neptune.md) | `neptune` | `neptune` | Adds Neptune tracking capabilities |
| [Weights & Biases](wandb.md) | `wandb` | `wandb` | Adds Weights & Biases tracking capabilities |
-| [Custom Implementation](custom.md) | _custom_ | | Custom tracking solutions |
+| [Custom Implementation](custom.md) | _custom_ | | Custom tracking options |

-**Command to List Flavors**:
+To list available flavors, use:
```shell
zenml experiment-tracker flavor list
```

-**Usage Steps**:
-1. Configure and add an Experiment Tracker to your ZenML stack.
-2. Enable the tracker for specific pipeline steps using decorators.
-3. Log information (models, metrics, data) explicitly within the steps.
-4. Access the Experiment Tracker UI to visualize logged information.
-
-**Accessing Experiment Tracker UI**:
+### Usage Steps
+1. **Configure and Add**: Add an Experiment Tracker to your ZenML stack.
+2. **Enable for Steps**: Decorate individual pipeline steps to enable the Experiment Tracker.
+3. **Log Information**: Explicitly log models, metrics, and data within your steps.
+4. **Access UI**: Retrieve the Experiment Tracker UI URL for a specific step:

```python
from zenml.client import Client

@@ -7684,58 +7820,64 @@ step = pipeline_run.steps["<STEP_NAME>"]
experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value
```

-**Note**: If a ZenML pipeline step fails, the corresponding experiment run will be marked as failed automatically.
-
-For detailed usage of specific Experiment Tracker flavors, refer to the respective documentation.
+### Notes
+- Experiment trackers automatically mark runs as failed if the corresponding ZenML pipeline step fails.
+- Refer to the specific documentation for each Experiment Tracker flavor for detailed usage instructions.

==================================================

=== File: docs/book/component-guide/experiment-trackers/neptune.md ===

-### Neptune Experiment Tracker Overview
+# Neptune Experiment Tracker with ZenML

-The Neptune Experiment Tracker, integrated with ZenML, utilizes [neptune.ai](https://neptune.ai/product/experiment-tracking) for logging and visualizing pipeline information (models, parameters, metrics). It's beneficial for:
+The Neptune Experiment Tracker integrates with [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize pipeline step information (models, parameters, metrics) during ML experimentation.

-- Continuity in tracking experiment results with neptune.ai while adopting MLOps best practices in ZenML.
-- Enhanced visualization of results from ZenML pipeline runs.
-- Sharing logged artifacts and metrics with teams or stakeholders.
+## Use Cases
+Utilize the Neptune Experiment Tracker if:
+- You are already using neptune.ai and want to integrate it with ZenML.
+- You prefer a visual interface for navigating ZenML pipeline results.
+- You wish to share logged artifacts and metrics with your team or stakeholders. 
-
+If you are new to neptune.ai, consider using another Experiment Tracker flavor.

-### Deployment
+## Deployment
To deploy the Neptune Experiment Tracker, install the integration:

```shell
zenml integration install neptune -y
```

-**Authentication Methods:**
+### Authentication
+Configure the following credentials:
+- `api_token`: Your Neptune API key (create a free account [here](https://app.neptune.ai/register)).
+- `project`: The project name in the format "workspace-name/project-name".

-1. **ZenML Secret (Recommended)**: Store credentials securely.
-   ```shell
-   zenml secret create neptune_secret --api_token=
-   ```
+#### Recommended: ZenML Secret
+Store credentials securely using a ZenML secret:

-   Register the tracker:
-   ```shell
-   zenml experiment-tracker register neptune_experiment_tracker \
-       --flavor=neptune \
-       --project= \
-       --api_token={{neptune_secret.api_token}}
-   zenml stack register neptune_stack -e neptune_experiment_tracker ... --set
-   ```
+```shell
+zenml secret create neptune_secret --api_token=
+```

-2. **Basic Authentication**: Directly configure credentials (not recommended for production).
-   ```shell
-   zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \
-       --project= --api_token=
-   zenml stack register neptune_stack -e neptune_experiment_tracker ... --set
-   ```
+Then, register the experiment tracker:

-### Usage
+```shell
+zenml experiment-tracker register neptune_experiment_tracker \
+    --flavor=neptune \
+    --project= \
+    --api_token={{neptune_secret.api_token}}
+```
+
+#### Basic Authentication (Not Recommended)
+Directly configure credentials (not secure):
+
+```shell
+zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \
+    --project= --api_token=
+```

-To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and fetch the Neptune run object:
+## Usage
+To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and fetch the Neptune run object:

```python
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run
@@ -7752,64 +7894,71 @@ def train_model() -> SVC:
    neptune_run = get_neptune_run()
    neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0}
+    return model
```

-#### Logging Metadata
-
+### Logging Metadata
Use `get_step_context` to log ZenML metadata:

```python
from zenml import get_step_context

@step(experiment_tracker="neptune_tracker")
def my_step():
    neptune_run = get_neptune_run()
    context = get_step_context()
+    neptune_run["pipeline_metadata"] = context.pipeline_run.get_metadata().dict()
+    neptune_run[f"step_metadata/{context.step_name}"] = context.step_run.get_metadata().dict()
```

-#### Adding Tags
-
-Utilize `NeptuneExperimentTrackerSettings` to add tags:
+### Adding Tags
+Use `NeptuneExperimentTrackerSettings` to add tags:

```python
from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings

neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"})
-```
-
-### Neptune UI
-Neptune provides a web-based UI for tracking experiments. Each pipeline run is logged as a separate experiment, accessible via the console or the metadata tab of any step using the tracker.

+@step(experiment_tracker="", settings={"experiment_tracker": neptune_settings})
+def my_step():
+    ...
+```

-### Full Code Example
+## Neptune UI
+Access a web-based UI to view tracked experiments. The URL for the Neptune run is printed in the console when a run is initialized.
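+
+If you need the URL after the fact, the generic step-metadata lookup from the Experiment Trackers overview should also work here (a sketch; it assumes the Neptune flavor populates the shared `experiment_tracker_url` metadata key like the other flavors):
+
+```python
+from zenml.client import Client
+
+# Fetch the tracker URL recorded for a step of a finished run.
+pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_ID>")
+step = pipeline_run.steps["<STEP_NAME>"]
+print(step.run_metadata["experiment_tracker_url"].value)
+```
+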
Each pipeline run is logged as a separate experiment in Neptune.

-Here’s a complete example demonstrating the integration:
+## Full Code Example
+Here’s a complete example of using the Neptune integration with ZenML:

```python
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run
-from zenml import pipeline, step
+from zenml import step, pipeline
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
+from sklearn.datasets import load_iris
-from zenml.client import Client

-@step(experiment_tracker=Client().active_stack.experiment_tracker.name)
+@step(experiment_tracker="neptune_experiment_tracker")
def train_model() -> SVC:
    iris = load_iris()
+    X_train, _, y_train, _ = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
    model = SVC(kernel="rbf", C=1.0)
-    model.fit(iris.data, iris.target)
+    model.fit(X_train, y_train)
+
    neptune_run = get_neptune_run()
    neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0}
+
    return model

-@step(experiment_tracker=Client().active_stack.experiment_tracker.name)
+@step(experiment_tracker="neptune_experiment_tracker")
def evaluate_model(model: SVC):
    iris = load_iris()
    _, X_test, _, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
    accuracy = model.score(X_test, y_test)
+
    neptune_run = get_neptune_run()
    neptune_run["metrics/accuracy"] = accuracy
+
    return accuracy

@pipeline
@@ -7821,26 +7970,32 @@ if __name__ == "__main__":
    ml_pipeline()
```

-### Further Reading
-
-For more details, refer to [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/).
+## Further Reading
+For more details, check [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/).

==================================================

=== File: docs/book/component-guide/experiment-trackers/custom.md ===

-### Develop a Custom Experiment Tracker
+# Custom Experiment Tracker Development in ZenML

-#### Overview
-To create a custom experiment tracker in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for Experiment Trackers is under development, and extensions are currently not recommended.
+## Overview
+This documentation outlines the process for developing a custom experiment tracker in ZenML. For the latest updates, refer to the [current ZenML documentation](https://docs.zenml.io).

-#### Steps to Build a Custom Experiment Tracker
-1. **Create a Tracker Class**: Inherit from `BaseExperimentTracker` and implement the abstract methods.
-2. **Configuration Class**: If needed, create a class inheriting from `BaseExperimentTrackerConfig` to define configuration parameters.
-3. **Combine Implementation and Configuration**: Inherit from `BaseExperimentTrackerFlavor`.
+## Prerequisites
+Familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) in ZenML.
+
+## Important Notes
+- The base abstraction for the Experiment Tracker is under development. Avoid extending it until it is finalized.
+- You can use existing flavors or implement your own, but be prepared for potential refactoring later.

-#### Registering the Tracker
-Use the CLI to register your custom flavor with the following command, ensuring to use dot notation for the flavor class:
+## Steps to Create a Custom Experiment Tracker
+1. 
**Create a Tracker Class**: Inherit from `BaseExperimentTracker` and implement the required abstract methods. +2. **Configuration Class**: Inherit from `BaseExperimentTrackerConfig` to define configuration parameters. +3. **Combine Classes**: Inherit from `BaseExperimentTrackerFlavor` to integrate the implementation and configuration. + +### Registration +Register your custom flavor using the CLI with the following command, ensuring to use dot notation: ```shell zenml experiment-tracker flavor register @@ -7852,153 +8007,169 @@ For example, if your flavor class is in `flavors/my_flavor.py`: zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor ``` -#### Best Practices -- Initialize ZenML at the root of your repository using `zenml init` to avoid resolution issues. -- After registration, verify your flavor is available with: +### Best Practices +- Initialize ZenML at the root of your repository using `zenml init` to ensure proper resolution of the flavor class. + +### Verification +Check the list of available flavors: ```shell zenml experiment-tracker flavor list ``` -#### Important Notes -- The **CustomExperimentTrackerFlavor** class is used during flavor creation. -- The **CustomExperimentTrackerConfig** class validates user input during stack component registration. -- The **CustomExperimentTracker** is utilized when the component is in use, allowing separation of configuration from implementation. +## Class Interaction +- **CustomExperimentTrackerFlavor**: Used during flavor creation via CLI. +- **CustomExperimentTrackerConfig**: Validates user input during stack component registration. +- **CustomExperimentTracker**: Engaged when the component is in use, allowing separation of configuration from implementation. -This design enables registration of flavors and components without requiring all dependencies to be installed locally. +This structure enables registration of flavors and components even if their dependencies are not installed locally. ================================================== === File: docs/book/component-guide/experiment-trackers/mlflow.md === -### MLflow Experiment Tracker Overview +# MLflow Experiment Tracker with ZenML -The MLflow Experiment Tracker, integrated with ZenML, utilizes the MLflow tracking service to log and visualize pipeline step information (models, parameters, metrics). +The MLflow Experiment Tracker, integrated with ZenML, utilizes the MLflow tracking service for logging and visualizing pipeline step data (models, parameters, metrics). -#### Use Cases -- Continue using MLflow for tracking as you adopt MLOps practices with ZenML. -- Gain a visually interactive way to navigate results from ZenML pipeline runs. -- Connect to an existing shared MLflow Tracking service for artifact and metric sharing. +## Use Cases +Use the MLflow Experiment Tracker if: +- You are already using MLflow for experiment tracking and want to integrate it with ZenML. +- You seek a visually interactive way to navigate results from ZenML pipeline runs. +- Your team has a shared MLflow Tracking service and you want to connect ZenML to it. -#### Configuration Steps -1. **Install MLflow Integration**: - ```shell - zenml integration install mlflow -y - ``` +If unfamiliar with MLflow, consider other Experiment Tracker flavors. -2. **Deployment Scenarios**: - - **Localhost**: Requires a local Artifact Store, suitable for local runs only. 
- ```shell - zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow - zenml stack register custom_stack -e mlflow_experiment_tracker ... --set - ``` - - **Remote Tracking Server**: Requires authentication parameters. - - **Databricks**: Requires authentication parameters specific to Databricks. +## Configuration +To configure the MLflow Experiment Tracker, install the integration: -#### Authentication Methods -- **Basic Authentication** (not recommended for production): - ```shell - zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow --tracking_uri= --tracking_token= - ``` -- **ZenML Secret (Recommended)**: Store credentials securely. +```shell +zenml integration install mlflow -y +``` + +### Deployment Scenarios +1. **Localhost (default)**: Requires a local Artifact Store. Suitable for local runs only. ```shell - zenml secret create mlflow_secret --username= --password= - zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ... + zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow + zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` -#### Usage -To log information from a ZenML pipeline step: +2. **Remote Tracking Server**: Requires a deployed MLflow Tracking Server with authentication parameters. + +3. **Databricks**: Requires a Databricks workspace and authentication parameters. + +### Authentication Methods +Configure credentials for a remote MLflow tracking server: +- `tracking_uri`: URL of the MLflow server (use `"databricks"` for Databricks). +- `tracking_username`/`tracking_password` or `tracking_token`. +- `tracking_insecure_tls` (optional). +- `databricks_host`: Required if using Databricks. + +#### Basic Authentication +Not recommended for production due to security concerns: +```shell +zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \ + --tracking_uri= --tracking_token= +``` + +#### ZenML Secret (Recommended) +Store credentials securely: +```shell +zenml secret create mlflow_secret --username= --password= +``` +Then reference the secret: +```shell +zenml experiment-tracker register mlflow --flavor=mlflow \ + --tracking_username={{mlflow_secret.username}} \ + --tracking_password={{mlflow_secret.password}} ... +``` + +## Usage +To log information in a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and use MLflow's logging capabilities: ```python import mlflow @step(experiment_tracker="") -def tf_trainer(x_train: np.ndarray, y_train: np.ndarray) -> tf.keras.Model: +def tf_trainer(x_train, y_train): mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) return model ``` -You can dynamically reference the active stack's experiment tracker: -```python -from zenml.client import Client -experiment_tracker = Client().active_stack.experiment_tracker - -@step(experiment_tracker=experiment_tracker.name) -def tf_trainer(...): - ... -``` -#### MLflow UI -Access the MLflow UI to view tracked experiments. The URL can be retrieved from the step metadata: +### MLflow UI +Access the MLflow UI for experiment details. 
Get the tracking URL from the step metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` -For local MLflow, start the UI: +For local MLflow, start the UI with: ```bash mlflow ui --backend-store-uri ``` -#### Additional Configuration -You can pass `MLFlowExperimentTrackerSettings` for nested runs or additional tags: +### Additional Configuration +Use `MLFlowExperimentTrackerSettings` for nested runs or tags: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step(experiment_tracker="", settings={"experiment_tracker": mlflow_settings}) -def step_one(data: np.ndarray) -> np.ndarray: +def step_one(data): ... ``` -For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.experiment_trackers.mlflow_experiment_tracker). +For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor.MLFlowExperimentTrackerSettings). ================================================== === File: docs/book/component-guide/experiment-trackers/comet.md === -### Comet Experiment Tracker Overview +# Comet Experiment Tracker with ZenML -The Comet Experiment Tracker integrates with ZenML to log and visualize experiment data from machine learning pipelines using the Comet platform. It is beneficial for tracking results during ML experimentation and can also be used in production workflows. +The Comet Experiment Tracker integrates with ZenML to log and visualize pipeline information using the Comet platform. It is useful for tracking ML experiments and can also be adapted for automated pipeline runs. -### When to Use Comet Experiment Tracker -- If you are already using Comet for experiment tracking and want to continue as you adopt MLOps practices with ZenML. +## When to Use Comet +- If you are already using Comet for tracking and want to continue with ZenML. - If you prefer a visually interactive way to navigate results from ZenML pipelines. -- If you want to share logged artifacts and metrics with your team or stakeholders. +- If you need to share logged artifacts and metrics with your team or stakeholders. -### Deployment Steps -1. **Install Comet Integration**: +## Deployment +To deploy the Comet Experiment Tracker, install the integration: + +```bash +zenml integration install comet -y +``` + +### Authentication Methods +1. **ZenML Secret (Recommended)**: Store credentials securely. ```bash - zenml integration install comet -y + zenml secret create comet_secret \ + --workspace= \ + --project_name= \ + --api_key= ``` -2. **Configure Authentication**: - - **ZenML Secret (Recommended)**: Store credentials securely. - ```bash - zenml secret create comet_secret \ - --workspace= \ - --project_name= \ - --api_key= - ``` - - **Basic Authentication**: Directly configure credentials (not recommended for production). - ```bash - zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ - --workspace= --project_name= --api_key= - ``` - -3. 
**Register Experiment Tracker**:
+   Register the tracker:
   ```bash
   zenml experiment-tracker register comet_tracker \
       --flavor=comet \
       --workspace={{comet_secret.workspace}} \
       --project_name={{comet_secret.project_name}} \
       --api_key={{comet_secret.api_key}}
-   zenml stack register custom_stack -e comet_experiment_tracker ... --set
   ```

-### Usage
-To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and use Comet's logging capabilities:
+2. **Basic Authentication**: Directly configure credentials (not recommended for production).
+   ```bash
+   zenml experiment-tracker register comet_experiment_tracker --flavor=comet \
+       --workspace= --project_name= --api_key=
+   ```
+
+## Usage
+To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator:
+
```python
from zenml.client import Client

@@ -8011,62 +8182,49 @@ def my_step():
```

### Comet UI
-Each ZenML step using Comet creates a separate experiment viewable in the Comet UI. You can find the experiment URL in the step's metadata:
+Each ZenML step using Comet creates a separate experiment viewable in the Comet UI. The experiment URL can be accessed via step metadata:
+
```python
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)
```

-### Full Code Example
+## Full Code Example
+Here is a simplified example of a ZenML pipeline using Comet:
+
```python
from comet_ml.integration.sklearn import log_model
-import numpy as np
from sklearn.datasets import load_iris
-from sklearn.model_selection import train_test_split
-from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
-from sklearn.metrics import accuracy_score
from zenml import pipeline, step
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step
def load_data():
    iris = load_iris()
    return iris.data, iris.target

-@step
-def preprocess_data(X, y):
-    return train_test_split(X, y, test_size=0.2, random_state=42)
-
@step(experiment_tracker=experiment_tracker.name)
-def train_model(X_train, y_train):
-    model = SVC().fit(X_train, y_train)
+def train_model(X, y):
+    model = SVC().fit(X, y)
    log_model(experiment=experiment_tracker.experiment, model_name="SVC", model=model)
    return model

-@step(experiment_tracker=experiment_tracker.name)
-def evaluate_model(model, X_test, y_test):
-    accuracy = accuracy_score(y_test, model.predict(X_test))
-    experiment_tracker.log_metrics({"accuracy": accuracy})
-    return accuracy
-
@pipeline
def iris_classification_pipeline():
    X, y = load_data()
-    X_train, X_test, y_train, y_test = preprocess_data(X, y)
-    model = train_model(X_train, y_train)
-    evaluate_model(model, X_test, y_test)
+    model = train_model(X, y)

if __name__ == "__main__":
    iris_classification_pipeline()
```

-### Additional Configuration
-You can provide additional tags for your experiments using `CometExperimentTrackerSettings`:
+## Additional Configuration
+You can pass `CometExperimentTrackerSettings` for additional tags and configurations:
+
```python
comet_settings = CometExperimentTrackerSettings(tags=["some_tag"])

@step(experiment_tracker="", settings={"experiment_tracker": comet_settings})
def my_step():
    ...
```
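
As a usage note, the same settings object should also be attachable at the pipeline level instead of per step (a sketch; the `CometExperimentTrackerSettings` import path is assumed by analogy with the Neptune and MLflow flavors and is not shown in this summary):

```python
from zenml import pipeline
from zenml.integrations.comet.flavors import CometExperimentTrackerSettings

comet_settings = CometExperimentTrackerSettings(tags=["some_tag"])

# Applying the settings on the pipeline applies them to all tracked steps.
@pipeline(settings={"experiment_tracker": comet_settings})
def my_pipeline():
    ...
```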
@@ -8080,181 +8238,125 @@ For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/late # Model Registries -Model registries are centralized solutions for managing and tracking machine learning models throughout their development and deployment stages. They enable version control, configuration tracking, and reproducibility by storing metadata such as version, configuration, and performance metrics. - -In ZenML, model registries are Stack Components that facilitate the retrieval, loading, and deployment of trained models, along with information on the training pipeline. +Model registries are centralized storage solutions for managing and tracking machine learning models throughout their development and deployment stages. They facilitate version control and reproducibility by storing metadata like version, configuration, and metrics. In ZenML, model registries are Stack Components that simplify the retrieval, loading, and deployment of trained models, while also providing information on the training pipeline and reproduction methods. ### Key Concepts -- **RegisteredModel**: A logical grouping of models to track different versions. It includes the model's name, description, and tags. +- **RegisteredModel**: A logical grouping of models to track different versions, including metadata such as name, description, and tags. It can be user-created or auto-generated when a new model is logged. -- **RegistryModelVersion**: A specific version of a model, identified by a unique version number. It contains metadata about the model, including its name, description, tags, metrics, and references to the pipeline name, run ID, and step name. +- **RegistryModelVersion**: A specific model version identified by a unique version number. It includes metadata like name, description, tags, metrics, and references to the model artifact, pipeline name, pipeline run ID, and step name. -- **ModelVersionStage**: Represents the state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`. This tracks the lifecycle of the model. +- **ModelVersionStage**: Represents the state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`. This tracks the lifecycle of a model version. -### When to Use +### Usage -ZenML's Artifact Store manages pipeline artifacts programmatically, but model registries provide a visual interface for managing model metadata, especially useful with remote orchestrators. They simplify the retrieval, loading, and deployment of models, making them ideal for centralized model state management. +ZenML's Artifact Store manages pipeline artifacts programmatically, but model registries provide a visual interface for managing model metadata, especially with remote orchestrators. They are ideal for centralizing model state management and facilitating easy retrieval and deployment. -### Model Registry Integration +### Integration in ZenML Stack -Model registries are optional components in the ZenML stack and require an experiment tracker. If not using an experiment tracker, models can still be stored, but retrieval must be manual. +Model registries are optional components integrated with experiment trackers. To use a model registry, it must match the flavor of the experiment tracker. If you are not using an experiment tracker, models can still be stored in ZenML, but retrieval must be manual. 
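+
+For that manual route, a trained model can still be pulled out of a past pipeline run with ZenML's standard run-fetching API (a sketch; the run ID and step name are placeholders, and it assumes the trainer step has a single output):
+
+```python
+from zenml.client import Client
+
+# Load the model artifact produced by the "trainer" step of an earlier run.
+run = Client().get_pipeline_run("<PIPELINE_RUN_ID>")
+model = run.steps["trainer"].output.load()
+```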
-
-#### Available Flavors
+#### Model Registry Flavors

| Model Registry | Flavor | Integration | Notes |
|----------------|--------|-------------|-------|
-| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Integrate MLflow as Model Registry |
-| [Custom Implementation](custom.md) | _custom_ | | Custom options available |
+| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Add MLflow as Model Registry to your stack |
+| [Custom Implementation](custom.md) | _custom_ | | Custom options available |

-To list available flavors, use:
+To view available flavors, use:

```shell
zenml model-registry flavor list
```

-### Usage
+### Registration Methods

-To use model registries:
-1. Register a model registry in your stack, matching the flavor of your experiment tracker.
-2. Register trained models via:
-   - Built-in pipeline step
-   - ZenML CLI
-   - Model registry UI
-3. Retrieve and load models for deployment or experimentation.
+To register a model in the model registry, you can use:
+1. Built-in step in the pipeline.
+2. ZenML CLI for command-line registration.
+3. Model registry UI for registration.

-For further details, refer to the documentation on [fetching runs](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md).
+After registration, models can be retrieved and loaded for deployment or further experimentation.

==================================================

=== File: docs/book/component-guide/model-registries/custom.md ===

-### Custom Model Registry Development in ZenML
-
-#### Overview
-This documentation provides guidance on developing a custom model registry in ZenML. Familiarity with ZenML's component flavor concepts is recommended before proceeding.
+### Summary: Developing a Custom Model Registry in ZenML

-#### Important Notes
-- The model registry stack component is new and may undergo API changes. Feedback on the base abstraction is encouraged via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues/new/choose).
+This documentation outlines the process for creating a custom model registry in ZenML. It is crucial to understand the general concepts of ZenML's component flavors before diving into specifics.

#### Base Abstraction
-The `BaseModelRegistry` is the abstract class for creating a custom model registry. It provides a basic interface for model registration and retrieval. 
- -**Key Components:** - -```python -from abc import ABC, abstractmethod -from typing import Any, Dict, List, Optional - -class BaseModelRegistryConfig(StackComponentConfig): - """Base config for model registries.""" - -class BaseModelRegistry(StackComponent, ABC): - """Base class for ZenML model registries.""" - - @abstractmethod - def register_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: - """Registers a model.""" - - @abstractmethod - def delete_model(self, name: str) -> None: - """Deletes a registered model.""" - - @abstractmethod - def update_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: - """Updates a registered model.""" - - @abstractmethod - def get_model(self, name: str) -> RegisteredModel: - """Gets a registered model.""" - - @abstractmethod - def list_models(self, name: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> List[RegisteredModel]: - """Lists registered models.""" - - # Model Version Methods - @abstractmethod - def register_model_version(self, name: str, version: Optional[str] = None, **kwargs: Any) -> RegistryModelVersion: - """Registers a model version.""" - - @abstractmethod - def delete_model_version(self, name: str, version: str) -> None: - """Deletes a model version.""" - - @abstractmethod - def update_model_version(self, name: str, version: str, **kwargs: Any) -> RegistryModelVersion: - """Updates a model version.""" - - @abstractmethod - def list_model_versions(self, name: Optional[str] = None, **kwargs: Any) -> List[RegistryModelVersion]: - """Lists model versions.""" - - @abstractmethod - def get_model_version(self, name: str, version: str) -> RegistryModelVersion: - """Gets a model version.""" - - @abstractmethod - def load_model_version(self, name: str, version: str, **kwargs: Any) -> Any: - """Loads a model version.""" - - @abstractmethod - def get_model_uri_artifact_store(self, model_version: RegistryModelVersion) -> str: - """Gets the URI artifact store for a model version.""" -``` - -#### Creating a Custom Model Registry -To create a custom model registry flavor: -1. Understand core concepts of model registries. -2. Inherit from `BaseModelRegistry` and implement abstract methods. -3. Create a `ModelRegistryConfig` class extending `BaseModelRegistryConfig`. -4. Combine implementation and configuration by inheriting from `BaseModelRegistryFlavor`. - -**Registering the Flavor:** +The `BaseModelRegistry` class serves as the abstract base for custom model registries, providing a generic interface for model registration and retrieval. Key methods include: + +- **Model Registration Methods**: + - `register_model(name, description, tags)`: Registers a model. + - `delete_model(name)`: Deletes a registered model. + - `update_model(name, description, tags)`: Updates a registered model. + - `get_model(name)`: Retrieves a registered model. + - `list_models(name, tags)`: Lists all registered models. + +- **Model Version Methods**: + - `register_model_version(name, description, tags, model_source_uri, version, metadata, ...)`: Registers a model version. + - `delete_model_version(name, version)`: Deletes a model version. + - `update_model_version(name, version, description, tags, stage)`: Updates a model version. + - `list_model_versions(name, model_source_uri, tags, ...)`: Lists all model versions for a registered model. + - `get_model_version(name, version)`: Retrieves a model version. 
+ - `load_model_version(name, version, ...)`: Loads a model version. + - `get_model_uri_artifact_store(model_version)`: Gets the URI artifact store for a model version. + +#### Steps to Build a Custom Model Registry +1. Familiarize yourself with core model registry concepts. +2. Create a class inheriting from `BaseModelRegistry` and implement the abstract methods. +3. Define a `ModelRegistryConfig` class inheriting from `BaseModelRegistryConfig` for additional parameters. +4. Combine the implementation and configuration by inheriting from `BaseModelRegistryFlavor`. + +To register your custom model registry, use the CLI command: ```shell zenml model-registry flavor register ``` -#### Workflow Integration -- **CustomModelRegistryFlavor** is used during flavor creation. -- **CustomModelRegistryConfig** is used for validation during stack component registration. -- **CustomModelRegistry** is utilized when the component is in use. +#### Important Notes +- The `CustomModelRegistryFlavor` is utilized during flavor creation via CLI. +- The `CustomModelRegistryConfig` is used for validating user inputs during registration. +- The `CustomModelRegistry` is invoked when the component is in use, separating configuration from implementation. -#### Example -For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). +For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). + +This documentation is subject to updates as the model registry component evolves. For any issues or feedback, contact the ZenML team via [Slack](https://zenml.io/slack) or GitHub. ================================================== === File: docs/book/component-guide/model-registries/mlflow.md === -### MLflow Model Registry Overview +# Managing MLFlow Logged Models and Artifacts -MLflow is a tool for tracking experiments, managing models, and deploying them across environments. ZenML integrates with MLflow, providing an Experiment Tracker and Model Deployer. The MLflow model registry allows for managing and tracking ML models and their artifacts, offering a user interface for browsing. +## Overview +MLflow is a tool for tracking experiments, managing models, and deploying them. ZenML integrates with MLflow, providing an Experiment Tracker and Model Deployer. The MLflow model registry helps manage and track ML models and artifacts, offering a user interface for browsing. -#### Use Cases +## Use Cases - Track different model versions during development and deployment. -- Manage deployments across various environments. -- Monitor and compare model performance over time. -- Simplify model deployment processes. +- Monitor model performance across environments. +- Simplify model deployment to production or staging. -#### Installation +## Deployment To use the MLflow model registry, install the MLflow integration: ```shell zenml integration install mlflow -y ``` -Register the MLflow model registry component in your stack: +Register the MLflow model registry component: ```shell zenml model-registry register mlflow_model_registry --flavor=mlflow zenml stack register custom_stack -r mlflow_model_registry ... --set ``` -**Note:** The MLflow model registry uses the same configuration as the MLflow Experiment Tracker. 
Use MLflow version 2.2.1 or higher due to a critical vulnerability in older versions. +**Note:** The MLflow model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version 2.2.1 or higher due to a critical vulnerability. -#### Usage -You can register models in ZenML pipelines or manually via the CLI. - -**Registering Models in a Pipeline:** +## Usage +### Register Models in a Pipeline +Use the `mlflow_register_model_step` to register a model logged to MLflow: ```python from zenml import pipeline @@ -8266,15 +8368,16 @@ def mlflow_registry_training_pipeline(): mlflow_register_model_step(model=model, name="tensorflow-mnist-model") ``` -**Parameters for `mlflow_register_model_step`:** +**Parameters:** - `name`: Required model name. - `version`: Model version. - `trained_model_name`: Name of the model artifact in MLflow. - `model_source_uri`: Path to the model. - `description`: Model version description. -- `metadata`: Metadata for the model version. +- `metadata`: Metadata associated with the model version. -**Registering Models via CLI:** +### Register Models via CLI +To manually register models, use: ```shell zenml model-registry models register-version Tensorflow-model \ @@ -8286,65 +8389,68 @@ zenml model-registry models register-version Tensorflow-model \ --zenml-step-name="trainer" ``` -#### Interacting with Registered Models -- List all registered models: +### Interact with Registered Models +List all registered models: ```shell zenml model-registry models list ``` -- List versions of a specific model: +List versions of a specific model: ```shell zenml model-registry models list-versions tensorflow-mnist-model ``` -- Get details of a specific model version: +Get details of a specific model version: ```shell zenml model-registry models get-version tensorflow-mnist-model -v 1 ``` -- Delete a registered model or specific version: +### Deleting Models +To delete a registered model or a specific version: ```shell zenml model-registry models delete REGISTERED_MODEL_NAME zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION ``` -For further details, refer to the [MLflow model deployer documentation](../model-deployers/mlflow.md#deploy-from-model-registry) and the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). +For more details, refer to the [ZenML MLFlow SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/orchestrators/local-docker.md === -### Local Docker Orchestrator +# Local Docker Orchestrator in ZenML -The Local Docker Orchestrator is a built-in ZenML orchestrator that runs pipelines locally using Docker. +The Local Docker orchestrator is a built-in feature of ZenML that allows you to run pipelines locally in isolated Docker environments. -#### When to Use -- For running pipeline steps in isolated local environments. -- For debugging pipeline issues without incurring costs for remote infrastructure. +### When to Use +- For local execution of pipeline steps in isolated environments. +- For debugging pipeline issues without incurring costs of remote infrastructure. -#### Deployment -Ensure Docker is installed and running. 
Register the orchestrator and activate a stack with the following commands: +### Deployment +Ensure Docker is installed and running. + +### Usage +To register and activate the local Docker orchestrator in your stack, use the following commands: ```shell zenml orchestrator register --flavor=local_docker zenml stack register -o ... --set ``` -#### Running a Pipeline -Execute any ZenML pipeline using: +Run your ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` -#### Additional Configuration -You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes and [this page](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for specifying settings. +### Additional Configuration +You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) details. -Example of specifying CPU count (Windows only): +For example, to specify the CPU count (Windows only): ```python from zenml import step, pipeline @@ -8365,65 +8471,64 @@ def simple_pipeline(): return_one() ``` -#### Enabling CUDA for GPU -For GPU support, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA and customize settings for GPU acceleration. +### Enabling CUDA for GPU +To run steps on a GPU, follow the instructions in the [GPU training guide](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for optimal performance. ================================================== === File: docs/book/component-guide/orchestrators/lightning.md === -# Lightning AI Orchestrator Overview +### Summary: Orchestrating Pipelines on Lightning AI with ZenML -## Description -The Lightning AI Orchestrator, integrated with ZenML, enables running pipelines on Lightning AI's scalable infrastructure. It is designed for remote ZenML deployments and should not be used locally. +**Overview**: +The Lightning AI orchestrator integrates with ZenML to run pipelines on Lightning AI's infrastructure, utilizing its scalable compute resources and managed environment. This integration is intended for remote ZenML deployments only. -## When to Use -- Fast deployment on GPU instances. -- Existing use of Lightning AI for ML projects. -- Need for managed infrastructure for ML workflows. -- Simplified deployment and scaling of ML applications. -- Access to Lightning AI's optimizations. +**When to Use**: +- For quick execution of pipelines on GPU instances. +- If already using Lightning AI for machine learning projects. +- To leverage managed infrastructure for ML workflows. +- To benefit from Lightning AI's optimizations. -## Deployment Requirements +**Deployment Requirements**: - A Lightning AI account with credentials. -- No additional infrastructure deployment needed. +- No additional infrastructure deployment is needed. -## Functionality -- Archives the ZenML repository and uploads it to Lightning AI Studio. 
-- Uses `lightning-sdk` to create a new studio and run commands for environment setup. -- Supports async mode for background execution and status checking via ZenML Dashboard. -- Allows custom commands for environment setup before pipeline execution. -- Supports both CPU and GPU machine types. - -## Setup Instructions -1. Install the ZenML Lightning integration: - ```shell - zenml integration install lightning - ``` -2. Configure a remote artifact store. -3. Obtain Lightning AI credentials: - - `LIGHTNING_USER_ID` - - `LIGHTNING_API_KEY` - - Optional: `LIGHTNING_USERNAME`, `LIGHTNING_TEAMSPACE`, `LIGHTNING_ORG` +**Operational Workflow**: +1. ZenML archives the current repository and uploads it to Lightning AI Studio. +2. Using `lightning-sdk`, ZenML creates a new studio and uploads the code. +3. Commands are executed via `studio.run()` to prepare the environment. +4. Pipelines can run in both CPU and GPU modes. -4. Register the orchestrator: - ```shell - zenml orchestrator register lightning_orchestrator \ - --flavor=lightning \ - --user_id= \ - --api_key= \ - --username= \ # optional - --teamspace= \ # optional - --organization= # optional - ``` +**Installation**: +To install the Lightning integration, run: +```shell +zenml integration install lightning +``` -5. Activate the stack: - ```bash - zenml stack register lightning_stack -o lightning_orchestrator ... --set - ``` +**Credentials Needed**: +- `LIGHTNING_USER_ID` +- `LIGHTNING_API_KEY` +- Optional: `LIGHTNING_USERNAME`, `LIGHTNING_TEAMSPACE`, `LIGHTNING_ORG` + +**Setting Up Credentials**: +Retrieve credentials from your Lightning AI account under "Global Settings" > "Keys". Register the orchestrator with: +```shell +zenml orchestrator register lightning_orchestrator \ + --flavor=lightning \ + --user_id= \ + --api_key= \ + --username= \ # optional + --teamspace= \ # optional + --organization= # optional +``` + +**Registering and Activating Stack**: +```bash +zenml stack register lightning_stack -o lightning_orchestrator ... --set +``` -## Pipeline Configuration -Configure the orchestrator at the pipeline level: +**Pipeline Configuration**: +Use `LightningOrchestratorSettings` to configure the orchestrator: ```python from zenml.integrations.lightning.flavors.lightning_orchestrator_flavor import LightningOrchestratorSettings @@ -8439,14 +8544,14 @@ def my_pipeline(): ... ``` -## Running Pipelines -Execute a ZenML pipeline using: +**Running a Pipeline**: +Execute the pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` -## Monitoring -Monitor applications via the Lightning AI UI. Retrieve the UI URL for a specific pipeline run: +**Monitoring**: +Use Lightning AI's UI to monitor applications. Retrieve the UI URL for a pipeline run with: ```python from zenml.client import Client @@ -8454,56 +8559,48 @@ pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` -## Additional Configuration -Customize execution settings: +**Additional Configuration**: +Settings can be specified at both pipeline and step levels. For GPU usage, set the machine type accordingly: ```python lightning_settings = LightningOrchestratorSettings( - main_studio_name="my_studio", - machine_type="gpu", # Specify GPU type if needed + machine_type="gpu" # or specific types like `A10G` ) ``` +Refer to [Lightning AI's documentation](https://lightning.ai/docs/overview/studios/change-gpus) for available GPU types. -Settings can be applied at both pipeline and step levels. 
-
-## GPU Usage
-For GPU execution, specify a GPU-enabled machine type:
-```python
-lightning_settings = LightningOrchestratorSettings(
-    machine_type="A10G"  # Example GPU type
-)
-```
-Refer to [Lightning AI documentation](https://lightning.ai/docs/overview/studios/change-gpus) for available GPU types.

==================================================

=== File: docs/book/component-guide/orchestrators/hyperai.md ===

-# HyperAI Orchestrator Summary
+### HyperAI Orchestrator Overview

-The **HyperAI Orchestrator** is designed for deploying pipelines on HyperAI instances within a remote ZenML deployment scenario. It is not suitable for local ZenML deployments.
+The HyperAI orchestrator allows for the deployment of ZenML pipelines on HyperAI instances, a cloud compute platform for AI. It is intended for use in remote ZenML deployments only.

-### When to Use
+#### When to Use
- For managed pipeline execution.
- If you are a HyperAI customer.

-### Prerequisites
-- A running, internet-accessible HyperAI instance with SSH key-based access.
-- A recent version of Docker with Docker Compose.
-- NVIDIA Driver and NVIDIA Container Toolkit installed (optional for GPU use).
+#### Prerequisites
+1. A running HyperAI instance with internet accessibility and SSH key-based access.
+2. Recent Docker version with Docker Compose.
+3. NVIDIA Driver installed (only required for GPU usage).
+4. NVIDIA Container Toolkit installed (only required for GPU usage).

-### Functionality
-- Utilizes Docker Compose to create and execute pipelines.
-- Each ZenML pipeline step is defined as a service in a generated Docker Compose file.
-- Supports scheduled pipelines using:
-  - **Cron expressions** for recurring runs.
-  - **Run once** scheduling for specific time execution.
+#### Functionality
+The orchestrator generates a Docker Compose file for each ZenML pipeline and executes it with Docker Compose. Each pipeline step corresponds to a service in the file, using the `service_completed_successfully` condition to manage execution order. It can connect to a container registry for Docker image transfers.

-### Deployment Steps
-1. **Configure a HyperAI Service Connector** in ZenML:
+#### Scheduled Pipelines
+Supports (see the sketch at the end of this section):
+- **Cron expressions** (`cron_expression`) for periodic runs (requires `crontab`).
+- **Scheduled runs** (`run_once_start_time`) for one-time executions (requires `at`).
+
+#### Deployment Steps
+1. **Configure HyperAI Service Connector**:
   ```shell
   zenml service-connector register --type=hyperai --auth-method=rsa-key --base64_ssh_key= --hostnames=, --username=
   ```
   - Hostnames can be DNS names or IP addresses.

2. **Register the Orchestrator**:
   ```shell
@@ -8516,35 +8613,35 @@ The **HyperAI Orchestrator** is designed for deploying pipelines on HyperAI inst
   python file_that_runs_a_zenml_pipeline.py
   ```

-### GPU Configuration
-To utilize GPU acceleration, follow specific instructions to enable CUDA settings.
+#### GPU Usage
+For GPU-backed hardware, follow specific instructions to enable CUDA for optimal performance.

-This summary encapsulates the key technical aspects of the HyperAI Orchestrator, ensuring clarity and conciseness for effective understanding.
+For more details, refer to the [latest ZenML documentation](https://docs.zenml.io).
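+
+Putting the two scheduling options together, a pipeline can be scheduled through ZenML's `Schedule` object (a sketch; `my_pipeline` is a placeholder, and it assumes the HyperAI orchestrator picks up `cron_expression` / `run_once_start_time` from the schedule as described above):
+
+```python
+from zenml import pipeline
+from zenml.pipelines import Schedule
+
+@pipeline
+def my_pipeline():
+    ...
+
+# Recurring run via cron (requires `crontab` on the HyperAI instance);
+# for a one-off run, pass `run_once_start_time=<datetime>` instead (requires `at`).
+my_pipeline = my_pipeline.with_options(schedule=Schedule(cron_expression="0 3 * * *"))
+my_pipeline()
+```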
================================================== === File: docs/book/component-guide/orchestrators/airflow.md === -### Airflow Orchestrator for ZenML Pipelines +### Airflow Orchestrator Overview -ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. Each step in a ZenML pipeline runs in a separate Docker container managed by Airflow. +ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. Each ZenML step operates in a separate Docker container managed by Airflow. -#### When to Use Airflow +#### When to Use Airflow Orchestrator - Proven production-grade orchestrator. -- Already using Airflow. -- Need to run pipelines locally. -- Willing to deploy and maintain Airflow. +- Existing use of Airflow. +- Local pipeline execution. +- Willingness to deploy and maintain Airflow. #### Deployment Options -- **Local:** No additional setup required. -- **Remote:** Options include: +- **Local Deployment**: No additional setup required. +- **Remote Deployment**: Requires a remote ZenML deployment. Options include: - ZenML GCP Terraform module with Google Cloud Composer. - Managed services like Google Cloud Composer, Amazon MWAA, or Astronomer. - - Manual deployment (refer to [Airflow docs](https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html)). + - Manual Airflow deployment (refer to official Airflow docs). -**Required Python Packages for Remote Deployment:** -- `pydantic~=2.7.1` -- `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes` (based on the operator used). +**Python Packages Required**: +- `pydantic~=2.7.1`: For parsing and validating configuration files. +- `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes`: Depending on the operator used. #### Setup Instructions 1. Install ZenML Airflow integration: @@ -8558,42 +8655,41 @@ ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestrat zenml stack register -o ... --set ``` -**Local Airflow Server Setup:** -- Create a virtual environment: +#### Local Deployment Steps +1. Create a virtual environment: ```bash python -m venv airflow_server_environment source airflow_server_environment/bin/activate pip install "apache-airflow==2.4.0" "apache-airflow-providers-docker<3.8.0" "pydantic~=2.7.1" ``` -- Set environment variables (optional): +2. Set environment variables (optional): + - `AIRFLOW_HOME`: Default is `~/airflow`. + - `AIRFLOW__CORE__DAGS_FOLDER`: Default is `/dags`. + - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default is 30 seconds. + + For MacOS, set: ```bash - export AIRFLOW_HOME=~/airflow - export AIRFLOW__CORE__DAGS_FOLDER=/dags - export AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL=30 + export no_proxy=* ``` -- Start the local Airflow server: +3. Start the Airflow server: ```bash airflow standalone ``` - -#### Running a Pipeline -Run the pipeline script: -```shell -python file_that_runs_a_zenml_pipeline.py -``` -This generates a `.zip` file representing the ZenML pipeline for Airflow. Copy the `.zip` file to the Airflow DAGs directory. - -To automate copying: -```shell -zenml orchestrator update --dag_output_dir= -``` +4. Run the ZenML pipeline: + ```shell + python file_that_runs_a_zenml_pipeline.py + ``` +5. 
Copy the generated `.zip` file to the Airflow DAGs directory or configure ZenML to do so automatically: + ```bash + zenml orchestrator update --dag_output_dir= + ``` #### Remote Deployment Considerations - Requires a remote ZenML server, deployed Airflow server, remote artifact store, and remote container registry. -- Running `pipeline.run()` creates a `.zip` file but does not execute the pipeline directly. +- Running a pipeline creates a `.zip` file for Airflow, which must be placed in the DAGs directory. #### Scheduling Pipelines -Schedule pipeline runs in Airflow: +Schedule pipeline runs with Airflow: ```python from datetime import datetime, timedelta from zenml.pipelines import Schedule @@ -8610,15 +8706,17 @@ scheduled_pipeline() ``` #### Airflow UI -Access the Airflow UI at [http://localhost:8080](http://localhost:8080) for monitoring pipeline runs. Default credentials: username `admin`, password found in `/standalone_admin_password.txt`. +Access the UI at [http://localhost:8080](http://localhost:8080). Default credentials: username `admin`, password in `/standalone_admin_password.txt`. #### Additional Configuration -Customize the Airflow orchestrator with `AirflowOrchestratorSettings` during pipeline definition. For GPU support, follow specific instructions to enable CUDA. +Use `AirflowOrchestratorSettings` for further configuration when defining or running pipelines. + +#### GPU Support +Follow specific instructions to enable CUDA for GPU acceleration. #### Using Different Airflow Operators -ZenML supports: -- `DockerOperator`: Runs Docker images on the same machine. -- `KubernetesPodOperator`: Runs Docker images in a Kubernetes cluster. +- **DockerOperator**: For local execution. +- **KubernetesPodOperator**: For execution in a Kubernetes cluster. Specify the operator: ```python @@ -8631,7 +8729,7 @@ airflow_settings = AirflowOrchestratorSettings( ``` #### Custom Operators and DAG Generators -You can specify custom operators and provide a custom DAG generator file for more control over DAG creation. Ensure it contains required classes and constants. +For custom operators, specify the operator path. To customize DAG generation, provide a custom DAG generator file that matches the original structure. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator). @@ -8639,110 +8737,143 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr === File: docs/book/component-guide/orchestrators/sagemaker.md === -# AWS Sagemaker Orchestrator Summary +# AWS SageMaker Orchestrator Documentation Summary ## Overview -The **Sagemaker Orchestrator** integrates with [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) to provide a serverless ML workflow tool on AWS, allowing for production-ready, repeatable cloud orchestration with minimal setup. It is designed for remote ZenML deployments. +The ZenML SageMaker orchestrator integrates with [SageMaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) to facilitate serverless ML workflows on AWS. It provides a production-ready, repeatable cloud orchestrator with minimal setup. + +**Warning:** This component is designed for remote ZenML deployments; local deployments may cause unexpected behavior. ## When to Use -Use the Sagemaker orchestrator if: +Use the SageMaker orchestrator if: - You are using AWS. 
- You need a production-grade orchestrator with a UI for tracking pipeline runs. -- You prefer a managed and serverless solution. +- You prefer a managed, serverless solution for running pipelines. ## Functionality -The Sagemaker orchestrator creates a SageMaker `PipelineStep` for each ZenML pipeline step, currently supporting only processing jobs. +The SageMaker orchestrator creates a `PipelineStep` for each ZenML pipeline step, currently supporting only SageMaker Processing jobs. ## Deployment Requirements -1. Deploy ZenML to the cloud, ideally in the same region as Sagemaker. +1. Deploy ZenML to the cloud, ideally in the same region as SageMaker. 2. Ensure connection to the remote ZenML server. -3. Configure IAM roles with `AmazonSageMakerFullAccess` and `sagemaker.amazonaws.com` as a Principal Service. +3. Enable relevant IAM permissions, including `AmazonSageMakerFullAccess`. -## Usage Steps -1. Install required integrations: - ```shell - zenml integration install aws s3 - ``` -2. Ensure Docker is installed and running. -3. Set up a remote artifact store and container registry. -4. Authenticate the orchestrator using one of three methods: - - **Service Connector**: - ```shell - zenml service-connector register --type aws -i - zenml orchestrator register --flavor=sagemaker --execution_role= - zenml orchestrator connect --connector - zenml stack register -o ... --set - ``` - - **Explicit Authentication**: - ```shell - zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... - zenml stack register -o ... --set - ``` - - **Implicit Authentication**: - ```shell - zenml orchestrator register --flavor=sagemaker --execution_role= - python run.py # Authenticates with `default` profile - ``` +## Installation +Install the necessary integrations: +```shell +zenml integration install aws s3 +``` +Ensure Docker is installed and running, and set up a remote artifact store and container registry. + +## Authentication Methods +### Service Connector (Recommended) +```shell +zenml service-connector register --type aws -i +zenml orchestrator register --flavor=sagemaker --execution_role= +zenml orchestrator connect --connector +zenml stack register -o ... --set +``` + +### Explicit Authentication +```shell +zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... +zenml stack register -o ... --set +``` + +### Implicit Authentication +```shell +zenml orchestrator register --flavor=sagemaker --execution_role= +python run.py # Uses `default` profile in `~/.aws/config` +``` ## Running Pipelines -Run any ZenML pipeline using: +Run any ZenML pipeline using the SageMaker orchestrator: ```shell python run.py ``` -Monitor pipeline runs through the ZenML dashboard and the Sagemaker UI. +Output will indicate the status of the pipeline run. + +## SageMaker UI +Access the SageMaker Pipelines UI via SageMaker Studio to view logs and details of pipeline runs. ## Debugging -If a pipeline fails before starting, check the Sagemaker UI for error messages and logs. Use Amazon CloudWatch for detailed logging. +If a pipeline fails before starting, check the SageMaker UI for error messages and logs. For detailed logs, use Amazon CloudWatch. + +## Scheduling +Currently, the SageMaker orchestrator does not support scheduled pipeline runs. ## Configuration -- **Pipeline and Step Level Configurations**: Use `SagemakerOrchestratorSettings` for instance types and other configurations. 
-- **Warm Pools**: Enable to reduce startup time: - ```python - sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) - ``` -- **S3 Data Access**: Configure S3 data import/export using `input_data_s3_uri` and `output_data_s3_uri`. +You can provide additional configuration at the pipeline or step level using `SagemakerOrchestratorSettings`. Example: +```python +sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(instance_type="ml.m5.large", volume_size_in_gb=30) +``` +Apply settings to a step: +```python +@step(settings={"orchestrator": sagemaker_orchestrator_settings}) +``` + +## Warm Pools +Enable Warm Pools to reduce startup time: +```python +sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) +``` +Disable Warm Pools: +```python +sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=None) +``` + +## S3 Data Access +### Import Data from S3 +```python +sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(input_data_s3_mode="File", input_data_s3_uri="s3://some-bucket-name/folder") +``` + +### Export Data to S3 +```python +sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(output_data_s3_mode="EndOfJob", output_data_s3_uri="s3://some-results-bucket-name/results") +``` ## Tagging -Add tags to pipeline executions and jobs for better resource management: +Add tags to pipeline executions and jobs: ```python -pipeline_settings = SagemakerOrchestratorSettings(pipeline_tags={"project": "my-ml-project"}) +pipeline_settings = SagemakerOrchestratorSettings(pipeline_tags={"project": "my-ml-project", "environment": "production"}) ``` -## GPU Configuration -For GPU usage, follow specific instructions to enable CUDA for acceleration. +## GPU Support +Follow specific instructions to enable CUDA for GPU-backed hardware when using the orchestrator. -This summary captures the essential technical details and key points for using the AWS Sagemaker Orchestrator with ZenML, ensuring clarity and conciseness without losing critical information. +For further details, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/component-guide/orchestrators/local.md === -### Local Orchestrator in ZenML +# Local Orchestrator in ZenML The local orchestrator is a built-in feature of ZenML that allows you to run pipelines locally without additional setup. -#### When to Use +### When to Use - Ideal for beginners starting with ZenML. -- Useful for quickly experimenting and debugging new pipelines. +- Suitable for quick experimentation and debugging of new pipelines. -#### Deployment -The local orchestrator is included with ZenML and requires no extra configuration. +### Deployment +The local orchestrator is included with ZenML and requires no extra installation. -#### Usage -To register and use the local orchestrator in your active stack, execute the following commands: +### Usage +To register and activate the local orchestrator in your stack, use the following commands: ```shell zenml orchestrator register --flavor=local zenml stack register -o ... --set ``` -Run any ZenML pipeline with: +You can run any ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` -For detailed attributes of the local orchestrator, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). 
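+
+As a quick smoke test, a minimal sketch of a pipeline executed by the local orchestrator (the step and pipeline names are illustrative, not from the official docs):
+```python
+from zenml import pipeline, step
+
+@step
+def load_number() -> int:
+    return 42
+
+@step
+def double(value: int) -> int:
+    return value * 2
+
+@pipeline
+def local_smoke_test():
+    double(load_number())
+
+if __name__ == "__main__":
+    # With the local orchestrator in the active stack, each step runs
+    # as a regular Python call on this machine.
+    local_smoke_test()
+```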
+For detailed attributes and configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). ================================================== @@ -8750,61 +8881,67 @@ For detailed attributes of the local orchestrator, refer to the [SDK Docs](https ### Kubernetes Orchestrator Overview -The ZenML `kubernetes` integration allows you to orchestrate and scale ML pipelines on Kubernetes clusters without writing Kubernetes code. It is a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, running each pipeline step in separate Kubernetes pods. Unlike Kubeflow, which manages orchestration, ZenML uses a master pod for topological sorting of step execution, making it faster and simpler to set up. +The ZenML `kubernetes` integration allows orchestration and scaling of ML pipelines on Kubernetes clusters without needing Kubernetes code. It serves as a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, executing each pipeline step in separate Kubernetes pods managed by a master pod through topological sorting. This approach is faster and simpler than using Kubeflow, making it suitable for teams new to distributed orchestration. + +### When to Use -**Ideal Use Cases:** -- Lightweight pipeline execution on Kubernetes. -- Avoiding maintenance of Kubeflow Pipelines. -- Not opting for managed solutions like Vertex. +Use the Kubernetes orchestrator if you: +- Want a lightweight solution for running pipelines on Kubernetes. +- Prefer not to maintain Kubeflow Pipelines. +- Are not interested in managed solutions like Vertex. ### Deployment Requirements To deploy the Kubernetes orchestrator, you need: -- A Kubernetes cluster (remote/cloud or custom). +- A Kubernetes cluster (refer to the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for deployment options). - A remote ZenML server connected to the cluster. -- ZenML `kubernetes` integration installed: - ```shell - zenml integration install kubernetes - ``` -- Docker and kubectl installed. -### Using the Kubernetes Orchestrator +### Usage Steps -1. **With Service Connector:** - - Register the orchestrator without needing local `kubectl`: +1. **Install the ZenML Kubernetes Integration:** ```shell - zenml orchestrator register --flavor kubernetes - zenml orchestrator connect --connector - zenml stack register -o ... --set + zenml integration install kubernetes ``` -2. **Without Service Connector:** - - Configure local `kubectl` and register the orchestrator: +2. **Ensure the following are installed:** + - Docker + - kubectl + - A remote artifact store and container registry as part of your stack. + +3. **Register the Orchestrator:** + - **With Service Connector:** + ```shell + zenml orchestrator register --flavor kubernetes + zenml orchestrator connect --connector + zenml stack register -o ... --set + ``` + + - **Without Service Connector:** + ```shell + zenml orchestrator register --flavor=kubernetes --kubernetes_context= + zenml stack register -o ... --set + ``` + +4. **Run a ZenML Pipeline:** ```shell - zenml orchestrator register --flavor=kubernetes --kubernetes_context= - zenml stack register -o ... 
--set + python file_that_runs_a_zenml_pipeline.py ``` -### Running a Pipeline +### Interacting with Pods -To run a ZenML pipeline: -```shell -python file_that_runs_a_zenml_pipeline.py -``` -You can view logs and check pod status with: +You can interact with Kubernetes pods using labels for debugging: ```shell -kubectl get pods -n zenml +kubectl delete pod -n zenml -l pipeline= ``` -### Pod Interaction and Configuration +### Additional Configuration -- Pods are labeled for easier management (e.g., by pipeline name). -- Default namespace is `zenml`, with a service account `zenml-service-account` created automatically. -- Additional settings can be configured, such as: - - `kubernetes_namespace`: Custom namespace. - - `service_account_name`: Existing service account for RBAC permissions. +- **Default Namespace:** The orchestrator uses the `zenml` namespace by default, creating a service account called `zenml-service-account`. +- **Custom Settings:** + - `kubernetes_namespace`: Specify an existing namespace. + - `service_account_name`: Use an existing service account with appropriate RBAC roles. -### Advanced Configuration +### Pod and Orchestrator Settings You can customize pod settings using `KubernetesOrchestratorSettings`: ```python @@ -8815,9 +8952,16 @@ kubernetes_settings = KubernetesOrchestratorSettings( "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": { "requests": {"cpu": "2", "memory": "4Gi"}, - "limits": {"cpu": "4", "memory": "8Gi"}, + "limits": {"cpu": "4", "memory": "8Gi"} }, - ... + "labels": {"app": "ml-pipeline"} + }, + orchestrator_pod_settings={ + "resources": { + "requests": {"cpu": "1", "memory": "2Gi"}, + "limits": {"cpu": "2", "memory": "4Gi"} + }, + "labels": {"app": "zenml-orchestrator"} }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" @@ -8826,7 +8970,7 @@ kubernetes_settings = KubernetesOrchestratorSettings( ### Step-Level Configuration -You can define settings on a per-step basis to override pipeline-level settings: +You can define settings at the step level to override pipeline settings: ```python @step(settings={"orchestrator": k8s_settings}) def train_model(data: dict) -> None: @@ -8835,9 +8979,9 @@ def train_model(data: dict) -> None: ### GPU Configuration -For GPU usage, ensure to follow specific instructions for enabling CUDA and customizing settings accordingly. +For GPU usage, follow specific instructions to enable CUDA for full acceleration. -For more details on configuration and usage, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). +For further details on settings and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). ================================================== @@ -8846,39 +8990,42 @@ For more details on configuration and usage, refer to the [SDK Docs](https://sdk # Orchestrators in ZenML ## Overview -The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps run only when all their required inputs are available. +The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. 
It ensures that pipeline steps run only when all required inputs are available. ### Key Features - **Artifact Storage**: The orchestrator stores all artifacts produced by pipeline runs. -- **Configuration Requirement**: It must be configured in all ZenML stacks. - -### Orchestrator Flavors -ZenML provides several orchestrator flavors, including: - -| Orchestrator | Flavor | Integration | Notes | -|----------------------------------|-----------------|-------------------|--------------------------------------| -| [LocalOrchestrator](local.md) | `local` | _built-in_ | Runs pipelines locally. | -| [LocalDockerOrchestrator](local-docker.md) | `local_docker` | _built-in_ | Runs pipelines locally using Docker. | -| [KubernetesOrchestrator](kubernetes.md) | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes. | -| [KubeflowOrchestrator](kubeflow.md) | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | -| [VertexOrchestrator](vertex.md) | `vertex` | `gcp` | Runs pipelines in Vertex AI. | -| [SagemakerOrchestrator](sagemaker.md) | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | -| [AzureMLOrchestrator](azureml.md) | `azureml` | `azure` | Runs pipelines in AzureML. | -| [TektonOrchestrator](tekton.md) | `tekton` | `tekton` | Runs pipelines using Tekton. | -| [AirflowOrchestrator](airflow.md) | `airflow` | `airflow` | Runs pipelines using Airflow. | -| [SkypilotAWSOrchestrator](skypilot-vm.md) | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. | -| [SkypilotGCPOrchestrator](skypilot-vm.md) | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. | +- **Docker Integration**: Many remote orchestrators build Docker images to execute pipeline code. + +## When to Use +The orchestrator is mandatory in ZenML stacks and must be configured for all pipelines. + +## Available Orchestrator Flavors +ZenML provides various orchestrators, including: + +| Orchestrator | Flavor | Integration | Notes | +|-------------------------------|-----------------|--------------|--------------------------------------------| +| [LocalOrchestrator](local.md) | `local` | _built-in_ | Runs pipelines locally. | +| [LocalDockerOrchestrator](local-docker.md) | `local_docker` | _built-in_ | Runs pipelines locally using Docker. | +| [KubernetesOrchestrator](kubernetes.md) | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes clusters. | +| [KubeflowOrchestrator](kubeflow.md) | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | +| [VertexOrchestrator](vertex.md) | `vertex` | `gcp` | Runs pipelines in Vertex AI. | +| [SagemakerOrchestrator](sagemaker.md) | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | +| [AzureMLOrchestrator](azureml.md) | `azureml` | `azure` | Runs pipelines in AzureML. | +| [TektonOrchestrator](tekton.md) | `tekton` | `tekton` | Runs pipelines using Tekton. | +| [AirflowOrchestrator](airflow.md) | `airflow` | `airflow` | Runs pipelines using Airflow. | +| [SkypilotAWSOrchestrator](skypilot-vm.md) | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. | +| [SkypilotGCPOrchestrator](skypilot-vm.md) | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. | | [SkypilotAzureOrchestrator](skypilot-vm.md) | `vm_azure` | `skypilot[azure]` | Runs pipelines in Azure VMs using SkyPilot. | -| [HyperAIOrchestrator](hyperai.md) | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. | -| [Custom Implementation](custom.md) | _custom_ | | Extend the orchestrator abstraction. 
| +| [HyperAIOrchestrator](hyperai.md) | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. | +| [Custom Implementation](custom.md) | _custom_ | | Extend the orchestrator abstraction. | To view available orchestrator flavors, use: ```shell zenml orchestrator flavor list ``` -### Usage -You don't need to interact directly with the orchestrator in your code. Simply ensure the desired orchestrator is part of your active ZenML stack, and run your pipeline with: +## Usage +You do not need to interact directly with the orchestrator in your code. Simply ensure the orchestrator is part of your active ZenML stack and execute your pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` @@ -8892,8 +9039,8 @@ pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` -### Resource Specification -For steps requiring specific hardware, specify resources as detailed [here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). If unsupported, consider using [step operators](../step-operators/step-operators.md). +### Specifying Resources +Specify hardware requirements for pipeline steps as needed. For unsupported orchestrators, refer to [step operators](../step-operators/step-operators.md). ================================================== @@ -8901,25 +9048,18 @@ For steps requiring specific hardware, specify resources as detailed [here](../. ### Databricks Orchestrator Overview -**Databricks** is a unified data analytics platform that integrates data warehouses and lakes, optimizing big data processing and machine learning (ML). The **Databricks orchestrator** is a ZenML integration that allows running ML pipelines on Databricks, utilizing its distributed computing capabilities. - -#### When to Use -- If you're using Databricks for data and ML workloads. -- To leverage Databricks' distributed computing for ML pipelines. -- For a managed solution that integrates with Databricks services. -- To utilize Databricks' optimization for big data processing. +The Databricks orchestrator, part of the ZenML integration, allows users to run ML pipelines on Databricks, leveraging its distributed computing capabilities. It is suitable for users already utilizing Databricks for data and ML workloads and seeking a managed solution that integrates with Databricks services. -#### Prerequisites +### Prerequisites - An active Databricks workspace (AWS, Azure, GCP). - A Databricks account or service account with permissions to create and run jobs. -#### How It Works -1. **Wheel Packages**: ZenML creates a Python wheel package containing code and dependencies for your pipeline. -2. **Job Definition**: ZenML uses the Databricks SDK to create a job definition that specifies pipeline steps and their execution order. -3. **Execution**: The job retrieves the wheel package and runs it on a specified cluster configuration. -4. **Monitoring**: ZenML retrieves logs and job status for monitoring. +### How It Works +1. **Wheel Packages**: ZenML creates a Python wheel package containing the necessary code and dependencies for the pipeline. +2. **Job Definition**: ZenML uses the Databricks SDK to create a job definition that specifies pipeline steps and cluster settings (Spark version, number of workers, etc.). +3. **Execution**: The job retrieves the wheel package and executes the pipeline, ensuring steps run in the correct order. Logs and job status are retrieved post-execution. -#### Usage Steps +### Usage Steps 1. 
**Install Integration**: ```shell zenml integration install databricks @@ -8940,8 +9080,8 @@ For steps requiring specific hardware, specify resources as detailed [here](../. python run.py ``` -#### Databricks UI -Access pipeline run details and logs via the Databricks UI. Retrieve the UI URL in Python: +### Databricks UI +Access pipeline run details and logs via the Databricks UI. Retrieve the UI URL with: ```python from zenml.client import Client @@ -8949,7 +9089,7 @@ pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` -#### Scheduling Pipelines +### Scheduling Pipelines Use Databricks' native scheduling capability: ```python from zenml.config.schedule import Schedule @@ -8958,11 +9098,10 @@ pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` -- Only `cron_expression` is supported. -- Use Java Timezone IDs in the `cron_expression`. +**Note**: Only `cron_expression` is supported, and Java Timezone IDs must be used. -#### Additional Configuration -Customize the orchestrator using `DatabricksOrchestratorSettings`: +### Additional Configuration +Customize the orchestrator with `DatabricksOrchestratorSettings`: ```python from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import DatabricksOrchestratorSettings @@ -8974,15 +9113,15 @@ databricks_settings = DatabricksOrchestratorSettings( schedule_timezone="America/Los_Angeles" ) ``` -Specify settings at the pipeline or step level: +Apply settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": databricks_settings}) def my_pipeline(): ... ``` -#### GPU Support -To enable GPU support, change `spark_version` and `node_type_id`: +### GPU Support +To enable GPU support, adjust `spark_version` and `node_type_id`: ```python databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-gpu-ml-scala2.12", @@ -8990,84 +9129,119 @@ databricks_settings = DatabricksOrchestratorSettings( autoscale=(1, 2), ) ``` -Follow additional instructions to enable CUDA for GPU acceleration. +**CUDA Configuration**: Follow specific instructions to enable CUDA for GPU acceleration. -### References -- For a full list of configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.orchestrators.databricks_orchestrator.DatabricksOrchestrator). +For further details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.flavors.databricks_orchestrator_flavor.DatabricksOrchestratorSettings) and [configuration documentation](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). ================================================== === File: docs/book/component-guide/orchestrators/skypilot-vm.md === -### SkyPilot VM Orchestrator Overview - -The **SkyPilot VM Orchestrator** is an integration by ZenML for provisioning and managing virtual machines (VMs) across supported cloud providers using the SkyPilot framework. It simplifies running machine learning workloads in the cloud, focusing on cost savings and high GPU availability without the complexities of cloud infrastructure management. +# SkyPilot VM Orchestrator Documentation Summary -**Important Note:** This component is intended for remote ZenML deployments only. Using it locally may cause unexpected behavior. 
+## Overview +The SkyPilot VM Orchestrator integrates with ZenML to provision and manage virtual machines (VMs) across supported cloud providers via the SkyPilot framework. It simplifies running machine learning workloads in the cloud, offering cost savings and high GPU availability without the complexities of managing cloud infrastructure. -### When to Use +**Note:** This component is intended for remote ZenML deployments only. +## Use Cases Use the SkyPilot VM Orchestrator if you: -- Want cost savings via spot VMs and automatic selection of the cheapest options. -- Require high GPU availability across various zones/regions/clouds. -- Prefer not to maintain Kubernetes solutions or pay for managed services like SageMaker. +- Want to leverage spot VMs for cost savings. +- Require high GPU availability across multiple zones/regions. +- Prefer not to maintain Kubernetes or pay for managed solutions. -### Functionality +## Functionality +- **Provisioning**: Automatically launches VMs for pipelines, supporting on-demand and managed spot VMs. +- **Optimization**: Selects the cheapest VM/zone/region for workloads. +- **Autostop**: Cleans up idle clusters to prevent unnecessary costs. -- **Provisioning and Scaling:** Automatically launches VMs for pipelines, supporting on-demand and managed spot VMs. -- **Optimizer:** Selects the cheapest VM/zone/region/cloud. -- **Autostop Feature:** Cleans up idle clusters to prevent unnecessary costs. +## Deployment Requirements +To deploy the SkyPilot VM Orchestrator: +- Ensure you have permissions to provision VMs on your chosen cloud provider. +- Configure the orchestrator using service connectors. -**Configuration:** You can specify VM types and resources for each pipeline step. For GPU support in Docker containers, configure `docker_run_args=["--gpus=all"]`. +**Supported Cloud Platforms**: AWS, GCP, Azure. -### Deployment +## Installation +Install the SkyPilot integration for your cloud provider: -No special steps are needed for deployment. Ensure you have permissions to provision VMs on your chosen cloud provider and configure the SkyPilot orchestrator using service connectors. +**AWS:** +```shell +pip install "zenml[connectors-aws]" +zenml integration install aws skypilot_aws +``` -**Supported Platforms:** AWS, GCP, Azure. +**GCP:** +```shell +pip install "zenml[connectors-gcp]" +zenml integration install gcp skypilot_gcp +``` -### Usage Steps +**Azure:** +```shell +pip install "zenml[connectors-azure]" +zenml integration install azure skypilot_azure +``` -1. **Install SkyPilot Integration:** - - **AWS:** - ```shell - pip install "zenml[connectors-aws]" - zenml integration install aws skypilot_aws - ``` - - **GCP:** - ```shell - pip install "zenml[connectors-gcp]" - zenml integration install gcp skypilot_gcp - ``` - - **Azure:** - ```shell - pip install "zenml[connectors-azure]" - zenml integration install azure skypilot_azure - ``` +## Configuration +### AWS Example +1. Register AWS Service Connector: + ```shell + zenml service-connector register aws-skypilot-vm --type aws --region=us-east-1 --auto-configure + ``` +2. Register the orchestrator: + ```shell + zenml orchestrator register --flavor vm_aws + zenml orchestrator connect --connector aws-skypilot-vm + ``` -2. **Configure Service Connector:** - - Follow specific instructions for AWS, GCP, Azure, or Lambda Labs to set up authentication. +### GCP Example +1. 
Register GCP Service Connector: + ```shell + zenml service-connector register gcp-skypilot-vm -t gcp --auth-method user-account --auto-configure + ``` +2. Register the orchestrator: + ```shell + zenml orchestrator register --flavor vm_gcp + zenml orchestrator connect --connector gcp-skypilot-vm + ``` -3. **Register Orchestrator:** +### Azure Example +1. Register Azure Service Connector: ```shell - zenml orchestrator register --flavor - zenml orchestrator connect --connector - zenml stack register -o ... --set + zenml service-connector register azure-skypilot-vm -t azure --auth-method access-token --auto-configure + ``` +2. Register the orchestrator: + ```shell + zenml orchestrator register --flavor vm_azure + zenml orchestrator connect --connector azure-skypilot-vm ``` -### Additional Configuration +### Lambda Labs Example +1. Install integration: + ```shell + zenml integration install skypilot_lambda + ``` +2. Register the orchestrator with API key: + ```shell + zenml secret create lambda_api_key --scope user --api_key= + zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} + ``` -You can customize the orchestrator settings based on the cloud provider, such as: -- `instance_type` -- `cpus` -- `memory` -- `accelerators` -- `region` -- `zone` -- `disk_size` +### Kubernetes Example +1. Install integration: + ```shell + zenml integration install skypilot_kubernetes + ``` +2. Register the orchestrator: + ```shell + zenml orchestrator register --flavor sky_kubernetes + ``` -### Example Configuration for AWS +## Additional Configuration +Configure settings such as `instance_type`, `cpus`, `memory`, `accelerators`, `region`, `zone`, `disk_size`, and `idle_minutes_to_autostop`. +### Example Configuration for AWS: ```python from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings @@ -9079,34 +9253,32 @@ skypilot_settings = SkypilotAWSOrchestratorSettings( region="us-west-1", cluster_name="my_cluster", idle_minutes_to_autostop=60, - down=True, docker_run_args=["--gpus=all"] ) @pipeline(settings={"orchestrator": skypilot_settings}) def my_pipeline(): + # Pipeline implementation pass ``` -### Configuring Step-Specific Resources +## Step-Specific Resources +You can configure resources for each step of your pipeline individually. If no specific settings are provided, the orchestrator defaults to the general settings. -The orchestrator allows configuring resources for each pipeline step. If no specific settings are provided, it defaults to the orchestrator's settings. To disable step-based settings, use: +### Disable Step-Based Settings: ```shell zenml orchestrator update --disable_step_based_settings=True ``` -**Example for Step-Specific Resources:** +### Example for Step-Specific Settings: ```python @step(settings={"orchestrator": high_resource_settings}) def my_resource_intensive_step(): + # Step implementation pass ``` -### Important Notes -- Certain features may not be supported across different cloud providers. -- For optimal performance and cost, tailor resources for each pipeline step as needed. - -For further details, refer to the [SkyPilot documentation](https://skypilot.readthedocs.io/en/latest/index.html) and ZenML's SDK documentation. +This orchestrator allows fine-grained control over resource allocation, optimizing for performance and cost. 
For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-skypilot/#zenml.integrations.skypilot.flavors.skypilot_orchestrator_base_vm_flavor.SkypilotBaseOrchestratorSettings). ================================================== @@ -9117,82 +9289,82 @@ For further details, refer to the [SkyPilot documentation](https://skypilot.read ## Overview AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire ML lifecycle, from data preparation to monitoring. -## Use Cases -Use AzureML orchestrator if: -- You are using Azure. +## When to Use AzureML +Use the AzureML orchestrator if: +- You are already using Azure. - You need a production-grade orchestrator. - You want a UI to track pipeline runs. -- You prefer a managed solution for pipelines. +- You prefer a managed solution for running pipelines. ## Implementation -The ZenML AzureML orchestrator uses the AzureML Python SDK v2 to create AzureML `CommandComponent` for each ZenML step, assembling them into a pipeline. - -## Deployment -To deploy the AzureML orchestrator: -1. Deploy ZenML to the cloud, ideally in the same region as AzureML. -2. Ensure connection to the remote ZenML server. +The ZenML AzureML orchestrator utilizes the AzureML Python SDK v2 to create AzureML `CommandComponent` for each ZenML step, assembling them into a pipeline. -## Requirements -To use the AzureML orchestrator: -- Install ZenML Azure integration: - ```shell - zenml integration install azure - ``` -- Have Docker installed or a remote image builder. -- Include a remote artifact store and container registry in your stack. -- Set up an Azure resource group with an AzureML workspace. +## Deployment Requirements +1. Deploy ZenML to the cloud. +2. Ensure ZenML is connected to the remote server. +3. Install the ZenML `azure` integration: + ```shell + zenml integration install azure + ``` +4. Install Docker or set up a remote image builder. +5. Set up a remote artifact store and container registry. +6. Create an Azure resource group with an AzureML workspace. ### Authentication Methods -1. **Default Authentication**: Combines Azure hosting and local development credentials. -2. **Service Principal Authentication (recommended)**: Create a service principal on Azure, assign permissions, and register a ZenML Azure Service Connector: +1. **Default Authentication**: Simplifies authentication for local development and Azure hosting. +2. **Service Principal Authentication (recommended)**: Connects cloud components securely. Requires creating a service principal and registering a ZenML Azure Service Connector: ```bash zenml service-connector register --type azure -i zenml orchestrator connect -c ``` -## Docker Integration -ZenML builds a Docker image for each pipeline run, named `/zenml:`. +## Docker +ZenML builds a Docker image for each pipeline run at `/zenml:`, containing your code. ## AzureML UI -The AzureML workspace includes a Machine Learning studio for managing and debugging pipelines. You can inspect steps and view execution logs. +AzureML workspace includes a Machine Learning studio for managing and debugging pipelines. Double-click steps to view configurations and logs. -## Configuration Settings -The `AzureMLOrchestratorSettings` class configures compute resources with three modes: +## Settings +The `AzureMLOrchestratorSettings` class configures compute resources for pipeline execution. 
It supports three modes: -1. **Serverless Compute (Default)**: - ```python - azureml_settings = AzureMLOrchestratorSettings(mode="serverless") - ``` +### 1. Serverless Compute (Default) +```python +from zenml.integrations.azure.flavors import AzureMLOrchestratorSettings -2. **Compute Instance**: - ```python - azureml_settings = AzureMLOrchestratorSettings( - mode="compute-instance", - compute_name="my-gpu-instance", - size="Standard_NC6s_v3", - idle_time_before_shutdown_minutes=20, - ) - ``` +azureml_settings = AzureMLOrchestratorSettings(mode="serverless") +``` -3. **Compute Cluster**: - ```python - azureml_settings = AzureMLOrchestratorSettings( - mode="compute-cluster", - compute_name="my-gpu-cluster", - size="Standard_NC6s_v3", - tier="Dedicated", - min_instances=2, - max_instances=10, - idle_time_before_scaledown_down=60, - ) - ``` +### 2. Compute Instance +```python +azureml_settings = AzureMLOrchestratorSettings( + mode="compute-instance", + compute_name="my-gpu-instance", + size="Standard_NC6s_v3", + idle_time_before_shutdown_minutes=20, +) +``` + +### 3. Compute Cluster +```python +azureml_settings = AzureMLOrchestratorSettings( + mode="compute-cluster", + compute_name="my-gpu-cluster", + size="Standard_NC6s_v3", + tier="Dedicated", + min_instances=2, + max_instances=10, + idle_time_before_scaledown_down=60, +) +``` ## Scheduling Pipelines -AzureML orchestrator supports scheduled pipeline runs using `JobSchedules` with cron expressions or intervals: +AzureML orchestrator supports scheduling pipelines using cron expressions or intervals: ```python +from zenml.config.schedule import Schedule + pipeline.run(schedule=Schedule(cron_expression="*/5 * * * *")) ``` -Note: ZenML only initiates the schedule; users must manage it via the Azure UI. +Note: Users must manage the lifecycle of schedules via the Azure UI. For more details on compute sizes, refer to the [AzureML documentation](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2#supported-vm-series-and-sizes). @@ -9203,77 +9375,74 @@ For more details on compute sizes, refer to the [AzureML documentation](https:// # Tekton Orchestrator Documentation Summary ## Overview -Tekton is an open-source framework for CI/CD, enabling developers to build, test, and deploy applications across various environments. This component is designed for remote ZenML deployments only. +Tekton is an open-source framework for CI/CD systems that enables developers to build, test, and deploy applications across various environments. The Tekton orchestrator in ZenML is designed for remote deployments and is not recommended for local setups. ## When to Use Tekton -- Proven production-grade orchestrator. -- UI for tracking pipeline runs. -- Familiarity with Kubernetes or willingness to set it up. -- Ability to deploy and maintain Tekton Pipelines. +Use the Tekton orchestrator if: +- You need a production-grade orchestrator. +- You require a UI to track pipeline runs. +- You are comfortable with Kubernetes setup and maintenance. +- You can deploy and maintain Tekton Pipelines. ## Deployment Steps -1. **Set Up Kubernetes Cluster**: - - **AWS**: - - Use an EKS cluster. - - Configure `kubectl`: - ```powershell - aws eks --region REGION update-kubeconfig --name CLUSTER_NAME - ``` - - Install Tekton Pipelines. - - **GCP**: - - Use a GKE cluster. - - Configure `kubectl`: - ```powershell - gcloud container clusters get-credentials CLUSTER_NAME - ``` - - Install Tekton Pipelines. - - **Azure**: - - Use an AKS cluster. 
- - Configure `kubectl`: - ```powershell - az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME - ``` - - Install Tekton Pipelines. +1. **Set Up Kubernetes Cluster**: Ensure you have a remote ZenML server and a Kubernetes cluster (EKS, GKE, or AKS) set up. +2. **Install `kubectl`**: Download and configure `kubectl` for your cluster. +3. **Install Tekton Pipelines**: Follow the installation guide for Tekton Pipelines. + +**Example Commands**: +- For AWS EKS: + ```powershell + aws eks --region REGION update-kubeconfig --name CLUSTER_NAME + ``` +- For GCP GKE: + ```powershell + gcloud container clusters get-credentials CLUSTER_NAME + ``` +- For Azure AKS: + ```powershell + az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME + ``` -**Note**: Ensure Tekton Pipelines version >=0.38.3 is used. +**Note**: Ensure Tekton Pipelines version is >=0.38.3. ## Usage Requirements -- Install ZenML `tekton` integration: - ```shell - zenml integration install tekton -y - ``` -- Docker installed and running. -- Remote artifact store and container registry configured. -- Optional: `kubectl` installed for context management. +To use the Tekton orchestrator: +- Install the ZenML `tekton` integration: + ```shell + zenml integration install tekton -y + ``` +- Ensure Docker is installed and running. +- Have a remote artifact store and container registry as part of your stack. +- Optionally, configure `kubectl` for remote access. -## Registering the Orchestrator +### Registering the Orchestrator 1. **With Service Connector**: - ```shell - zenml orchestrator register --flavor tekton - zenml orchestrator connect --connector - zenml stack register -o ... --set - ``` + ```shell + zenml orchestrator register --flavor tekton + zenml orchestrator connect --connector + zenml stack register -o ... --set + ``` 2. **Without Service Connector**: - ```shell - zenml orchestrator register --flavor=tekton --kubernetes_context= - zenml stack register -o ... --set - ``` + ```shell + zenml orchestrator register --flavor=tekton --kubernetes_context= + zenml stack register -o ... --set + ``` ## Running a Pipeline -Execute a ZenML pipeline: +To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` ## Tekton UI -Access the Tekton UI for pipeline run details: +Access the Tekton UI for detailed pipeline run information: ```bash kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' ``` ## Additional Configuration -Configure `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: +You can customize the Tekton orchestrator using `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: ```python from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings @@ -9285,7 +9454,7 @@ tekton_settings = TektonOrchestratorSettings( ) ``` -Specify hardware requirements using `ResourceSettings`: +Specify resource settings for hardware requirements: ```python resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` @@ -9302,9 +9471,9 @@ def my_step(): ``` ## GPU Configuration -For GPU usage, follow specific instructions to enable CUDA for acceleration. +For running steps on GPU, follow the instructions to enable CUDA for acceleration. -For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/#zenml.integrations.tekton.orchestrators.tekton_orchestrator.TektonOrchestrator). 
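+
+Putting these together, a sketch of a pipeline that combines Tekton pod settings with hardware requirements (the node selector key/value and the pipeline name are illustrative assumptions):
+```python
+from zenml import pipeline
+from zenml.config import ResourceSettings
+from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import (
+    TektonOrchestratorSettings,
+)
+
+tekton_settings = TektonOrchestratorSettings(
+    pod_settings={"node_selectors": {"kubernetes.io/os": "linux"}}
+)
+resource_settings = ResourceSettings(cpu_count=8, memory="16GB")
+
+# "orchestrator" carries the Tekton-specific pod options, while
+# "resources" carries the hardware requirements for the steps.
+@pipeline(settings={"orchestrator": tekton_settings, "resources": resource_settings})
+def my_tekton_pipeline():
+    ...
+```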
+For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/#zenml.integrations.tekton.orchestrators.tekton_orchestrator.TektonOrchestrator). ================================================== @@ -9312,61 +9481,63 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr ### Kubeflow Orchestrator Overview -The Kubeflow orchestrator is a ZenML integration that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) for running pipelines. It is designed for remote ZenML deployments and is not suitable for local setups. +The Kubeflow orchestrator is a ZenML integration that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to manage and run pipelines. It is designed for remote ZenML deployments and is not suitable for local setups. ### When to Use Use the Kubeflow orchestrator if you need: -- A production-grade orchestrator. -- A UI for tracking pipeline runs. -- Familiarity with Kubernetes or willingness to set it up. +- A production-grade orchestrator with a UI for tracking pipeline runs. +- Familiarity with Kubernetes or willingness to set up a Kubernetes cluster. - Capability to deploy and maintain Kubeflow Pipelines. ### Deployment Steps -To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. Here’s a brief guide for various cloud providers: +To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. The setup varies by cloud provider: #### AWS -1. Set up an EKS cluster. -2. Install AWS CLI and configure `kubectl`: +1. Set up an [EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html). +2. Configure AWS CLI and `kubectl`: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. +4. Optionally, set up an AWS Service Connector for secure access. #### GCP -1. Set up a GKE cluster. -2. Install Google Cloud CLI and configure `kubectl`: +1. Set up a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/quickstart). +2. Configure Google Cloud CLI and `kubectl`: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. +4. Optionally, set up a GCP Service Connector. #### Azure -1. Set up an AKS cluster. -2. Install Azure CLI and configure `kubectl`: +1. Set up an [AKS cluster](https://azure.microsoft.com/en-in/services/kubernetes-service/#documentation). +2. Configure Azure CLI and `kubectl`: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. -4. Adjust `containerRuntimeExecutor` in the workflow controller's ConfigMap if necessary. +4. Adjust the workflow controller's `containerRuntimeExecutor` to `k8sapi` if using containerd. #### Other Kubernetes 1. Set up a Kubernetes cluster. 2. Install `kubectl` and configure it. 3. Install Kubeflow Pipelines. +4. Optionally, set up a Kubernetes Service Connector. ### Usage Requirements To use the Kubeflow orchestrator: -- A Kubernetes cluster with Kubeflow Pipelines. +- A Kubernetes cluster with Kubeflow Pipelines installed. - A remote ZenML server. -- Install the ZenML `kubeflow` integration: +- ZenML `kubeflow` integration installed: ```shell zenml integration install kubeflow ``` - Docker installed (unless using a remote Image Builder). -- Optionally, `kubectl` installed. 
+- `kubectl` installed (optional). ### Registering the Orchestrator @@ -9385,14 +9556,14 @@ To use the Kubeflow orchestrator: ### Running a Pipeline -To run a ZenML pipeline: +Run a ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` -### Kubeflow UI +### Accessing Kubeflow UI -Access the Kubeflow UI to view pipeline run details: +Retrieve the Kubeflow UI URL for pipeline runs: ```python from zenml.client import Client @@ -9404,27 +9575,28 @@ orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] You can configure the Kubeflow orchestrator with `KubeflowOrchestratorSettings` for: - `client_args`: KFP client arguments. -- `user_namespace`: Namespace for experiments. -- `pod_settings`: Node selectors and tolerations. +- `user_namespace`: Namespace for experiments and runs. +- `pod_settings`: Node selectors, affinity, and tolerations. -Example: +Example configuration: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( + client_args={}, user_namespace="my_namespace", pod_settings={"affinity": {...}, "tolerations": [...]} ) ``` -### Multi-Tenancy Considerations +### Multi-Tenancy Note -For multi-tenant deployments, include `kubeflow_hostname` when registering: +For multi-tenant deployments, include the `kubeflow_hostname` parameter when registering: ```shell zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` -Set the namespace and authentication credentials: +Use the following for authentication: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="{{kubeflow_secret.username}}", @@ -9442,84 +9614,85 @@ zenml secret create kubeflow_secret --username=admin --password=abc123 ### Conclusion -For detailed configuration and attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator). +For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/vertex.md === -# Google Cloud Vertex AI Orchestrator Summary +# Google Cloud Vertex AI Orchestrator Documentation Summary ## Overview -Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP) designed for running production-ready, repeatable cloud orchestrators with minimal setup. It is intended for use in remote ZenML deployment scenarios. +Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP) for running production-ready, repeatable pipelines with minimal setup. This orchestrator is intended for remote ZenML deployments only. ## When to Use Use the Vertex orchestrator if: - You are using GCP. -- You need a production-grade orchestrator with a UI for tracking pipeline runs. -- You prefer a managed, serverless solution for running pipelines. +- You need a production-grade orchestrator with UI tracking. +- You prefer a managed, serverless solution. -## Deployment Requirements -1. Deploy ZenML to the cloud, ideally in the same GCP project as Vertex infrastructure. -2. Ensure connection to the remote ZenML server. -3. Enable Vertex-related APIs on your GCP project. +## Deployment Steps +1. 
**Deploy ZenML to the Cloud**: Recommended to deploy in the same GCP project as Vertex infrastructure. +2. **Enable Vertex APIs**: Ensure relevant APIs are enabled in your GCP project. -## Usage Requirements -- Install ZenML `gcp` integration: - ```shell - zenml integration install gcp - ``` -- Install and run Docker. -- Set up a remote artifact store and container registry. -- Obtain GCP credentials with appropriate permissions. +## Prerequisites +- Install ZenML GCP integration: + ```shell + zenml integration install gcp + ``` +- Docker installed and running. +- Remote artifact store and container registry configured. +- GCP credentials with necessary permissions. ### GCP Credentials and Permissions -You need a GCP user account or service accounts with permissions for: -- Creating jobs in Vertex Pipelines (e.g., `Vertex AI User` role). -- Running Vertex AI pipelines (e.g., `Vertex AI Service Agent` role). -- Writing to the artifact store (e.g., `Storage Object Creator Role`). +You need a GCP user account or service accounts with proper permissions. Authentication options include: +- Using `gcloud` CLI. +- Service account key file. +- Recommended: GCP Service Connector with linked credentials. + +### Vertex AI Pipeline Components +1. **ZenML Client Environment**: Runs ZenML code, requires permissions to create jobs in Vertex Pipelines. +2. **Vertex AI Pipeline Environment**: Runs pipeline steps, requires a workload service account with permissions to execute pipelines. ### Configuration Use-Cases -1. **Local `gcloud` CLI with User Account**: - ```shell - zenml orchestrator register \ - --flavor=vertex \ - --project= \ - --location= \ - --synchronous=true - ``` +1. **Local `gcloud` CLI**: + ```shell + zenml orchestrator register \ + --flavor=vertex \ + --project= \ + --location= \ + --synchronous=true + ``` 2. **GCP Service Connector with Single Service Account**: - ```shell - zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic - - zenml orchestrator register \ - --flavor=vertex \ - --location= \ - --synchronous=true \ - --workload_service_account=@.iam.gserviceaccount.com + ```shell + zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic - zenml orchestrator connect --connector - ``` + zenml orchestrator register \ + --flavor=vertex \ + --location= \ + --synchronous=true \ + --workload_service_account=@.iam.gserviceaccount.com + + zenml orchestrator connect --connector + ``` -3. **GCP Service Connector with Different Service Accounts**: - - Requires multiple service accounts with specific permissions. - - Register the service connector and orchestrator similarly as above. +3. **GCP Service Connector with Different Service Accounts**: Involves multiple service accounts for least privilege access. ### Configuring the Stack -To register and activate a stack with the orchestrator: +To register and activate a stack with the new orchestrator: ```shell zenml stack register -o ... --set ``` ### Running Pipelines -Run a ZenML pipeline using: +Run any ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Vertex UI -Access pipeline run details via the Vertex UI. Get the URL programmatically: +Access pipeline run details and logs via the Vertex UI. 
Retrieve the URL in Python: ```python from zenml.client import Client @@ -9528,7 +9701,7 @@ orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Scheduling Pipelines -Use native scheduling capabilities: +Schedule pipelines using: ```python from zenml.config.schedule import Schedule @@ -9536,6 +9709,7 @@ pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` +**Note**: Only `cron_expression`, `start_time`, and `end_time` are supported. ### Additional Configuration Configure labels and resource settings: @@ -9545,7 +9719,6 @@ from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrch vertex_settings = VertexOrchestratorSettings(labels={"key": "value"}) resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` - For GPU usage: ```python vertex_settings = VertexOrchestratorSettings( @@ -9557,65 +9730,39 @@ resource_settings = ResourceSettings(gpu_count=1) ### Enabling CUDA for GPU Follow specific instructions to enable CUDA for GPU acceleration. -For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.flavors.vertex_orchestrator_flavor.VertexOrchestratorSettings). +For further details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.flavors.vertex_orchestrator_flavor.VertexOrchestratorSettings). ================================================== === File: docs/book/component-guide/orchestrators/custom.md === -### Developing a Custom Orchestrator in ZenML - -#### Overview -To create a custom orchestrator in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). - -#### Base Implementation -ZenML's `BaseOrchestrator` abstracts ZenML-specific details, providing a simplified interface for orchestration tools. - -```python -from abc import ABC, abstractmethod -from typing import Any, Dict, Type -from zenml.models import PipelineDeploymentResponseModel -from zenml.enums import StackComponentType -from zenml.stack import StackComponent, StackComponentConfig, Stack, Flavor - -class BaseOrchestratorConfig(StackComponentConfig): - """Base class for all ZenML orchestrator configurations.""" - -class BaseOrchestrator(StackComponent, ABC): - @abstractmethod - def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> Any: - """Prepares and runs the pipeline or returns an intermediate representation.""" - - @abstractmethod - def get_orchestrator_run_id(self) -> str: - """Returns a unique run ID for the active orchestrator run.""" - -class BaseOrchestratorFlavor(Flavor): - @property - @abstractmethod - def name(self): - """Returns the name of the flavor.""" - - @property - def type(self) -> StackComponentType: - return StackComponentType.ORCHESTRATOR - - @property - def config_class(self) -> Type[BaseOrchestratorConfig]: - return BaseOrchestratorConfig - - @property - @abstractmethod - def implementation_class(self) -> Type["BaseOrchestrator"]: - """Implementation class for this flavor.""" -``` - -#### Creating a Custom Orchestrator -1. **Inherit from `BaseOrchestrator`:** Implement `prepare_or_run_pipeline(...)` and `get_orchestrator_run_id()`. -2. **Configuration Class:** Inherit from `BaseOrchestratorConfig` for custom parameters. -3. 
**Flavor Class:** Inherit from `BaseOrchestratorFlavor` and define the flavor's name. +# Custom Orchestrator Development in ZenML -To register the orchestrator flavor, use: +## Overview +This documentation provides guidance on developing a custom orchestrator in ZenML, an orchestration framework. Familiarity with ZenML's component flavor concepts is recommended before proceeding. + +## Base Implementation +ZenML allows for orchestration with various tools through the `BaseOrchestrator`, which abstracts ZenML-specific details and provides a simplified interface. + +### Key Classes +- **BaseOrchestratorConfig**: Base class for all orchestrator configurations. +- **BaseOrchestrator**: Abstract class requiring implementation of: + - `prepare_or_run_pipeline(deployment, stack, environment)`: Prepares and runs the pipeline. + - `get_orchestrator_run_id()`: Returns a unique run ID for the active orchestrator run. + +- **BaseOrchestratorFlavor**: Base class for orchestrator flavors, requiring: + - `name`: Flavor name. + - `type`: Returns `StackComponentType.ORCHESTRATOR`. + - `config_class`: Returns `BaseOrchestratorConfig`. + - `implementation_class`: Implementation class for the flavor. + +## Creating a Custom Orchestrator +1. **Inherit from `BaseOrchestrator`** and implement the required methods. +2. **Create a configuration class** inheriting from `BaseOrchestratorConfig` for custom parameters. +3. **Inherit from `BaseOrchestratorFlavor`**, providing a name for the flavor. + +### Registering the Flavor +Use the CLI to register your orchestrator flavor: ```shell zenml orchestrator flavor register ``` @@ -9624,22 +9771,23 @@ Example: zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` -**Note:** Initialize ZenML at the root of your repository for proper flavor resolution. +### Listing Available Flavors +To see registered flavors: +```shell +zenml orchestrator flavor list +``` -#### Implementation Guide -1. **Create Orchestrator Class:** Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker. -2. **Implement Methods:** - - `prepare_or_run_pipeline(...)`: Convert the pipeline for your orchestration tool and run it. - - `get_orchestrator_run_id()`: Return a unique ID for each pipeline run. +## Implementation Guide +1. **Create your orchestrator class**: Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker. +2. **Implement `prepare_or_run_pipeline(...)`**: Convert the pipeline for your orchestration tool and run it, ensuring correct execution order and environment variables. +3. **Implement `get_orchestrator_run_id()`**: Return a unique ID for each pipeline run. -#### Optional Features -- **Scheduling:** Handle `deployment.schedule` if supported. -- **Resource Specification:** Manage resources like CPUs/GPUs from `step.config.resource_settings`. +### Optional Features +- **Scheduling**: Handle `deployment.schedule` if supported. +- **Resource Specification**: Manage CPU, GPU, or memory settings via `step.config.resource_settings`. -#### Code Sample +### Code Sample ```python -from typing import Dict -from zenml.entrypoints import StepEntrypointConfiguration from zenml.models import PipelineDeploymentResponseModel from zenml.orchestrators import ContainerizedOrchestrator from zenml.stack import Stack @@ -9658,111 +9806,107 @@ class MyOrchestrator(ContainerizedOrchestrator): ... 
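+    # Illustrative sketch only: the two abstract methods a custom
+    # orchestrator must fill in (helper usage assumed, see SDK docs).
+    def get_orchestrator_run_id(self) -> str:
+        # Return an ID that is unique per pipeline run but identical for
+        # all steps of that run, e.g. read from an environment variable
+        # that the orchestration tool sets.
+        ...
+
+    def prepare_or_run_pipeline(
+        self,
+        deployment: PipelineDeploymentResponseModel,
+        stack: Stack,
+        environment: dict,
+    ):
+        # Walk the steps in topological order, resolve the Docker image
+        # built for each step, and submit it to the backing tool together
+        # with the given environment variables.
+        for step_name in deployment.step_configurations:
+            image = self.get_image(deployment=deployment, step_name=step_name)
+            ...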
``` -#### Enabling CUDA for GPU -For GPU support, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for acceleration. +## Enabling GPU Support +To run steps on a GPU, follow the specific instructions to enable CUDA for GPU acceleration. + +For more details and examples, refer to the full documentation and source code on GitHub. ================================================== === File: docs/book/how-to/debug-and-solve-issues.md === -# Debugging Guide for ZenML +# ZenML Debugging Guide -This guide provides best practices for debugging common issues in ZenML and obtaining assistance. +This guide provides best practices for debugging common issues in ZenML, including when to seek help and how to effectively communicate your problem. -## When to Seek Help -Before asking for help, follow this checklist: -- Search Slack, GitHub issues, and the ZenML documentation. +### When to Get Help +Before asking for help, check the following: +- Search Slack, GitHub issues, and ZenML documentation. - Review the [common errors](debug-and-solve-issues.md#most-common-errors) section. - Analyze [additional logs](debug-and-solve-issues.md#41-additional-logs) and [client/server logs](debug-and-solve-issues.md#client-and-server-logs). -If unresolved, post your question on [Slack](https://zenml.io/slack). +If you still need assistance, post your question on [Slack](https://zenml.io/slack). -## How to Post on Slack -Provide the following information for effective troubleshooting: +### How to Post on Slack +Provide the following information for clarity: -### 1. System Information -Run the command to gather system info: -```shell -zenml info -a -s -``` -For specific package issues, use: -```shell -zenml info -p -``` +1. **System Information**: Run the command below and share the output: + ```shell + zenml info -a -s + ``` + For specific package issues, use: + ```shell + zenml info -p + ``` -### 2. Describe the Issue -- What were you trying to achieve? -- What did you expect vs. what actually happened? +2. **What Happened**: Describe your goal, expectations, and actual results. -### 3. Steps to Reproduce -Outline the steps to reproduce the error, either in text or video format. +3. **Reproducing the Error**: Provide step-by-step instructions or a video. -### 4. Relevant Log Output -Attach relevant logs and error tracebacks. Include outputs from: -```shell -zenml status -zenml stack describe -``` -For orchestrator logs, provide relevant pod logs if applicable. +4. **Relevant Log Output**: Attach relevant logs and error tracebacks. Include outputs from: + ```shell + zenml status + zenml stack describe + ``` + For orchestrator logs, include those from the relevant pod. -#### 4.1 Additional Logs -If default logs are insufficient, adjust logging verbosity: +### Additional Logs +If default logs are insufficient, increase verbosity by setting the environment variable: ```shell export ZENML_LOGGING_VERBOSITY=DEBUG ``` -Refer to documentation for setting environment variables on your OS. +Refer to documentation for setting environment variables on different OS. -## Client and Server Logs -To view server logs: +### Client and Server Logs +To view server logs, run: ```shell zenml logs ``` -## Common Errors -### Error Initializing REST Store -Occurs when the local ZenML server is not running after a restart: -```bash -RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237'... -``` -Run `zenml login --local` after each restart. +### Common Errors +1. 
**Error initializing rest store**:
+   ```bash
+   RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237': ...
+   ```
+   Solution: Re-run `zenml login --local` after a machine restart.

-### Column 'step_configuration' Cannot Be Null
-This error indicates a configuration string is too long:
-```bash
-sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null")
-```
+2. **Column 'step_configuration' cannot be null**:
+   ```bash
+   sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null")
+   ```
+   Solution: Ensure step configurations are within the character limit.

-### 'NoneType' Object Has No Attribute 'Name'
-This error occurs when a required stack component is missing:
-```shell
-AttributeError: 'NoneType' object has no attribute 'name'
-```
-To resolve, register the necessary component:
-```shell
-zenml experiment-tracker register mlflow_tracker --flavor=mlflow
-zenml stack update -e mlflow_tracker
-```
+3. **'NoneType' object has no attribute 'name'**:
+   ```shell
+   AttributeError: 'NoneType' object has no attribute 'name'
+   ```
+   Solution: Register the required stack components, e.g.:
+   ```shell
+   zenml experiment-tracker register mlflow_tracker --flavor=mlflow
+   zenml stack update -e mlflow_tracker
+   ```

-This guide aims to streamline the debugging process for ZenML users, ensuring efficient resolution of issues.
+This guide aims to streamline the debugging process and enhance communication for effective problem resolution in ZenML.

==================================================

=== File: docs/book/how-to/project-setup-and-management/interact-with-secrets.md ===

-### Summary of ZenML Secrets Documentation
+# ZenML Secrets Management Documentation Summary

-#### What is a ZenML Secret?
-ZenML secrets are collections of **key-value pairs** securely stored in the ZenML secrets store, identified by a **name** for easy retrieval in pipelines and stacks.
+## Overview of ZenML Secrets
+ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks.

-#### Creating a Secret
-**CLI Method:**
+## Creating Secrets
+
+### CLI Method
To create a secret named `<SECRET_NAME>` with key-value pairs:
```shell
zenml secret create <SECRET_NAME> --<key1>=<value1> --<key2>=<value2>
```
Alternatively, use JSON or YAML format:
```shell
-zenml secret create <SECRET_NAME> --values='{"key1":"value2","key2":"value2"}'
+zenml secret create <SECRET_NAME> --values='{"key1":"value1","key2":"value2"}'
```
For interactive creation:
```shell
@@ -9773,7 +9917,8 @@ For large values or special characters, read from a file:
zenml secret create <SECRET_NAME> --key=@path/to/file.txt
```

-**Python SDK Method:**
+### Python SDK Method
+Using the ZenML client API:
```python
from zenml.client import Client

@@ -9781,36 +9926,27 @@ client = Client()
client.create_secret(name="my_secret", values={"username": "admin", "password": "abc123"})
```

-#### Secret Management Commands
-Use CLI commands to list, update, and delete secrets.
For interactive registration of missing secrets in a stack:
-```shell
-zenml stack register-secrets [<stack-name>]
-```
-
-#### Scoping Secrets
-Secrets can be scoped to a user, ensuring access control:
+## Secret Scope
+Secrets can be scoped to a user, ensuring only the active user can access them:
```shell
zenml secret create <SECRET_NAME> --scope user --<key>=<value>
```

-#### Accessing Registered Secrets
-To reference secrets in stack components, use:
-```shell
-{{<SECRET_NAME>.<SECRET_KEY>}}
-```
-Example:
+## Accessing Secrets
+### Reference in Stack Components
+Use the syntax `{{<SECRET_NAME>.<SECRET_KEY>}}` to reference secrets in stack component attributes:
```shell
zenml secret create mlflow_secret --username=admin --password=abc123
zenml experiment-tracker register mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}}
```

-#### Secret Validation Levels
-Control secret validation with the environment variable `ZENML_SECRET_VALIDATION_LEVEL`:
-- `NONE`: Disables validation.
-- `SECRET_EXISTS`: Validates existence of secrets only.
-- `SECRET_AND_KEY_EXISTS`: (default) Validates both existence of secrets and keys.
+### Validation of Secrets
+ZenML validates the existence of secrets and keys before running a pipeline. Control the validation level with `ZENML_SECRET_VALIDATION_LEVEL`:
+- `NONE`: No validation.
+- `SECRET_EXISTS`: Checks if the secret exists.
+- `SECRET_AND_KEY_EXISTS`: (default) Checks both secret and key existence.

-#### Fetching Secret Values in Steps
+### Fetching Secret Values in Steps
Access secrets in steps using the ZenML `Client` API:
```python
from zenml import step
@@ -9818,14 +9954,12 @@ from zenml.client import Client

@step
def secret_loader() -> None:
-    secret = Client().get_secret(<SECRET_NAME>)
-    authenticate_to_some_api(
-        username=secret.secret_values["username"],
-        password=secret.secret_values["password"],
-    )
+    secret = Client().get_secret("<SECRET_NAME>")
+    authenticate_to_some_api(username=secret.secret_values["username"], password=secret.secret_values["password"])
```

-This documentation provides essential commands and methods for managing secrets in ZenML, ensuring secure handling of sensitive information in machine learning workflows.
+## Additional Resources
+For more details, refer to the full CLI guide [here](https://sdkdocs.zenml.io/latest/cli/#zenml.cli--secrets-management) and the Client API reference [here](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/).

==================================================

@@ -9835,23 +9969,24 @@ This documentation provides essential commands and methods for managing secrets

This section outlines the essential steps for setting up and managing ZenML projects.

-## Key Steps for Project Setup:
+## Key Steps for Project Setup
+
1. **Installation**: Install ZenML using pip:
   ```bash
   pip install zenml
   ```
-2. **Initialize a Project**: Create a new ZenML project:
+2. **Initialize a Project**: Create a new ZenML project with:
   ```bash
   zenml init
   ```
-3. **Configure Stack**: Set up a stack by selecting components (e.g., orchestrators, artifact stores):
+3. **Configure a Stack**: Set up a stack that includes components like orchestrators and artifact stores. Use:
   ```bash
-   zenml stack register my_stack --orchestrator=<orchestrator> --artifact-store=<artifact-store>
+   zenml stack register <STACK_NAME> --orchestrator <ORCHESTRATOR_NAME> --artifact-store <ARTIFACT_STORE_NAME>
   ```
-4. **Create Pipelines**: Define pipelines using decorators:
+4. **Create Pipelines**: Define pipelines using decorators and functions.
Example:
   ```python
   @pipeline
   def my_pipeline():
@@ -9859,22 +9994,18 @@ This section outlines the essential steps for setting up and managing ZenML proj
       step2 = step2_op(step1)
   ```

-5. **Run Pipelines**: Execute the pipeline:
+5. **Run Pipelines**: Execute pipelines with:
   ```bash
-   zenml pipeline run my_pipeline
+   zenml pipeline run <pipeline_name>
   ```

-## Project Management:
-- **Version Control**: Use Git for versioning your ZenML projects.
-- **Environment Management**: Utilize virtual environments to manage dependencies.
-- **Documentation**: Maintain clear documentation for project structure and components.
+## Project Management

-## Best Practices:
-- Regularly update dependencies.
-- Use consistent naming conventions for pipelines and stacks.
-- Monitor and log pipeline executions for troubleshooting.
+- **Version Control**: Use Git for version control to manage changes in your project.
+- **Environment Management**: Utilize virtual environments to isolate dependencies.
+- **Documentation**: Maintain clear documentation for project structure and components.

-This summary provides a concise overview of the project setup and management processes within ZenML, ensuring critical information is retained for effective understanding and implementation.
+By following these steps, users can effectively set up and manage ZenML projects, ensuring a streamlined workflow for machine learning operations.

==================================================

@@ -9882,124 +10013,105 @@ This summary provides a concise overview of the project setup and management pro

# Access Management and Roles in ZenML

-## Overview
-This guide outlines user roles and access management in ZenML, emphasizing security and efficiency.
+This guide outlines user roles and access management in ZenML, essential for project security and efficiency.

## Typical Roles in an ML Project
+Common roles include:
- **Data Scientists**: Develop and run pipelines.
- **MLOps Platform Engineers**: Manage infrastructure and stack components.
- **Project Owners**: Oversee ZenML deployment and user access.

-Roles may vary in your team but can be aligned with these responsibilities.
-
-### Role Creation
-You can create roles in ZenML Pro with specific permissions and assign them to users or teams. [Sign up for a free trial](https://cloud.zenml.io/).
+Roles may vary, but responsibilities can be adapted to your project.

## Service Connectors
-Service connectors integrate external cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors, while other team members can use them without accessing sensitive credentials.
+Service connectors integrate cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors, while Data Scientists can use them to create stack components without accessing sensitive credentials.

### Example Permissions
-- **Data Scientist Role**: Can use connectors to create stack components and run pipelines but cannot create, update, or delete connectors or access their credentials.
-- **MLOps Platform Engineer Role**: Has full permissions to manage connectors and access secret values.
+- **Data Scientist Role**: Can create stack components and run pipelines but cannot create, update, or delete connectors or read secret values.
+- **MLOps Platform Engineer Role**: Has permissions to create, update, delete connectors, and read secret values.

-### Note
-RBAC features are available in ZenML Pro.
Learn more about roles [here](../../../getting-started/zenml-pro/roles.md).
+RBAC features are available in ZenML Pro.

-## Server Upgrade Responsibility
-Project Owners decide on server upgrades, considering team requirements. MLOps Platform Engineers are responsible for executing upgrades, ensuring data backup, and minimizing service disruption. For multi-team environments, ZenML Pro supports [multi-tenancy](../../../getting-started/zenml-pro/tenants.md).
+## Upgrading the ZenML Server
+Project Owners decide on server upgrades after consulting teams. MLOps Platform Engineers typically handle the upgrade process, ensuring data backup and minimal service disruption.

-## Pipeline Migration and Maintenance
-Data Scientists own pipeline code, while Platform Engineers ensure compatibility with new ZenML versions. Both should review release notes and migration guides during upgrades.
+## Migrating and Maintaining Pipelines
+Data Scientists own pipeline code but must collaborate with Platform Engineers to test compatibility with new ZenML versions. They should review release notes and migration guides during upgrades.

## Best Practices for Access Management
- **Regular Audits**: Review user access and permissions periodically.
- **Role-Based Access Control (RBAC)**: Streamline permission management.
-- **Least Privilege**: Grant minimal necessary permissions.
-- **Documentation**: Maintain clear records of roles and access policies.
-
-### Note
-RBAC and permission management are features of ZenML Pro.
+- **Least Privilege**: Assign minimal necessary permissions.
+- **Documentation**: Keep clear records of roles and access policies.

-By adhering to these guidelines, you can maintain a secure and collaborative ZenML environment.
+RBAC and permission assignment are exclusive to ZenML Pro users. Following these practices ensures a secure and collaborative ZenML environment.

==================================================

=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md ===

-# Shared Libraries and Logic for Teams
+# Sharing Code and Libraries within Teams

## Overview
-This guide outlines how teams can share code and libraries using ZenML to enhance collaboration, standardization, and robustness across projects.
+This guide outlines how teams can share code libraries and components using ZenML to enhance collaboration, standardization, and robustness across projects.

## What Can Be Shared
-ZenML supports sharing various custom components:
-
-### Custom Flavors
-- Create a custom flavor in a shared repository.
-- Implement it as per ZenML documentation.
-- Register using the ZenML CLI:
-  ```bash
-  zenml artifact-store flavor register <path.to.MyArtifactStoreFlavor>
-  ```
+### Custom Components
+1. **Custom Flavors**: Custom integrations not built into ZenML.
+   - Create in a shared repository.
+   - Implement as per ZenML documentation.
+   - Register using ZenML CLI:
+     ```bash
+     zenml artifact-store flavor register <path.to.MyArtifactStoreFlavor>
+     ```

-### Custom Steps
-- Create and share custom steps via a separate repository, referenced like standard Python modules.
+2. **Custom Steps**: Created in a separate repository and referenced like Python modules.

-### Custom Materializers
-- Develop a custom materializer in a shared repository.
-- Implement as described in ZenML documentation, allowing team members to import and use it.
+3. **Custom Materializers**: Common components for sharing.
+   - Create in a shared repository.
+   - Implement as per ZenML documentation.
+   - Import and use in projects (a minimal materializer sketch follows below).
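+
+A minimal sketch of such a shared materializer (the `MyObj`/`MyObjMaterializer` names are hypothetical; the `load`/`save` interface follows the ZenML materializer docs):
+```python
+import json
+import os
+from typing import Type
+
+from zenml.enums import ArtifactType
+from zenml.io import fileio
+from zenml.materializers.base_materializer import BaseMaterializer
+
+class MyObj:
+    def __init__(self, value: str):
+        self.value = value
+
+class MyObjMaterializer(BaseMaterializer):
+    ASSOCIATED_TYPES = (MyObj,)
+    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA
+
+    def load(self, data_type: Type[MyObj]) -> MyObj:
+        # Read the object back from its artifact store location.
+        with fileio.open(os.path.join(self.uri, "data.json"), "r") as f:
+            return MyObj(value=json.load(f)["value"])
+
+    def save(self, data: MyObj) -> None:
+        # Persist the object to its artifact store location.
+        with fileio.open(os.path.join(self.uri, "data.json"), "w") as f:
+            json.dump({"value": data.value}, f)
+```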
## How to Distribute Shared Components
-
### Shared Private Wheels
-A method for internal distribution of Python code:
- **Benefits**: Easy installation, version and dependency management, privacy.
- **Setup**:
  1. Create a private PyPI server (e.g., AWS CodeArtifact).
  2. Build code into wheel format.
-  3. Upload to the server.
+  3. Upload to the private server.
  4. Configure pip to use the private server.
  5. Install packages using pip (see the build-and-upload sketch at the end of this section).

### Using Shared Libraries with `DockerSettings`
-To include shared libraries in Docker images:
-- Specify requirements:
-  ```python
-  import os
-  from zenml.config import DockerSettings
-  from zenml import pipeline
-
-  docker_settings = DockerSettings(
-      requirements=["my-simple-package==0.1.0"],
-      environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"}
-  )
-
-  @pipeline(settings={"docker": docker_settings})
-  def my_pipeline(...):
-      ...
-  ```
-- Alternatively, use a requirements file:
-  ```python
-  docker_settings = DockerSettings(requirements="/path/to/requirements.txt")
+- Specify shared libraries via `DockerSettings`; ZenML installs them when building the pipeline's Docker image.
+- **Installation Methods**:
+  - List of requirements:
+    ```python
+    import os
+    from zenml.config import DockerSettings

-  @pipeline(settings={"docker": docker_settings})
-  def my_pipeline(...):
-      ...
-  ```
-  The `requirements.txt` should include:
-  ```
-  --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/
-  my-simple-package==0.1.0
-  ```
+    docker_settings = DockerSettings(
+        requirements=["my-simple-package==0.1.0"],
+        environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"}
+    )
+    ```
+  - Requirements file:
+    ```python
+    docker_settings = DockerSettings(requirements="/path/to/requirements.txt")
+    ```
+  - Example `requirements.txt`:
+    ```
+    --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/
+    my-simple-package==0.1.0
+    ```

## Best Practices
-- **Version Control**: Use Git for shared code repositories to facilitate collaboration.
-- **Access Controls**: Implement security measures for private PyPI servers.
-- **Documentation**: Maintain clear and comprehensive documentation for shared components.
-- **Regular Updates**: Keep libraries updated and communicate changes to the team.
-- **Continuous Integration**: Set up CI for shared libraries to ensure quality and compatibility.
+- **Version Control**: Use Git for shared code repositories.
+- **Access Controls**: Implement security measures for private servers.
+- **Documentation**: Maintain clear and comprehensive documentation.
+- **Regular Updates**: Keep libraries updated and communicate changes.
+- **Continuous Integration**: Set up CI for quality assurance of shared components.

-By following these guidelines, teams can effectively share code and libraries, enhancing collaboration and accelerating development within the ZenML framework.
+By following these guidelines, teams can effectively share code and libraries within the ZenML framework, enhancing collaboration and accelerating development.
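+
+As a rough illustration of the wheel workflow above (assuming a standard `pyproject.toml` and the widely used `build`/`twine` tools; the index URL is a placeholder):
+```bash
+# Build a wheel for the shared package
+pip install build twine
+python -m build --wheel
+
+# Upload the wheel to the private index (URL is illustrative)
+twine upload --repository-url https://my-private-pypi-server.com/ dist/*
+
+# Consumers point pip at the private index when installing
+pip install my-simple-package==0.1.0 --extra-index-url https://my-private-pypi-server.com/
+```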
================================================== @@ -10007,65 +10119,51 @@ By following these guidelines, teams can effectively share code and libraries, e # Organizing Stacks, Pipelines, Models, and Artifacts in ZenML -## Overview -ZenML architecture consists of **Stacks**, **Pipelines**, **Models**, and **Artifacts**, which are essential for organizing your ML workflow. +This guide provides an overview of how to effectively organize stacks, pipelines, models, and artifacts in ZenML, which are essential for MLOps. -### Key Concepts: -- **Stacks**: Configuration of tools and infrastructure for running pipelines. Composed of components like orchestrators and artifact stores, stacks enable seamless transitions between environments (local, staging, production) and promote reproducibility. - -- **Pipelines**: Series of steps representing tasks in the ML workflow (e.g., data preparation, model training). Modular pipelines allow independent execution and easier management. +## Key Concepts -- **Models**: Collections of related pipelines, artifacts, and metadata. Models facilitate data transfer between pipelines and help manage versions and stages. +- **Stacks**: Configuration of tools and infrastructure for running pipelines. Composed of components like orchestrators, container registries, and artifact stores. They enable seamless transitions between environments (local, staging, production) and can be reused across multiple pipelines to reduce configuration overhead and promote reproducibility. -- **Artifacts**: Outputs of pipeline steps that can be tracked and reused. Artifacts maintain a clear history of data and model versions. +- **Pipelines**: Series of steps representing tasks in the ML workflow, such as data preparation and model training. It’s best practice to separate pipelines by task (e.g., training vs. inference) for modularity and easier management. -## Stack Management -- A single stack can support multiple pipelines, reducing configuration overhead and ensuring a consistent execution environment. -- For detailed stack management, refer to the [Managing Stacks and Components](../../infrastructure-deployment/stack-deployment/README.md) guide. +- **Models**: Collections of pipelines, artifacts, and metadata tied to a specific project. Models facilitate data transfer between pipelines and can be managed through the Model Control Plane, which allows for versioning and stage management. + +- **Artifacts**: Outputs of pipeline steps that are tracked and reused across pipelines. Proper naming and logging of metadata enhance traceability and organization. + +## Organizing Your Workflow -## Organizing Pipelines, Models, and Artifacts ### Pipelines -- Separate pipelines for different tasks (e.g., training vs. inference) enhance modularity and manageability. -- Benefits include independent execution, easier code management, and improved organization of runs. +- Separate pipelines for different tasks to run them independently and manage complexity. +- Allows multiple team members to work on different pipelines without interference. ### Models -- Use Models to connect related pipelines and artifacts. They help in transferring trained models between pipelines. -- The Model Control Plane allows version management and stage assignments for models. +- Use a Model to connect related pipelines and facilitate data transfer. +- The Model Control Plane helps manage model versions and stages. ### Artifacts -- Artifacts should be named for easy identification and reuse. 
Each pipeline run generates a new artifact version, ensuring traceability.
-- Artifacts can be linked to Models for better organization.
+- Track and reuse artifacts across pipelines, ensuring clear history and traceability.
+- Log metadata for better visibility in the Model Control Plane.

## Example Workflow
1. Team members create separate pipelines for feature engineering, training, and inference.
-2. They use a shared stack for local testing, allowing rapid iteration.
-3. The training pipeline produces model artifacts that the inference pipeline consumes.
-4. The Model Control Plane tracks model versions, enabling easy comparisons and promotions to production.
+2. They use a shared stack for local testing, allowing quick iterations.
+3. Ensure preprocessing steps are consistent across pipelines.
+4. Use a ZenML Model to link artifacts from training to inference.
+5. Manage model versions with the Model Control Plane to promote the best performing model to production.

## Rules of Thumb
-### Models
-- One Model per ML use-case.
-- Group related pipelines and artifacts within a Model.
-- Manage versions and stages using the Model Control Plane.
+- **Models**: One Model per ML use-case; group related resources.
+- **Stacks**: Separate stacks for different environments; share production stacks for consistency.
+- **Naming**: Consistent naming conventions; use tags for organization; document configurations and dependencies.

-### Stacks
-- Maintain distinct stacks for different environments.
-- Share production and staging stacks across teams.
-- Keep local stacks simple for rapid development.
-
-### Naming and Organization
-- Use consistent naming conventions.
-- Leverage tags for resource organization.
-- Document stack configurations and dependencies.
-- Ensure pipeline code is modular and reusable.
-
-Following these guidelines will help maintain an efficient and scalable MLOps workflow in ZenML.
+Following these guidelines supports a clean and scalable MLOps workflow as projects grow. For further details, refer to the ZenML documentation.

==================================================

=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md ===

==================================================

=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md ===

### Creating Your Own ZenML Template

-Creating a ZenML template standardizes and shares ML workflows across projects. ZenML utilizes [Copier](https://copier.readthedocs.io/en/stable/) for managing project templates. Follow these steps to create your own template:
+Creating a ZenML template helps standardize and share ML workflows. ZenML utilizes [Copier](https://copier.readthedocs.io/en/stable/) for template management. Here’s a concise guide:

1. **Create a Repository**: Set up a new repository to store your template's code and configuration files.
-
-2. **Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a base to define your ML steps and pipelines.
-3. **Create `copier.yml`**: This file defines the template's parameters and default values.
Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. +2. **Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a base to define your ML steps and pipelines. -4. **Test Your Template**: Use the `copier` command to generate a new project from your template: +3. **Create `copier.yml`**: This file specifies template parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. +4. **Test Your Template**: Use the following command to generate a new project from your template: ```bash copier copy https://github.com/your-username/your-template.git your-project ``` -5. **Use Your Template with ZenML**: Initialize a ZenML project with your template: - +5. **Use with ZenML**: Initialize your ZenML project with your template: ```bash zenml init --template https://github.com/your-username/your-template.git ``` - - For a specific version, use: - + To specify a version, use: ```bash zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0 ``` -### Example for Setting Up `e2e_batch` Template - -To follow along with the documentation using the `e2e_batch` template, run: - -```bash -mkdir e2e_batch -cd e2e_batch -zenml init --template e2e_batch --template-with-defaults -``` +### Additional Notes +- Keep your template updated with best practices. +- For practical examples, install the `e2e_batch` template: + ```bash + mkdir e2e_batch + cd e2e_batch + zenml init --template e2e_batch --template-with-defaults + ``` -### Note -Keep your template updated with best practices and changes in ML workflows. The [Production Guide](../../../../user-guide/production-guide/README.md) is based on the `E2E Batch` project template, which is recommended for installation. +This guide enables you to quickly set up new ML projects using your own ZenML templates. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md === -### ZenML Project Templates Overview +# ZenML Project Templates Overview -ZenML provides project templates to help users quickly understand the framework and build ML pipelines. These templates cover major use cases and include a simple CLI for ease of use. +**Warning:** This documentation refers to an older version of ZenML. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). -#### Available Project Templates +## Purpose of Project Templates +ZenML project templates provide a quick way to understand the ZenML framework and start building ML pipelines. They include a collection of steps, pipelines, and a simple CLI. + +## Available Project Templates | Project Template [Short name] | Tags | Description | |-------------------------------|------|-------------| -| [Starter template](https://github.com/zenml-io/template-starter) [code: `starter`] | `basic`, `scikit-learn` | Basic ML components for starting with ZenML, including parameterized steps, a model training pipeline, and a simple CLI, using scikit-learn. 
|
-| [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [code: `e2e_batch`] | `etl`, `hp-tuning`, `model-promotion`, `drift-detection`, `batch-prediction`, `scikit-learn` | A comprehensive template with two pipelines covering data loading, preprocessing, hyperparameter tuning, model training, evaluation, promotion, drift detection, and batch inference. |
-| [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [code: `nlp`] | `nlp`, `hp-tuning`, `model-promotion`, `training`, `pytorch`, `gradio`, `huggingface` | A simple NLP pipeline for tokenization, training, hyperparameter tuning, evaluation, and deployment of BERT or GPT-2 models, tested locally with Gradio. |
-
-#### Using a Project Template
-
-To use the templates, install ZenML with the templates extras:
-
-```bash
-pip install zenml[templates]
-```
+| [Starter template](https://github.com/zenml-io/template-starter) [starter] | basic, scikit-learn | Basic ML components for starting with ZenML, including parameterized steps, a model training pipeline, and a simple CLI using scikit-learn. |
+| [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [e2e_batch] | etl, hp-tuning, model-promotion, drift-detection, batch-prediction, scikit-learn | A comprehensive template with pipelines for data loading, preprocessing, hyperparameter tuning, model training, evaluation, promotion, drift detection, and batch inference. |
+| [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [nlp] | nlp, hp-tuning, model-promotion, training, pytorch, gradio, huggingface | An NLP training pipeline for tokenization, training, hyperparameter tuning, evaluation, and deployment of BERT or GPT-2 models, with local testing using Gradio. |

-**Note:** These templates differ from 'Run Templates' used for triggering pipelines, which can be explored [here](https://docs.zenml.io/how-to/trigger-pipelines).
+**Note:** ZenML is seeking collaboration for design partnerships. If you have a project to share, join our [Slack](https://zenml.io/slack/).

-To generate a project from a template, use the `zenml init` command with the `--template` flag:
+## Using a Project Template

-```bash
-zenml init --template <short_name>
-# Example: zenml init --template e2e_batch
-```
+1. **Install ZenML with templates:**
+   ```bash
+   pip install zenml[templates]
+   ```

-For default values, add `--template-with-defaults`:
+2. **Generate a project from a template:**
+   ```bash
+   zenml init --template <short_name>
+   # Example: zenml init --template e2e_batch
+   ```

-```bash
-zenml init --template <short_name> --template-with-defaults
-# Example: zenml init --template e2e_batch --template-with-defaults
-```
+3. **Use default values:**
+   ```bash
+   zenml init --template <short_name> --template-with-defaults
+   # Example: zenml init --template e2e_batch --template-with-defaults
+   ```

-ZenML invites collaboration for new project templates. Interested users can join their [Slack](https://zenml.io/slack/) for discussions.
+**Warning:** These templates differ from 'Run Templates' used for triggering pipelines. More information on Run Templates can be found [here](https://docs.zenml.io/how-to/trigger-pipelines).
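+
+For the `copier.yml` step in the template-creation guide above, a minimal file could look like this (question names are illustrative; see the Copier docs for the full questionnaire syntax):
+```yaml
+# copier.yml — template questions and defaults
+project_name:
+    type: str
+    help: Name of the generated ML project
+    default: my_zenml_project
+
+use_gpu:
+    type: bool
+    help: Whether training steps should request a GPU
+    default: false
+```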
================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/set-up-repository.md === -### Recommended Repository Structure and Best Practices for ZenML +# ZenML Repository Structure and Best Practices -#### Project Structure -A recommended structure for ZenML projects is as follows: +## Recommended Project Structure +The following is a suggested structure for a ZenML project: ```markdown . @@ -10170,15 +10262,10 @@ A recommended structure for ZenML projects is as follows: ├── steps │ ├── loader_step │ │ ├── loader_step.py -│ │ └── requirements.txt (optional) │ └── training_step -│ └── ... ├── pipelines │ ├── training_pipeline │ │ ├── training_pipeline.py -│ │ └── requirements.txt (optional) -│ └── deployment_pipeline -│ └── ... ├── notebooks │ └── *.ipynb ├── requirements.txt @@ -10186,11 +10273,14 @@ A recommended structure for ZenML projects is as follows: └── run.py ``` -- **Steps**: Store each step in separate Python files to manage utils and dependencies easily. -- **Pipelines**: Keep pipelines in separate files. Avoid naming pipelines or instances "pipeline" to prevent conflicts with the ZenML decorator. +### Key Points: +- **Project Templates**: All ZenML project templates follow this structure. +- **Steps and Pipelines**: Organize steps and pipelines in separate folders; simpler projects can keep steps at the top level. +- **Code Repository**: Registering your repository can enhance version tracking and speed up Docker image builds. -#### Logging -Use the `logging` module to capture logs, which will be recorded in the artifact store: +## Steps +- Store each step in separate Python files to manage utilities and dependencies effectively. +- Use the `logging` module for logging within steps, which will be recorded in the ZenML dashboard. ```python from zenml.logger import get_logger @@ -10202,119 +10292,104 @@ def training_data_loader(): logger.info("My logs") ``` -#### Docker Configuration -- **.dockerignore**: Exclude unnecessary files to optimize Docker image size and build time. -- **Dockerfile**: ZenML uses a default Docker image, but you can customize it with your own `Dockerfile`. +## Pipelines +- Keep pipelines in separate Python files. +- Avoid naming pipelines or instances "pipeline" to prevent conflicts with the imported `pipeline` decorator. +- Unique pipeline names are crucial for maintaining clear run histories. -#### Notebooks -Organize all Jupyter notebooks in a dedicated folder. +## .dockerignore +- Use `.dockerignore` to exclude unnecessary files from Docker images, improving build speed and reducing image size. -#### ZenML Initialization -Run `zenml init` at the project root to define the project scope, which helps with import paths and configuration storage. This is especially important for projects using Jupyter notebooks. +## Dockerfile (Optional) +- ZenML uses the official ZenML Docker image by default. You can customize this with your own `Dockerfile`. -#### run.py -Place the pipeline runner in the repository root to ensure correct import resolution. If no `.zen` file is defined, this file also establishes the implicit source's root. +## Notebooks +- Organize all Jupyter notebooks in a dedicated folder. -### Additional Notes -- Registering your repository can help ZenML track code versions and speed up Docker image builds. -- Ensure all import paths are relative to the source's root. 
+## .zen +- Run `zenml init` at the project root to define the project scope and establish the source's root, which is important for import paths and configurations. + +## run.py +- Place pipeline runners in the project root to ensure correct import resolution. If no `.zen` file exists, this file implicitly defines the source's root. + +This structure and these practices help maintain organization and efficiency in ZenML projects. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md === -### Summary of ZenML Code Repository Integration - -**Overview**: Connecting a Git repository to ZenML allows tracking of code versions and speeds up Docker image builds by avoiding unnecessary rebuilds when source code changes. +### Summary of ZenML Code Repository Documentation -#### Registering a Code Repository -1. **Install Integration**: - ```shell - zenml integration install - ``` +**Overview**: ZenML allows tracking code versions and optimizing Docker builds by connecting to code repositories like GitHub and GitLab. -2. **Register Repository**: - ```shell - zenml code-repository register --type= [--CODE_REPOSITORY_OPTIONS] - ``` +#### Connecting a Git Repository +- A code repository in ZenML is a remote location for your code, facilitating version tracking for pipeline runs and speeding up Docker image builds. +- To register a code repository, install the relevant ZenML integration: + ```shell + zenml integration install + ``` +- Register using the CLI: + ```shell + zenml code-repository register --type= [--CODE_REPOSITORY_OPTIONS] + ``` #### Available Implementations -ZenML supports built-in implementations for GitHub and GitLab, as well as custom repositories. - -##### GitHub -1. **Install Integration**: - ```shell - zenml integration install github - ``` - -2. **Register Repository**: - ```shell - zenml code-repository register --type=github \ - --url= --owner= --repository= \ - --token= - ``` - - - **Parameters**: - - ``: Name of the repository. - - ``: Repository owner. - - ``: Personal Access Token. - - ``: Defaults to `https://github.com` (use for GitHub Enterprise). - - - **Secure Token Storage**: - ```shell - zenml secret create github_secret --pa_token= - zenml code-repository register ... --token={{github_secret.pa_token}} - ``` - -##### GitLab -1. **Install Integration**: - ```shell - zenml integration install gitlab - ``` - -2. **Register Repository**: - ```shell - zenml code-repository register --type=gitlab \ - --url= --group= --project= \ - --token= - ``` - - - **Parameters**: - - ``: Project group. - - ``: Project name. - - ``: Personal Access Token. - - ``: Defaults to `https://gitlab.com`. - - - **Secure Token Storage**: - ```shell - zenml secret create gitlab_secret --pa_token= - zenml code-repository register ... --token={{gitlab_secret.pa_token}} - ``` - -#### Custom Code Repository -To implement a custom repository: -1. **Subclass `BaseCodeRepository`**: - ```python - class BaseCodeRepository(ABC): - @abstractmethod - def login(self) -> None: - pass - - @abstractmethod - def download_files(self, commit: str, directory: str, repo_sub_directory: Optional[str]) -> None: - pass +1. 
**GitHub**:
+   - Install GitHub integration:
+     ```shell
+     zenml integration install github
+     ```
+   - Register a GitHub repository:
+     ```shell
+     zenml code-repository register <NAME> --type=github \
+     --url=<URL> --owner=<OWNER> --repository=<REPOSITORY> \
+     --token=<GITHUB_TOKEN>
+     ```
+   - Use secrets management for the GitHub token:
+     ```shell
+     zenml secret create github_secret --pa_token=<GITHUB_TOKEN>
+     zenml code-repository register ... --token={{github_secret.pa_token}}
+     ```

+2. **GitLab**:
+   - Install GitLab integration:
+     ```shell
+     zenml integration install gitlab
+     ```
+   - Register a GitLab repository:
+     ```shell
+     zenml code-repository register <NAME> --type=gitlab \
+     --url=<URL> --group=<GROUP> --project=<PROJECT> \
+     --token=<GITLAB_TOKEN>
+     ```
+   - Use secrets management for the GitLab token:
+     ```shell
+     zenml secret create gitlab_secret --pa_token=<GITLAB_TOKEN>
+     zenml code-repository register ... --token={{gitlab_secret.pa_token}}
+     ```

+#### Developing a Custom Code Repository
+- For other platforms, subclass `zenml.code_repositories.BaseCodeRepository` and implement required methods:
+  ```python
+  class BaseCodeRepository(ABC):
+      @abstractmethod
+      def login(self) -> None:
+          """Logs into the code repository."""
+
+      @abstractmethod
+      def download_files(self, commit: str, directory: str, repo_sub_directory: Optional[str]) -> None:
+          """Downloads files from the code repository."""
+
+      @abstractmethod
+      def get_local_context(self, path: str) -> Optional["LocalRepositoryContext"]:
+          """Gets a local repository context from a path."""
+  ```
+- Register the custom repository:
+  ```shell
+  zenml code-repository register <NAME> --type=custom --source=my_module.MyRepositoryClass [--CODE_REPOSITORY_OPTIONS]
+  ```

+This documentation provides essential steps and commands for integrating GitHub and GitLab with ZenML, as well as guidance for creating custom code repositories.

==================================================

=== File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/README.md ===

# Setting Up a Well-Architected ZenML Project

-## Overview
-This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration, which are essential for successful machine learning operations (MLOps).
+This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration.
+
+## Importance of a Well-Architected Project
+A well-architected ZenML project is vital for effective machine learning operations (MLOps), providing a foundation for efficient model development, deployment, and maintenance.

## Key Components

### Repository Structure
- Organize folders for pipelines, steps, and configurations.
- Refer to the [Set up repository guide](./set-up-repository.md) for details.

### Version Control and Collaboration
-- Integrate with Git for efficient code management and collaboration.
-- Benefits include faster pipeline builds and easy change tracking.
+- Integrate with Git for tracking changes and team collaboration.
+- Enables faster pipeline builds by reusing images and code from the repository.
- Learn more in the [Set up a repository guide](./set-up-repository.md). ### Stacks, Pipelines, Models, and Artifacts -- **Stacks**: Infrastructure and tool configurations. -- **Models**: Represent ML models and metadata. -- **Pipelines**: Encapsulate ML workflows. -- **Artifacts**: Track data and model outputs. -- See the [Organizing Stacks, Pipelines, Models, and Artifacts guide](../collaborate-with-team/stacks-pipelines-models.md) for organization strategies. +- **Stacks:** Define infrastructure and tool configurations. +- **Models:** Represent ML models and metadata. +- **Pipelines:** Encapsulate ML workflows. +- **Artifacts:** Track data and model outputs. +- See [Organizing Stacks, Pipelines, Models, and Artifacts guide](../collaborate-with-team/stacks-pipelines-models.md). ### Access Management and Roles - Define roles (e.g., data scientists, MLOps engineers). -- Set up service connectors and manage authorizations. -- Use [Teams in ZenML Pro](../../../getting-started/zenml-pro/teams.md) for role assignments. +- Set up [service connectors](../../infrastructure-deployment/auth-management/README.md) for authorization. +- Use [Teams in ZenML Pro](../../../getting-started/zenml-pro/teams.md) for role assignment. - Explore strategies in the [Access Management and Roles guide](../collaborate-with-team/access-management.md). ### Shared Components and Libraries -- Promote code reuse with shared components like custom flavors and steps. -- Use shared private wheels for internal distribution. -- Learn about sharing code in the [Shared Libraries and Logic for Teams guide](../collaborate-with-team/shared-components-for-teams.md). +- Promote code reuse with custom flavors, steps, and shared libraries. +- Handle authentication for specific libraries. +- More details in the [Shared Libraries and Logic for Teams guide](../collaborate-with-team/shared-components-for-teams.md). ### Project Templates -- Utilize pre-made or custom templates for consistency. -- Refer to the [Project Templates guide](../collaborate-with-team/project-templates/README.md) for usage. +- Use pre-made or custom templates for consistency in project setup. +- Learn about templates in the [Project Templates guide](../collaborate-with-team/project-templates/README.md). ### Migration and Maintenance - Strategies for migrating legacy code and upgrading ZenML servers. - Best practices are detailed in the [Migration and Maintenance guide](../../advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md#upgrading-your-code). ## Getting Started -Explore the guides in this section to build your ZenML project. Regularly review and refine your project structure to adapt to your team's needs. Following these guidelines will help create a robust, scalable, and collaborative MLOps environment. +Explore the guides in this section to begin building your ZenML project. Regularly review and refine your project structure to adapt to your team's needs, ensuring a robust MLOps environment. ================================================== === File: docs/book/how-to/model-management-metrics/README.md === -### Model Management and Metrics in ZenML +# Model Management and Metrics in ZenML -This section details the management of models and tracking of metrics within ZenML. +This section outlines the processes for managing models and tracking metrics within ZenML. -#### Key Components: +## Key Components: 1. **Model Management**: - - ZenML facilitates versioning, deployment, and monitoring of machine learning models. 
- - Users can register models, track their lineage, and manage different versions effectively. + - ZenML provides tools for versioning, storing, and deploying machine learning models. + - Models can be registered and organized in a centralized repository. 2. **Metrics Tracking**: - - Metrics can be logged during training and evaluation phases. - - ZenML supports integration with various metrics tracking tools (e.g., MLflow, TensorBoard). - - Users can define custom metrics and visualize them for better insights. - -3. **Implementation**: - - Use decorators and context managers to log metrics automatically. - - Example code snippet for logging metrics: - - ```python - from zenml.steps import step + - Metrics can be logged and monitored throughout the model lifecycle. + - ZenML supports integration with various tracking tools for visualization and analysis. - @step - def train_model(data): - # Training logic here - metrics = {"accuracy": 0.95} # Example metric - return metrics - ``` +3. **Version Control**: + - Each model version can be tagged and retrieved, ensuring reproducibility. + - Users can compare different model versions based on performance metrics. -4. **Model Registry**: - - Models can be registered in a centralized registry for easy access and deployment. - - Supports tagging and categorization for better organization. +4. **Deployment**: + - Models can be deployed to various environments (e.g., cloud, on-premises). + - Deployment configurations can be managed through ZenML's interface. -5. **Deployment**: - - Models can be deployed to various environments (e.g., cloud, on-premise). - - ZenML provides tools to facilitate continuous deployment and integration. +5. **Integration**: + - ZenML integrates with popular ML frameworks and tools for seamless workflow management. + - Users can leverage existing libraries for enhanced functionality. -By leveraging these features, users can ensure effective model management and comprehensive tracking of performance metrics throughout the machine learning lifecycle. +By utilizing these features, users can effectively manage their machine learning models and ensure consistent tracking of performance metrics throughout the development lifecycle. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-an-artifact.md === -### Summary: Attaching Metadata to Artifacts in ZenML +# Summary: Attaching Metadata to Artifacts in ZenML -In ZenML, metadata enhances artifacts by providing context and details such as size, structure, and performance metrics. This metadata is viewable in the ZenML dashboard, aiding in artifact inspection and comparison across pipeline runs. +In ZenML, metadata enhances artifacts by providing context such as size, structure, or performance metrics, which can be viewed in the ZenML dashboard for easier inspection and comparison. -#### Logging Metadata for Artifacts -Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. The metadata can include any JSON-serializable value, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. +## Logging Metadata for Artifacts +Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. 
Metadata can be any JSON-serializable value, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. -**Example of Logging Metadata:** +### Example: ```python import pandas as pd from zenml import step, log_metadata @@ -10440,12 +10506,12 @@ def process_data_step(dataframe: pd.DataFrame) -> pd.DataFrame: return processed_dataframe ``` -#### Selecting the Artifact for Metadata Logging +## Selecting the Artifact for Metadata Logging 1. **Using `infer_artifact`**: Automatically selects the output artifact of the step. -2. **Name and Version**: If both are provided, ZenML attaches metadata to the specified artifact version. -3. **Artifact Version ID**: Directly attaches metadata to the specific artifact version. +2. **Name and Version**: Use both to identify a specific artifact version. +3. **Artifact Version ID**: Directly fetches the specified artifact version. -#### Fetching Logged Metadata +## Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: ```python from zenml.client import Client @@ -10454,12 +10520,12 @@ client = Client() artifact = client.get_artifact_version("my_artifact", "my_version") print(artifact.run_metadata["metadata_key"]) ``` -*Note: Fetching metadata by key returns the latest entry.* +*Note: The returned value reflects the latest entry for the specified key.* -#### Grouping Metadata in the Dashboard -You can group metadata into cards in the ZenML dashboard by passing a dictionary of dictionaries to the `metadata` parameter. This organizes metadata into logical sections. +## Grouping Metadata in the Dashboard +You can group metadata into cards by passing a dictionary of dictionaries in the `metadata` parameter. This organizes metadata into logical sections. -**Example of Grouping Metadata:** +### Example: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize @@ -10480,19 +10546,18 @@ log_metadata( artifact_version="version", ) ``` -In the dashboard, `model_metrics` and `data_details` will be displayed as separate cards. +In the ZenML dashboard, `model_metrics` and `data_details` will appear as separate cards. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-run.md === -### Attach Metadata to a Run in ZenML +### Summary: Attaching Metadata to a Run in ZenML -In ZenML, you can log metadata to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. +In ZenML, metadata can be logged to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Run - -When logging metadata from within a pipeline step, the metadata key follows the `step_name::metadata_key` pattern, allowing reuse of keys across different steps during execution. +When logging metadata from within a pipeline step, the metadata key follows the `step_name::metadata_key` format, allowing consistent usage across different steps. **Example: Logging Metadata in a Step** ```python @@ -10510,7 +10575,6 @@ def train_model(dataset: pd.DataFrame) -> Annotated[ classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... 
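+    # Because this call happens inside a step, ZenML stores the key below on
+    # the run as "train_model::run_metrics" (the step_name::metadata_key pattern).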
- # Log metadata at the run level log_metadata({ "run_metrics": { "accuracy": accuracy, @@ -10521,9 +10585,8 @@ def train_model(dataset: pd.DataFrame) -> Annotated[ return classifier ``` -#### Manually Logging Metadata to a Pipeline Run - -You can also log metadata to a specific pipeline run using its run ID, useful for post-execution metrics. +#### Manually Logging Metadata +Metadata can also be attached to a specific pipeline run using the run ID, which is useful for logging post-execution metrics. **Example: Manual Metadata Logging** ```python @@ -10536,8 +10599,7 @@ log_metadata( ``` #### Fetching Logged Metadata - -To retrieve logged metadata, use the ZenML Client: +To retrieve logged metadata, use the ZenML Client. The latest entry for a specific key will be returned. **Example: Fetching Metadata** ```python @@ -10549,22 +10611,25 @@ run = client.get_pipeline_run("run_id_name_or_prefix") print(run.run_metadata["metadata_key"]) ``` -**Note:** When fetching metadata by key, the returned value reflects the latest entry. +### Important Notes +- The `log_metadata` function can be called during or after the execution of a pipeline. +- The returned value when fetching metadata reflects the latest entry for the specified key. + +For the latest ZenML documentation, please refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-model.md === -### Summary: Attaching Metadata to a Model in ZenML +# Attaching Metadata to a Model in ZenML -ZenML enables logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, or customer-specific details, enhancing model management and performance interpretation across versions. +ZenML allows logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, and customer-specific details, aiding in model management and performance interpretation. -#### Logging Metadata - -To log metadata for a model, use the `log_metadata` function, which allows attaching key-value pairs, including metrics and JSON-serializable values (e.g., `Uri`, `Path`, `StorageSize`). +## Logging Metadata -**Example:** +To log metadata, use the `log_metadata` function, which attaches key-value pairs to a model. This can include metrics and JSON-serializable values, such as custom ZenML types (`Uri`, `Path`, `StorageSize`). +### Example Code ```python from typing import Annotated import pandas as pd @@ -10574,7 +10639,6 @@ from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: - """Train a model and log metadata.""" classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... @@ -10590,21 +10654,20 @@ def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactCon ) return classifier ``` +In this example, metadata is associated with the model rather than the classifier artifact, useful for summarizing various pipeline steps. -In this example, metadata is associated with the model, summarizing various pipeline steps and artifacts. - -#### Selecting Models with `log_metadata` - -ZenML offers flexible options for attaching metadata to model versions: +## Selecting Models with `log_metadata` -1. 
**Using `infer_model`**: Automatically infers the model from the step context. -2. **Model Name and Version**: Attach metadata to a specific model version by providing both. -3. **Model Version ID**: Directly attach metadata using a specific model version ID. +ZenML provides options for attaching metadata to model versions: +1. **Using `infer_model`**: Infers the model from the step context. +2. **Model Name and Version**: Attaches metadata to a specified model version. +3. **Model Version ID**: Directly attaches metadata to a specific model version. -#### Fetching Logged Metadata +## Fetching Logged Metadata -Once metadata is logged, it can be retrieved using the ZenML Client: +To retrieve metadata, use the ZenML Client: +### Example Code ```python from zenml.client import Client @@ -10613,8 +10676,7 @@ model = client.get_model_version("my_model", "my_version") print(model.run_metadata["metadata_key"]) ``` - -**Note**: When fetching metadata by key, the returned value reflects the latest entry. +When fetching metadata by key, the returned value reflects the latest entry. ================================================== @@ -10622,7 +10684,7 @@ print(model.run_metadata["metadata_key"]) ### Grouping Metadata in the Dashboard -To organize metadata in the ZenML dashboard, you can pass a dictionary of dictionaries to the `metadata` parameter in the `log_metadata` function. This allows for logical grouping of metadata into separate cards, enhancing visualization and understanding. +To group key-value pairs in the ZenML dashboard, pass a dictionary of dictionaries to the `metadata` parameter when logging metadata. This organizes metadata into cards for better visualization. #### Example Code: ```python @@ -10646,40 +10708,43 @@ log_metadata( ) ``` -In the ZenML dashboard, the keys "model_metrics" and "data_details" will be displayed as separate cards, each containing their respective key-value pairs. +In the ZenML dashboard, "model_metrics" and "data_details" will appear as separate cards, each displaying their respective key-value pairs. + +For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/logging-metadata.md === -### Summary of ZenML Metadata Tracking +# Tracking Your Metadata in ZenML -ZenML supports special metadata types to capture specific information, including `Uri`, `Path`, `DType`, and `StorageSize`. +ZenML provides special metadata types to capture specific information, including `Uri`, `Path`, `DType`, and `StorageSize`. Below is an example of how to use these types: -**Example Usage:** ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path -log_metadata({ - "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), - "preprocessing_script": Path("/scripts/preprocess.py"), - "column_types": { - "age": DType("int"), - "income": DType("float"), - "score": DType("int") +log_metadata( + metadata={ + "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), + "preprocessing_script": Path("/scripts/preprocess.py"), + "column_types": { + "age": DType("int"), + "income": DType("float"), + "score": DType("int") + }, + "processed_data_size": StorageSize(2500000) }, - "processed_data_size": StorageSize(2500000) -}) +) ``` -**Key Points:** -- **Uri**: Represents the source URI of the dataset. -- **Path**: Specifies the filesystem path to a preprocessing script. 
+### Key Points: +- **Uri**: Represents a dataset source URI. +- **Path**: Specifies the filesystem path to a script. - **DType**: Describes the data types of specific columns. - **StorageSize**: Indicates the size of processed data in bytes. -These types standardize metadata format, ensuring consistent and interpretable logging. +These types standardize metadata format, ensuring consistent and interpretable logging. For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== @@ -10687,11 +10752,9 @@ These types standardize metadata format, ensuring consistent and interpretable l ### Fetching Metadata During Pipeline Composition -#### Pipeline Configuration with `PipelineContext` - To access pipeline configuration during composition, use the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. -**Example Code:** +#### Example Code ```python from zenml import get_pipeline_context, pipeline @@ -10706,28 +10769,34 @@ from zenml import get_pipeline_context, pipeline def my_pipeline(): context = get_pipeline_context() after = [] - for i, model_config in enumerate(context.extra["complex_parameter"]): - step_name = f"hp_tuning_search_{i}" + search_steps_prefix = "hp_tuning_search_" + + for i, model_search_configuration in enumerate(context.extra["complex_parameter"]): + step_name = f"{search_steps_prefix}{i}" cross_validation( - model_package=model_config[0], - model_class=model_config[1], + model_package=model_search_configuration[0], + model_class=model_search_configuration[1], id=step_name ) after.append(step_name) - select_best_model(search_steps_prefix="hp_tuning_search_", after=after) + + select_best_model(search_steps_prefix=search_steps_prefix, after=after) ``` -For more details on `PipelineContext` attributes and methods, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). +#### Additional Information +For more details on the attributes and methods available in `PipelineContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md === -### Accessing Meta Information in Real-Time +### Summary: Accessing Meta Information in ZenML Pipelines -#### Fetch Metadata Within Steps +This documentation outlines how to access metadata in real-time during the execution of a ZenML pipeline using the `StepContext`. 
-To access information about the current pipeline or step, use the `zenml.get_step_context()` function to obtain the `StepContext`: +#### Fetching Metadata with `StepContext` + +To retrieve information about the currently running pipeline or step, use the `zenml.get_step_context()` function: ```python from zenml import step, get_step_context @@ -10740,7 +10809,7 @@ def my_step(): step_name = step_context.step_run.name ``` -You can also retrieve the output storage URI and the associated Materializer class for saving outputs: +Additionally, you can access the output storage URI and the associated Materializer class for saving outputs: ```python from zenml import step, get_step_context @@ -10752,7 +10821,7 @@ def my_step(): materializer = step_context.get_output_materializer() # Output Materializer ``` -For more details on `StepContext` attributes and methods, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). +For more details on the attributes and methods available in `StepContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). ================================================== @@ -10760,10 +10829,10 @@ For more details on `StepContext` attributes and methods, refer to the [SDK Docs ### Summary: Attaching Metadata to a Step in ZenML -In ZenML, you can log metadata for a specific step using the `log_metadata` function, which accepts a dictionary of key-value pairs. This metadata can include any JSON-serializable values, such as custom classes (`Uri`, `Path`, `DType`, `StorageSize`). +In ZenML, you can log metadata for a specific step using the `log_metadata` function, which allows you to attach a dictionary of key-value pairs as metadata. This metadata can include any JSON-serializable values, including custom classes like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Step -When `log_metadata` is called within a step, it attaches the metadata to the currently executing step and its pipeline run. This is useful for logging metrics available during execution. +When `log_metadata` is called within a step, it automatically attaches the metadata to the currently executing step and its associated pipeline run. This is useful for logging metrics available during execution. **Example:** ```python @@ -10777,22 +10846,22 @@ from zenml import step, log_metadata, ArtifactConfig def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... - + log_metadata(metadata={"evaluation_metrics": {"accuracy": accuracy, "precision": precision, "recall": recall}}) return classifier ``` -**Note:** If a pipeline execution is cached, the cached step run will copy the original metadata, but manually generated metadata post-execution will not be included. - #### Manually Logging Metadata After Execution -You can log metadata after a step's execution using identifiers for the pipeline, step, and run. +You can log metadata post-execution using identifiers for the pipeline, step, and run. 
**Example:** ```python from zenml import log_metadata log_metadata(metadata={"additional_info": {"a_number": 3}}, step_name="step_name", run_id_name_or_prefix="run_id_name_or_prefix") -# or + +# or + log_metadata(metadata={"additional_info": {"a_number": 3}}, step_id="step_id") ``` @@ -10805,10 +10874,15 @@ from zenml.client import Client client = Client() step = client.get_pipeline_run("pipeline_id").steps["step_name"] + print(step.run_metadata["metadata_key"]) ``` -**Note:** Fetching metadata by key returns the latest entry. +**Note:** The fetched value will always reflect the latest entry for the specified key. + +### Important Notes +- Cached step executions will copy the original step's metadata. +- Manually generated metadata after the original step execution will not be included in cached runs. ================================================== @@ -10816,7 +10890,7 @@ print(step.run_metadata["metadata_key"]) # Tracking and Comparing Metrics and Metadata in ZenML -ZenML provides a unified method to log and manage metrics and metadata using the `log_metadata` function. This function allows logging across various entities such as models, artifacts, steps, and runs, with options for automatic logging for related entities. +ZenML provides a unified `log_metadata` function to log and manage metrics and metadata across models, artifacts, steps, and runs. ## Logging Metadata @@ -10831,10 +10905,10 @@ def my_step() -> ...: log_metadata(metadata={"accuracy": 0.91}) ``` -This logs the `accuracy` for the step, its pipeline run, and the model version if provided. +This logs the `accuracy` for the step, its pipeline run, and optionally its model version. ### Real-World Example -Here’s a more detailed example in a machine learning pipeline: +Here’s a comprehensive example of logging various metadata in a machine learning pipeline: ```python from zenml import step, pipeline, log_metadata @@ -10863,27 +10937,24 @@ def telemetry_pipeline(): analyze_flight_telemetry(efficiency) ``` -This data can be visualized in the ZenML Pro dashboard, specifically using the Experiment Comparison tool, which is currently in Alpha Preview. +This logged data can be visualized in the ZenML Pro dashboard. -## Visualizing and Comparing Metadata (Pro) +### Visualizing and Comparing Metadata (Pro) +Once metadata is logged, you can use the Experiment Comparison tool in ZenML Pro to analyze metrics across runs. Key features include: -Once metadata is logged, you can analyze and compare metrics across different runs using the Experiment Comparison tool in the ZenML Pro dashboard. +1. **Table View**: Compare metadata with change tracking. +2. **Parallel Coordinates Plot**: Visualize relationships between metrics. -### Comparison Views -The tool offers: -1. **Table View**: Compare metadata across runs with automatic change tracking. -2. **Parallel Coordinates Plot**: Visualize relationships between different metrics. - -You can compare up to 20 pipeline runs simultaneously, supporting any numerical metadata (`float` or `int`). +You can compare up to 20 pipeline runs and any numerical metadata (`float` or `int`). ### Additional Use-Cases -The `log_metadata` function supports various use-cases by specifying the target entity (e.g., model, artifact, step, or run). More details can be found in the following pages: +The `log_metadata` function supports various entities (model, artifact, step, run) with flexible parameters. 
For more details, refer to: - Log metadata to a step - Log metadata to a run - Log metadata to an artifact - Log metadata to a model -**Note**: Older methods for logging metadata (e.g., `log_model_metadata`, `log_artifact_metadata`, `log_step_metadata`) are deprecated. Use `log_metadata` for future implementations. +**Note**: Older methods like `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata` are deprecated. Use `log_metadata` for future implementations. ================================================== @@ -10891,26 +10962,27 @@ The `log_metadata` function supports various use-cases by specifying the target # Model Promotion in ZenML -## Overview -ZenML allows the promotion of model versions through various stages in their lifecycle, providing metadata to identify the state of each version. The stages include: -- **staging**: Prepared for production. +## Stages and Promotion +Model versions in ZenML progress through various lifecycle stages, which serve as metadata to indicate their state. The stages include: +- **staging**: Ready for production. - **production**: Actively running in production. -- **latest**: Represents the most recent version (not a promotable stage). -- **archived**: No longer relevant, indicating a model has moved out of other stages. +- **latest**: Represents the most recent version (not promotable). +- **archived**: No longer relevant, typically after moving from another stage. -## Promotion Methods +### Promotion Methods +Models can be promoted using three methods: -### CLI Promotion -Use the following command to promote a model version via the CLI: +#### 1. CLI +Use the following command to promote a model version: ```bash zenml model version update iris_logistic_regression --stage=... ``` -### Cloud Dashboard Promotion -Promotion through the ZenML Pro dashboard will be available soon. +#### 2. Cloud Dashboard +Promotion via the ZenML Pro dashboard is forthcoming. -### Python SDK Promotion -The most common method for promoting models is through the Python SDK: +#### 3. Python SDK +The most common method for promoting models: ```python from zenml import Model from zenml.enums import ModelStages @@ -10933,13 +11005,14 @@ def promote_to_staging(): model = get_step_context().model model.set_stage(ModelStages.STAGING, force=True) -@pipeline +@pipeline(...) def train_and_promote_model(): + ... promote_to_staging(after=["train_and_evaluate"]) ``` ## Fetching Model Versions by Stage -To load the appropriate model version by stage, specify the version: +To load a model version by its stage: ```python from zenml import Model, step, pipeline @@ -10951,20 +11024,21 @@ def svc_trainer(...) -> ...: @pipeline(model=model) def training_pipeline(...): - # training logic here + # training happens here ``` -This configuration ensures that the specified model version is used throughout the pipeline. +This allows for precise control over which model version is used in training and evaluation steps. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/linking-model-binaries-data-to-models.md === -# Linking Model Binaries/Data in ZenML +# Linking Model Binaries/Data to Models in ZenML -ZenML allows linking model artifacts generated during pipeline runs to models for lineage tracking and transparency in training, evaluation, and inference. +ZenML allows linking artifacts generated during pipeline runs to models for lineage tracking and transparency in training, evaluation, and inference processes. 
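Once links are in place, created by any of the approaches below, the lineage can also be traversed in code. A minimal sketch, assuming a model version that already has an artifact linked under the name `trained_model` (the model name and version are illustrative):

```python
from zenml import Model

# Reference an existing model version; name and version are illustrative.
model_version = Model(name="MyModel", version="0.2.42")

# Fetch the linked artifact by the name it was saved under, then load
# the underlying object from the artifact store.
trained_model = model_version.get_model_artifact("trained_model").load()
```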
## Configuring the Model at Pipeline Level
+
You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorators:

```python
@@ -10980,7 +11054,8 @@ def my_pipeline():
This links all artifacts from the pipeline run to the specified model.

## Saving Intermediate Artifacts
-To save progress during training (e.g., epoch-based training), use the `save_artifact` utility. If the step has the Model context configured, it will automatically link to the model.
+
+To save intermediate results, use the `save_artifact` utility function. If the step has a Model context configured, it will automatically link to it.

```python
from zenml import step, Model
@@ -10998,7 +11073,8 @@ def trainer(trn_dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactCon
```

## Explicitly Linking Artifacts
-To link an artifact to a model outside of a step, use the `link_artifact_to_model` function. You need the artifact and model configuration.
+
+To link an artifact to a model outside of a step context, use the `link_artifact_to_model` function.

```python
from zenml import step, Model, link_artifact_to_model, save_artifact
@@ -11013,24 +11089,24 @@ existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_ar
link_artifact_to_model(artifact_version_id=existing_artifact.id, model=Model(name="MyModel", version="0.2.42"))
```

-This documentation provides methods for linking model artifacts in ZenML, ensuring efficient tracking and management of model versions and their associated artifacts.
+This allows for flexibility in linking artifacts to models, whether within steps or externally.

==================================================

=== File: docs/book/how-to/model-management-metrics/model-control-plane/delete-a-model.md ===

-### Deleting Models in ZenML
+### Summary: Deleting Models in ZenML

-**Overview**: Deleting a model or a specific version removes all links to artifacts, pipeline runs, and associated metadata.
+This documentation outlines the process for deleting models and their versions in ZenML, which involves removing all links to artifacts, pipeline runs, and associated metadata.

#### Deleting All Versions of a Model

-- **CLI Command**:
+- **CLI Command:**
  ```shell
  zenml model delete <MODEL_NAME>
  ```

-- **Python SDK**:
+- **Python SDK:**
  ```python
  from zenml.client import Client
  Client().delete_model(<MODEL_NAME>)
@@ -11038,18 +11114,18 @@ This documentation provides methods for linking model artifacts in ZenML, ensuri

#### Deleting a Specific Version of a Model

-- **CLI Command**:
+- **CLI Command:**
  ```shell
  zenml model version delete <MODEL_VERSION_NAME_OR_ID>
  ```

-- **Python SDK**:
+- **Python SDK:**
  ```python
  from zenml.client import Client
  Client().delete_model_version(<MODEL_VERSION_ID>)
-  ```
+  ```

-This documentation provides the necessary commands to delete models and their versions using both CLI and Python SDK.
+For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io).

==================================================

=== File: docs/book/how-to/model-management-metrics/model-control-plane/model-versions.md ===

# Model Versions Overview

-Model versions in ZenML allow tracking of different iterations during the machine learning training process, with dashboard and API support for the ML lifecycle. You can associate model versions with stages (e.g., production, staging) and link them to non-technical artifacts like datasets or business data.
Model versions are created automatically during training, but can also be explicitly named via the `version` argument in the `Model` object. +Model versions in ZenML allow tracking of different iterations of a machine learning model, facilitating the full ML lifecycle with dashboard and API functionalities. Users can associate model versions with various stages (e.g., production, staging) and link them to non-technical artifacts like datasets. Model versions are created automatically during training, but can also be explicitly named via the `version` argument in the `Model` object. ## Explicitly Naming Model Versions @@ -11077,11 +11153,11 @@ def training_pipeline(...): # training happens here ``` -If the model version exists, it is automatically associated with the pipeline. +If a model version exists, it is automatically associated with the pipeline. ## Templated Naming for Model Versions -For semantic versioning, use templated names in the `version` and/or `name` arguments: +For continuous projects, use templated naming for unique and semantically readable model versions: ```python from zenml import Model, step, pipeline @@ -11097,17 +11173,17 @@ def training_pipeline(...): # training happens here ``` -This will generate unique, readable names for each run, like `experiment_with_phi_3_2024_08_30_12_42_53`. Substitutions can be set at different levels (pipeline or step). +This will generate model versions like `experiment_with_phi_3_2024_08_30_12_42_53`. Standard substitutions include `{date}` and `{time}`. ## Fetching Model Versions by Stage -Assign stages to model versions (e.g., `production`, `staging`) for semantic retrieval: +Assign stages to model versions (e.g., `production`, `staging`) for easier retrieval: ```shell zenml model version update MODEL_NAME --stage=STAGE ``` -You can then fetch the model version using its stage: +To fetch a model version by stage: ```python from zenml import Model, step, pipeline @@ -11137,78 +11213,94 @@ def svc_trainer(...) -> ...: ... ``` -This creates a new version, incrementing the sequence. For example: +ZenML tracks the version sequence: ```python +from zenml import Model + earlier_version = Model(name="my_model", version="really_good_version").number # == 5 updated_version = Model(name="my_model", version="even_better_version").number # == 6 ``` -This ensures that each new model version is tracked correctly in the iteration sequence. +This ensures proper versioning and iteration tracking throughout the model's lifecycle. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-artifacts-via-a-model.md === -### Summary: Structuring an MLOps Project +### Structuring an MLOps Project with ZenML -#### Overview -An MLOps project typically consists of multiple pipelines that manage the flow of data and models. Key pipeline types include: -- **Feature Engineering Pipeline**: Prepares raw data. -- **Training Pipeline**: Trains models using prepared data. -- **Inference Pipeline**: Runs predictions using trained models. -- **Deployment Pipeline**: Deploys models to production. +This documentation outlines how to structure an MLOps project using ZenML, focusing on the integration of artifacts, models, and pipelines. -The structure of these pipelines can vary based on project requirements, and they often need to share artifacts, models, and metadata. +#### Key Components: +1. 
**Pipelines**: MLOps projects typically consist of multiple pipelines, including: + - **Feature Engineering Pipeline**: Prepares raw data. + - **Training Pipeline**: Trains models using processed data. + - **Inference Pipeline**: Runs predictions using trained models. + - **Deployment Pipeline**: Deploys models to production. -#### Artifact Exchange Patterns +The structure of these pipelines can vary based on project requirements, and they often need to share information such as artifacts and metadata. -1. **Artifact Exchange via Client** - - Use the ZenML Client to exchange artifacts between pipelines. - - Example: - ```python - from zenml import pipeline - from zenml.client import Client +#### Common Patterns for Artifact Exchange: - @pipeline - def feature_engineering_pipeline(): - train_data, test_data = prepare_data() +**Pattern 1: Artifact Exchange via `Client`** +- Use the ZenML Client to exchange artifacts between pipelines. For example, a feature engineering pipeline produces datasets that the training pipeline consumes. - @pipeline - def training_pipeline(): - client = Client() - train_data = client.get_artifact_version(name="iris_training_dataset") - test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") - sklearn_classifier = model_trainer(train_data) - model_evaluator(model, sklearn_classifier) - ``` - - Note: `train_data` and `test_data` are references, not materialized in memory. - -2. **Artifact Exchange via Model** - - Use ZenML Model as a reference point for artifacts. - - Example: - ```python - from zenml import step, get_step_context +```python +from zenml import pipeline +from zenml.client import Client - @step(enable_cache=False) - def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: - model = get_step_context().model.get_model_artifact("trained_model") - predictions = pd.Series(model.predict(data)) - return predictions - ``` - - Alternatively, resolve artifacts at the pipeline level: - ```python - from zenml import get_pipeline_context, pipeline, Model - from zenml.enums import ModelStages +@pipeline +def feature_engineering_pipeline(): + train_data, test_data = prepare_data() - @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) - def do_predictions(): - model = get_pipeline_context().model.get_model_artifact("trained_model") - predict(model=model, data=load_data()) - ``` +@pipeline +def training_pipeline(): + client = Client() + train_data = client.get_artifact_version(name="iris_training_dataset") + test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") + sklearn_classifier = model_trainer(train_data) + model_evaluator(model, sklearn_classifier) +``` + +*Note*: Artifacts are referenced, not materialized in memory during pipeline execution. + +**Pattern 2: Artifact Exchange via `Model`** +- Use ZenML Models as references for artifact exchange. For instance, a training pipeline (`train_and_promote`) generates models, while an inference pipeline (`do_predictions`) uses the latest promoted model without needing to know artifact IDs. + +```python +from zenml import step, get_step_context + +@step(enable_cache=False) +def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: + model = get_step_context().model.get_model_artifact("trained_model") + predictions = pd.Series(model.predict(data)) + return predictions +``` + +*Alternative Approach*: +Resolve the model artifact at the pipeline level to avoid caching issues. 
+ +```python +from zenml import get_pipeline_context, pipeline, Model +from zenml.enums import ModelStages + +@step +def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: + return pd.Series(model.predict(data)) + +@pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) +def do_predictions(): + model = get_pipeline_context().model + inference_data = load_data() + predict(model=model.get_model_artifact("trained_model"), data=inference_data) + +if __name__ == "__main__": + do_predictions() +``` #### Conclusion -Choosing between artifact exchange methods depends on project needs and personal preference. Both methods effectively facilitate the sharing of models and artifacts across pipelines. +Both artifact exchange patterns are valid; the choice depends on project needs and preferences. For more details on setting up a ZenML project, refer to the [best practices](https://docs.zenml.io). ================================================== @@ -11219,7 +11311,7 @@ Choosing between artifact exchange methods depends on project needs and personal ## Loading a Model in Code ### 1. Load the Active Model in a Pipeline -To load the active model in a ZenML pipeline, you can access model metadata and associated artifacts as follows: +You can load the active model within a pipeline to access model metadata and associated artifacts. ```python from zenml import step, pipeline, get_step_context, Model @@ -11237,7 +11329,7 @@ def my_step(): ``` ### 2. Load Any Model via the Client -You can also load any model using the `Client`: +You can also load a model using the `Client` to retrieve specific model versions. ```python from zenml import step @@ -11255,7 +11347,7 @@ def model_evaluator_step(): staging_zenml_model = None ``` -This documentation provides methods to load models in ZenML, either through active pipeline context or using the Client API. +This documentation outlines two methods for loading models in ZenML: using the active model in a pipeline and utilizing the `Client` to access any model version. ================================================== @@ -11263,7 +11355,7 @@ This documentation provides methods to load models in ZenML, either through acti # Model Registration in ZenML -Models can be registered in ZenML through various methods: CLI, Python SDK, or implicitly during a pipeline run. ZenML Pro users can also utilize a dashboard interface for model registration. +Models can be registered in ZenML through various methods: explicit registration via CLI, Python SDK, or implicit registration during a pipeline run. ZenML Pro users can also utilize a dashboard interface for model registration. ## Explicit CLI Registration To register a model using the CLI, use the following command: @@ -11271,16 +11363,16 @@ To register a model using the CLI, use the following command: ```bash zenml model register iris_logistic_regression --license=... --description=... ``` -For additional options, run `zenml model register --help`. You can also add tags using the `--tag` option. + +For additional options, run `zenml model register --help`. You can also associate tags using the `--tag` option. ## Explicit Dashboard Registration -ZenML Pro users can register models directly from the cloud dashboard. +ZenML Pro users can register models directly from the cloud dashboard interface. 
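To make the `--tag` option from the CLI registration above concrete, a tagged registration might look like the following sketch (the license text, description, and tag values are illustrative):

```bash
zenml model register iris_logistic_regression \
    --license="Apache 2.0" \
    --description="Logistic regression on the iris dataset" \
    --tag classifier --tag sgd
```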
## Explicit Python SDK Registration -To register a model using the Python SDK: +To register a model with the Python SDK, use: ```python -from zenml import Model from zenml.client import Client Client().create_model( @@ -11292,11 +11384,10 @@ Client().create_model( ``` ## Implicit Registration by ZenML -Models can be registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator: +Models are commonly registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator. Here’s an example of a training pipeline: ```python -from zenml import pipeline -from zenml import Model +from zenml import pipeline, Model @pipeline( enable_cache=False, @@ -11309,67 +11400,51 @@ from zenml import Model def train_and_promote_model(): ... ``` -This approach creates a new model version while linking to the associated artifacts. + +Running this pipeline creates a new model version and links it to the artifacts. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-artifacts-from-model.md === -### Summary of Documentation on Loading Artifacts from a Model +# Summary of Loading Artifacts from a Model -This documentation outlines how to load artifacts from a model in a two-pipeline project, where the first pipeline handles training and the second performs batch inference using the trained model artifacts. - -#### Key Points: +This documentation explains how to load artifacts between pipelines in a machine learning project using ZenML. It focuses on a two-pipeline setup where the first pipeline trains a model, and the second pipeline performs batch inference using the trained model artifacts. -1. **Model Context**: - - Use `get_pipeline_context().model` to access the model context during pipeline execution. - - The model version (e.g., `ModelStages.PRODUCTION`) may change before execution, affecting artifact retrieval. +## Key Points: -2. **Artifact Loading**: - - Artifacts are loaded at runtime, ensuring the correct version is used during step execution. - - Example of loading a trained model artifact: - ```python - model.get_model_artifact("trained_model") - ``` +1. **Model Context**: Use `get_pipeline_context().model` to access the model context during pipeline execution. This context is evaluated at runtime, not during pipeline compilation. -3. **Pipeline Example**: - - The `do_predictions` pipeline demonstrates how to perform inference: - ```python - @pipeline( - model=Model(name="iris_classifier", version=ModelStages.PRODUCTION), - ) - def do_predictions(): - model = get_pipeline_context().model - inference_data = load_data() - predict(model=model.get_model_artifact("trained_model"), data=inference_data) - ``` +2. **Artifact Loading**: + - Use `model.get_model_artifact("trained_model")` to load the trained model artifact during inference. + - The artifact retrieval is delayed until the step is executed. -4. **Alternative Method**: - - An alternative approach using the `Client` class to directly fetch the model version: - ```python - from zenml.client import Client +3. 
**Alternative Method**: You can also use the `Client` class to directly fetch the model version: + ```python + from zenml.client import Client - @pipeline - def do_predictions(): - model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) - inference_data = load_data() - predict(model=model.get_model_artifact("trained_model"), data=inference_data) - ``` + @pipeline + def do_predictions(): + model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) + inference_data = load_data() + predict( + model=model.get_model_artifact("trained_model"), + data=inference_data, + ) + ``` -5. **Execution Timing**: - - Artifact evaluation occurs during the actual step run, ensuring the latest model is utilized. +4. **Execution Timing**: In both methods, the actual artifact evaluation occurs during the step execution, ensuring that the most current model version is used. -This concise overview captures the essential technical details for understanding how to load artifacts from a model in ZenML. +This concise approach ensures that critical information about loading artifacts in ZenML pipelines is retained while eliminating redundancy. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/associate-a-pipeline-with-a-model.md === -### Summary: Associating a Pipeline with a Model +# Summary: Associating a Pipeline with a Model -To associate a pipeline with a model in ZenML, use the `@pipeline` decorator. This allows you to create a new version of the model if it already exists or attach the pipeline to an existing model version. +To associate a pipeline with a model in ZenML, use the following code structure: -#### Example Code: ```python from zenml import pipeline from zenml import Model @@ -11379,15 +11454,17 @@ from zenml.enums import ModelStages model=Model( name="ClassificationModel", # Unique model name tags=["MVP", "Tabular"], # Tags for filtering - version=ModelStages.LATEST # Specify model stage: [STAGING, PRODUCTION] + version=ModelStages.LATEST # Specify model version or stage ) ) def my_pipeline(): ... ``` -#### Configuration File Option: -You can also define the model configuration in a YAML file: +This code associates the pipeline with the specified model. If the model exists, a new version is created. To attach the pipeline to an existing model version, specify the version accordingly. + +Additionally, model configuration can be stored in a configuration file: + ```yaml model: name: text_classifier @@ -11395,7 +11472,7 @@ model: tags: ["classifier", "sgd"] ``` -This setup allows for organized model management and easy version control within your ZenML pipelines. +This allows for better management and organization of model attributes. ================================================== @@ -11403,13 +11480,13 @@ This setup allows for organized model management and easy version control within # Use the Model Control Plane -A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and essential business data, representing your ML product's business logic. It can be viewed as a "project" or "workspace." +A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and business data, encapsulating the logic of ML products. It can be viewed as a "project" or "workspace." 
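Because models are first-class entities, this consolidated view can also be queried programmatically. A minimal sketch, assuming a registered model named `iris_classifier`:

```python
from zenml.client import Client

client = Client()

# Iterate over the versions of a model to see how pipelines and artifacts
# have been grouped over time; the model name is illustrative.
for version in client.list_model_versions(model_name_or_id="iris_classifier"):
    print(version.name, version.stage)
```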
**Key Points:** -- The technical model (model files with weights and parameters) is a primary artifact associated with a ZenML Model, but other artifacts like training data and production predictions are also included. -- Models are first-class citizens in ZenML, managed through a unified API and the ZenML Pro dashboard. -- A Model captures lineage information and supports versioning, allowing you to stage different Model versions (e.g., `Production`) and make promotion decisions based on business rules. -- The Model Control Plane provides a centralized interface for managing models, combining pipeline logic, artifacts, and business data with the technical model. +- The technical model (model file/files with weights and parameters) is a primary artifact associated with a ZenML Model, but other artifacts like training data and production predictions are also included. +- Models are first-class entities in ZenML, accessible through the ZenML API, client, and the ZenML Pro dashboard. +- Models capture lineage information and support version staging, allowing users to manage predictions based on specific stages (e.g., `Production`) and apply business rules for version promotion. +- The Model Control Plane provides a unified interface for managing models, integrating pipelines, artifacts, and business data with the technical model. For a complete example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). @@ -11419,25 +11496,19 @@ For a complete example, refer to the [starter guide](../../../user-guide/starter # Advanced Topics in ZenML -This section addresses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the framework. +This section discusses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the framework. ## Key Features -1. **Custom Components**: Users can create custom components to extend ZenML's capabilities, allowing for tailored data processing and model training. +1. **Custom Components**: Users can create and integrate custom components into their pipelines, allowing for tailored data processing and model training. -2. **Pipelines**: ZenML supports complex pipelines that can be configured with various steps, including data ingestion, preprocessing, model training, and evaluation. +2. **Pipeline Configuration**: Advanced configurations enable users to define pipeline parameters, execution environments, and resource allocation for optimized performance. -3. **Artifact Management**: ZenML provides mechanisms for managing artifacts generated during pipeline execution, ensuring reproducibility and traceability. +3. **Artifact Management**: ZenML supports versioning and management of artifacts, ensuring reproducibility and traceability of experiments. -4. **Integrations**: The framework integrates with various tools and platforms (e.g., MLflow, TensorFlow, and Kubernetes) to streamline workflows. +4. **Integration with ML Tools**: ZenML can be integrated with various machine learning tools and platforms, facilitating seamless workflows. -5. **Versioning**: ZenML supports versioning of pipelines and components, enabling users to track changes and manage different iterations effectively. - -## Configuration - -- **Settings**: Configuration settings can be adjusted in the ZenML configuration file, allowing users to specify parameters like logging levels, storage backends, and execution environments. 
- -- **Environment Setup**: Users can set up different environments (e.g., local, cloud) to optimize performance and resource utilization. +5. **Monitoring and Logging**: Users can implement monitoring and logging to track pipeline performance and troubleshoot issues effectively. ## Example Code Snippet @@ -11446,137 +11517,121 @@ from zenml.pipelines import pipeline from zenml.steps import step @step -def data_ingestion(): - # Ingest data +def data_preprocessing(): + # Data preprocessing logic pass @step def model_training(data): - # Train model + # Model training logic pass @pipeline def my_pipeline(): - data = data_ingestion() + data = data_preprocessing() model_training(data) - -# Run the pipeline -my_pipeline() ``` -This concise overview provides essential insights into the advanced capabilities of ZenML, ensuring users can leverage its features effectively. +This concise overview of advanced topics in ZenML highlights essential features and capabilities, enabling users to leverage the framework effectively for complex machine learning workflows. ================================================== === File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md === -### Using a Prebuilt Image for ZenML Pipeline Execution - -ZenML allows you to skip building a Docker image for your pipeline by using a prebuilt image. This can save time and costs, especially if your dependencies are large or your local system is slow. However, using a prebuilt image means you won't receive updates to your code or dependencies unless they are included in the image. - -#### How to Use a Prebuilt Image - -To utilize a prebuilt image, configure the `DockerSettings` class by setting the `parent_image` and `skip_build` attributes: - -```python -docker_settings = DockerSettings( - parent_image="my_registry.io/image_name:tag", - skip_build=True -) - -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... -``` - -Ensure the image is accessible to the orchestrator and other components without ZenML's involvement. - -#### Requirements for the Parent Image +### Summary of ZenML Documentation on Using Prebuilt Docker Images -The prebuilt image must contain: -- All dependencies required to run your pipeline. -- Any code files if no code repository is registered and `allow_download_from_artifact_store` is `False`. - -If using an image built by ZenML from a previous run, it can be reused as long as it was built for the same stack. - -#### Stack and Integration Requirements - -To determine stack requirements: - -```python -from zenml.client import Client - -stack_name = -Client().set_active_stack(stack_name) -active_stack = Client().active_stack -stack_requirements = active_stack.requirements() -``` +**Overview**: This documentation explains how to skip building a Docker image for ZenML pipelines by using a prebuilt image, which can save time and costs during pipeline execution. -For integration dependencies: +**Key Points**: +- **Docker Image Building**: Normally, ZenML builds a Docker image with a base ZenML image and project dependencies. This can be time-consuming due to pulling base layers and pushing the final image. +- **Prebuilt Image Usage**: To avoid building an image, you can specify a prebuilt image in the `DockerSettings` class by setting the `parent_image` attribute and `skip_build` to `True`. 
+**Code Example**: ```python -from zenml.integrations.registry import integration_registry -from zenml.integrations.constants import HUGGINGFACE, PYTORCH -import itertools - -required_integrations = [PYTORCH, HUGGINGFACE] -integration_requirements = set( - itertools.chain.from_iterable( - integration_registry.select_integration_requirements( - integration_name=integration, - target_os=OperatingSystemType.LINUX, - ) - for integration in required_integrations - ) +docker_settings = DockerSettings( + parent_image="my_registry.io/image_name:tag", + skip_build=True ) -``` -#### Project-Specific and System Packages - -For project-specific dependencies, include them in your `Dockerfile`: - -```Dockerfile -RUN pip install -r FILE +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... ``` +- Ensure the prebuilt image is available in a registry accessible by the orchestrator. -For system packages, use: - -```Dockerfile -RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES -``` +**Requirements for the Parent Image**: +1. **Dependencies**: The prebuilt image must contain all necessary dependencies for the pipeline to run. +2. **Stack Requirements**: Use the following code to retrieve stack requirements: + ```python + from zenml.client import Client -#### Including Project Code Files + stack_name = + Client().set_active_stack(stack_name) + active_stack = Client().active_stack + stack_requirements = active_stack.requirements() + ``` +3. **Integration Requirements**: Gather integration dependencies using: + ```python + from zenml.integrations.registry import integration_registry + from zenml.integrations.constants import HUGGINGFACE, PYTORCH + + required_integrations = [PYTORCH, HUGGINGFACE] + integration_requirements = set( + itertools.chain.from_iterable( + integration_registry.select_integration_requirements( + integration_name=integration, + target_os=OperatingSystemType.LINUX, + ) + for integration in required_integrations + ) + ) + ``` +4. **Project-Specific Requirements**: Install project dependencies via: + ```Dockerfile + RUN pip install -r FILE + ``` +5. **System Packages**: Include necessary `apt` packages: + ```Dockerfile + RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES + ``` +6. **Project Code Files**: Ensure your pipeline code is available: + - If a code repository is registered, ZenML will fetch the code. + - If `allow_download_from_artifact_store` is `True`, ZenML will upload code to the artifact store. + - If both options are disabled, include code files directly in the image (not recommended). -- If a code repository is registered, ZenML will handle code retrieval. -- If `allow_download_from_artifact_store` is `True`, ZenML will upload your code. -- If both options are disabled, include your code files in the image, ideally in the `/app` directory. +**Additional Notes**: +- Ensure Python, `pip`, and `zenml` are installed in the image. +- Using a prebuilt image limits the ability to leverage updates to code or dependencies unless included in the image. -Ensure Python, `pip`, and `zenml` are installed in your image. +For further details, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md === -### ZenML Image Building and File Management +### ZenML Image Building Overview ZenML determines the root directory of source files in the following order: -1. 
If `zenml init` has been executed in the current or parent directory, that directory is the root. -2. If not, the parent directory of the executing Python file is used as the root. +1. If `zenml init` has been run in the current or parent directory, that directory is the root. +2. If not, the parent directory of the executing Python file is used as the source root. -You can control file handling in the root directory using the `DockerSettings` attributes: +For managing files in the root directory, use the following attributes in `DockerSettings`: -- **`allow_download_from_code_repository`**: If `True`, files from a registered code repository (with no local changes) will be downloaded instead of included in the image. -- **`allow_download_from_artifact_store`**: If the previous option is `False`, and no suitable code repository exists, files will be archived and uploaded to the artifact store if this is `True`. -- **`allow_including_files_in_images`**: If both previous options are `False`, files will be included in the Docker image if this is `True`. Modifications to code files will require rebuilding the Docker image. +- **`allow_download_from_code_repository`**: If `True` and the files are in a registered code repository with no local changes, files are downloaded from the repository instead of being included in the image. + +- **`allow_download_from_artifact_store`**: If the previous option is `False` or no suitable repository exists, and this is `True`, ZenML uploads your code to the artifact store. + +- **`allow_including_files_in_images`**: If both previous options are `False`, and this is `True`, files are included in the Docker image, requiring a new image build for any code changes. -**Warning**: Setting all attributes to `False` is not recommended, as it may lead to unintended behavior. You must ensure all files are correctly located in the Docker images used for pipeline execution. +**Warning**: Setting all attributes to `False` is not recommended, as it may lead to unexpected behavior. You must ensure all files are correctly placed in the Docker images used for pipeline execution. -### File Exclusion and Inclusion +### File Management -- **Excluding Files**: Use a `.gitignore` file to exclude files when downloading from a code repository. -- **Including Files**: To exclude files when including them in the Docker image, use a `.dockerignore` file. This can be done by either: - - Creating a `.dockerignore` in the source root directory. - - Specifying a `.dockerignore` file explicitly: +- **Excluding Files**: To exclude files when downloading from a repository, use a `.gitignore` file. + +- **Including Files**: When including files, ZenML copies all contents of the root directory into the Docker image. To exclude files and reduce image size, use a `.dockerignore` file, which can be specified in two ways: + - Place a `.dockerignore` file in the source root directory. + - Specify a `.dockerignore` file explicitly: ```python docker_settings = DockerSettings(build_config={"dockerignore": "/path/to/.dockerignore"}) @@ -11594,11 +11649,11 @@ This setup allows for efficient management of files in ZenML Docker images. ### Summary of Docker Settings Customization in ZenML -ZenML allows customization of Docker settings at the step level within a pipeline. By default, all steps use the same Docker image defined at the pipeline level. However, specific steps may require different Docker images due to unique requirements. 
This can be achieved by using the `DockerSettings` in the step decorator or through a configuration file. +ZenML allows customization of Docker settings at the step level, enabling the use of different Docker images for specific steps in a pipeline. By default, all steps utilize the Docker image defined at the pipeline level. -#### Customizing Docker Settings in Step Decorator +#### Customizing Docker Settings in a Step -To customize Docker settings directly in the step decorator, use the following code: +To customize Docker settings for a step, use the `DockerSettings` in the step decorator: ```python from zenml import step @@ -11615,9 +11670,9 @@ def training(...): ... ``` -#### Customizing Docker Settings in Configuration File +#### Alternative Configuration via YAML -Alternatively, you can define Docker settings in a configuration file as shown below: +Docker settings can also be specified in a configuration file: ```yaml steps: @@ -11633,19 +11688,19 @@ steps: - numpy ``` -This flexibility allows for tailored environments for different steps within the same pipeline. +For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md === -### How to Use a Private PyPI Repository +### Using a Private PyPI Repository To use a private PyPI repository that requires authentication, follow these steps: -1. **Store Credentials Securely**: Use environment variables for sensitive information. -2. **Configure Package Managers**: Set up `pip` or `poetry` to utilize these credentials during package installation. -3. **Custom Docker Images**: Consider creating Docker images with the necessary authentication pre-configured. +1. **Store Credentials Securely**: Use environment variables for credentials. +2. **Configure Package Managers**: Set up `pip` or `poetry` to use these credentials for package installations. +3. **Custom Docker Images**: Consider using Docker images with the necessary authentication configured. #### Example Code for Authentication Setup @@ -11672,172 +11727,134 @@ if __name__ == "__main__": my_pipeline() ``` -**Important Note**: Handle credentials with care. Always use secure methods for managing and distributing authentication information within your team. +**Important Note**: Handle credentials with care, using secure methods for management and distribution within your team. ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md === -### Summary: Reusing Builds in ZenML +# Reusing Builds in ZenML -#### Overview -This guide explains how to reuse builds in ZenML to enhance pipeline efficiency. A build encapsulates a pipeline and its stack, including Docker images and optionally the pipeline code. - -#### What is a Build? -A build represents a specific execution of a pipeline with its associated stack. It contains necessary Docker images and can optionally include pipeline code. +## Overview +ZenML allows for the reuse of builds to enhance pipeline run efficiency. When a pipeline is executed, ZenML checks for an existing build that matches the pipeline and stack; if found, it reuses it; otherwise, a new build is created. -**List Builds:** -```bash -zenml pipeline builds list --pipeline_id='startswith:ab53ca' -``` +## What is a Build? +A build encapsulates a pipeline and its associated stack, including Docker images with all necessary requirements. 
It may also contain the pipeline code. -**Create a Build:** -```bash -zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance -``` +### CLI Commands +- **List Builds:** + ```bash + zenml pipeline builds list --pipeline_id='startswith:ab53ca' + ``` +- **Create a Build:** + ```bash + zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance + ``` -#### Reusing Builds -ZenML automatically reuses existing builds if they match the pipeline and stack. You can specify a build ID to force the use of a particular build. Note that using a custom build will execute the code bundled in the Docker image, not your local changes. To incorporate local changes while reusing a build, disconnect your code from the build by either registering a code repository or using the artifact store. +## Reusing Builds +ZenML automatically finds existing builds. You can specify a build ID in the pipeline configuration to force the use of a specific build. Note that using a specific build will execute the code in the Docker image, not the local code. To include local changes, disconnect your code from the build by either registering a code repository or using the artifact store. -#### Using the Artifact Store +## Artifact Store ZenML can upload your code to the artifact store by default if no code repository is detected. This allows for code reuse without needing to rebuild Docker images. -#### Code Repositories -Connecting a git repository speeds up Docker builds by avoiding the inclusion of source files during image creation. Instead, files are downloaded into the container before execution. ZenML automatically identifies matching builds, eliminating the need to specify build IDs in a clean repository state. +## Code Repositories +Connecting a git repository can significantly speed up Docker builds. When a pipeline is run from a local repository, ZenML builds Docker images without including source files and downloads them into the container before execution. This method allows for the reuse of images built by colleagues. -**Install GitHub Integration:** +### Integration Installation +To utilize a code repository, ensure the relevant integrations are installed: ```sh zenml integration install github ``` -#### Detecting Local Code Repositories -ZenML checks if the files used in a pipeline are tracked in registered code repositories, computing the source root and verifying its inclusion in a local checkout. +## Detecting Local Code Repositories +ZenML checks if the files used in a pipeline are tracked in registered code repositories. This involves computing the source root and verifying its inclusion in a local checkout. -#### Tracking Code Versions -When a local repository is detected, ZenML stores the current commit reference for the pipeline run. This tracking only occurs if the local checkout is clean, ensuring the pipeline runs with the exact code from the repository. +## Tracking Code Versions +If a local repository is detected, ZenML stores a reference to the current commit for the pipeline run, ensuring reproducibility. This reference is only tracked if the local checkout is clean. -#### Best Practices -- Ensure the local checkout is clean and the latest commit is pushed to avoid file download failures. -- For options to disable or enforce file downloading, refer to the Docker settings documentation. +## Best Practices +- Ensure your local checkout is clean and the latest commit is pushed to avoid file download failures. 
+- For options to disable or enforce file downloads, refer to the [Docker settings documentation](./docker-settings-on-a-pipeline.md). -This guide provides essential practices for reusing builds in ZenML, enhancing efficiency while ensuring code integrity. +By following these guidelines, you can effectively reuse builds and optimize your ZenML pipeline runs. ================================================== === File: docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md === -# Specifying Pip Dependencies and Apt Packages +# Summary of Specifying Pip Dependencies and Apt Packages -**Note:** Configuration for pip and apt dependencies applies only to remote pipelines, not local ones. - -When a pipeline runs with a remote orchestrator, a Dockerfile is dynamically generated to build the Docker image. You can import `DockerSettings` using: - -```python -from zenml.config import DockerSettings -``` +## Overview +The configuration for specifying pip and apt dependencies is applicable only in remote pipelines, as local pipelines do not utilize Docker images. When a pipeline runs with a remote orchestrator, a Dockerfile is generated at runtime to build the Docker image. -By default, ZenML installs all packages required by your active stack. You can specify additional packages in several ways: +## Key Points +- **DockerSettings Import**: Use `from zenml.config import DockerSettings`. +- **Automatic Package Installation**: ZenML installs all packages required by the active stack by default. -1. **Replicate Local Environment:** +### Methods to Specify Packages +1. **Replicate Local Environment**: ```python docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") - - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... ``` -2. **Custom Command for Requirements:** +2. **Custom Command for Requirements**: ```python - docker_settings = DockerSettings(replicate_local_python_environment=[ - "poetry", "export", "--extras=train", "--format=requirements.txt" - ]) - - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... + docker_settings = DockerSettings(replicate_local_python_environment=["poetry", "export", "--extras=train", "--format=requirements.txt"]) ``` -3. **Specify Requirements in Code:** +3. **Specify Requirements in Code**: ```python docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) - - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... ``` -4. **Specify a Requirements File:** +4. **Specify a Requirements File**: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") - - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... ``` -5. **Specify ZenML Integrations:** +5. **Specify ZenML Integrations**: ```python from zenml.integrations.constants import PYTORCH, EVIDENTLY - docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY]) - - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... ``` -6. **Specify Apt Packages:** +6. **Specify Apt Packages**: ```python docker_settings = DockerSettings(apt_packages=["git"]) - - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... ``` -7. **Disable Automatic Stack Requirement Installation:** +7. 
**Prevent Automatic Installation**: ```python docker_settings = DockerSettings(install_stack_requirements=False) - - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... ``` -8. **Custom Docker Settings for Steps:** +8. **Custom Docker Settings for Steps**: ```python docker_settings = DockerSettings(requirements=["tensorflow"]) - - @step(settings={"docker": docker_settings}) - def my_training_step(...): - ... ``` -**Installation Order:** +### Installation Order +ZenML installs packages in the following order: - Local Python environment packages -- Stack requirements (if not disabled) +- Stack requirements (unless disabled) - Required integrations - Specified requirements -**Additional Installer Arguments:** +### Additional Installer Arguments +To customize the package installer: ```python docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000}) - -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... ``` -**Experimental `uv` Installer:** +### Experimental Feature: Using `uv` +To use `uv` for faster package installation: ```python docker_settings = DockerSettings(python_package_installer="uv") - -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... ``` -*Note:* `uv` may be less stable than `pip`. Refer to [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/) for details on using `uv` with PyTorch. +Note: `uv` is experimental and may lead to installation errors; switch back to `pip` if issues arise. + +### Documentation Reference +For more details on using `uv` with PyTorch, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). ================================================== @@ -11845,18 +11862,18 @@ def my_pipeline(...): ### Summary of ZenML Docker Integration -ZenML allows users to specify a custom Dockerfile, build context directory, and build options for dynamic image creation during pipeline execution. The build process operates as follows: +ZenML allows users to specify custom Dockerfiles, build contexts, and build options for dynamic image creation during pipeline execution. The build process operates as follows: -- **No Dockerfile Specified**: If requirements or environment variables necessitate an image build, ZenML builds a new image; otherwise, it uses the specified `parent_image`. +- **No Dockerfile Specified**: If the pipeline requires an image (due to requirements, environment variables, or file copying), ZenML builds a new image. Otherwise, it uses the specified `parent_image`. -- **Dockerfile Specified**: ZenML builds an image from the specified Dockerfile. If additional requirements necessitate another image, it builds a second image; otherwise, it uses the first image for the pipeline. +- **Dockerfile Specified**: ZenML builds an image from the provided Dockerfile. If additional requirements necessitate another image, ZenML builds a second image; otherwise, it uses the first image for the pipeline. -The installation of packages follows this order (each step optional): -1. Packages from the local Python environment. +The `DockerSettings` configuration determines the order of package installations: +1. Local Python environment packages. 2. Packages from the `requirements` attribute. 3. Packages from `required_integrations` and stack requirements. -**Note**: The intermediate image may also be used directly to execute pipeline steps based on Docker settings. 
+**Note**: The intermediate image may also be used directly to execute pipeline steps. ### Example Code ```python @@ -11872,21 +11889,23 @@ docker_settings = DockerSettings( @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... -``` - -This concise overview captures the essential technical details and code structure necessary for understanding ZenML's Docker integration. +``` ================================================== === File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md === -### Image Builder Definition in ZenML +### ZenML Image Builder Overview -ZenML executes pipeline steps sequentially in the active Python environment when running locally. For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in isolated environments. By default, execution environments are created locally using the local Docker client, which requires Docker installation and permissions. +ZenML allows for the execution of pipeline steps in a local Python environment or through remote orchestrators and step operators. When using remote setups, ZenML builds Docker images to ensure a consistent and isolated execution environment. -ZenML provides **image builders**, a specialized stack component for building and pushing Docker images in a dedicated environment. If no image builder is configured in the stack, ZenML defaults to the local image builder, ensuring consistency across builds. In this case, the image builder environment matches the client environment. +#### Key Points: +- **Execution Environment**: By default, ZenML uses the local Docker client to create execution environments, necessitating Docker installation and permissions. +- **Image Builders**: ZenML provides specialized image builders as stack components to build and push Docker images in dedicated environments. +- **Local Image Builder**: If no specific image builder is configured, ZenML defaults to the local image builder, ensuring consistency across builds. +- **Integration**: Users do not need to directly interact with image builders; as long as the desired image builder is part of the active ZenML stack, it will be automatically utilized by components requiring container image builds. -Users do not need to interact directly with the image builder in their code. As long as the desired image builder is part of the active ZenML stack, it will be automatically utilized by any component that requires container image building. +For more details, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== @@ -11894,41 +11913,48 @@ Users do not need to interact directly with the image builder in their code. As ### Summary: Using Docker Images to Run Your Pipeline -#### Overview -When running a pipeline with a remote orchestrator, ZenML dynamically generates a Dockerfile at runtime to build a Docker image. The Dockerfile includes the following steps: +This documentation outlines how to configure Docker settings for running pipelines in ZenML, particularly when using a remote orchestrator. A Dockerfile is dynamically generated at runtime to build a Docker image, following these key steps: -1. **Base Image**: Starts from a parent image with ZenML installed, typically the official ZenML image. -2. **Dependency Installation**: Automatically installs required pip dependencies based on stack integrations. +1. **Base Image**: Starts from a parent image with ZenML installed, typically the official ZenML image. 
Custom base images can be specified. +2. **Dependency Installation**: Automatically installs required pip dependencies based on the integrations used in the stack. Custom dependencies can be included as needed. 3. **Source Files**: Optionally copies source files into the Docker container for execution. 4. **Environment Variables**: Sets user-defined environment variables. -For customization options, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings). +For detailed configuration options, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings). -#### Configuring Docker Settings -You can configure Docker settings using the `DockerSettings` class: +### Configuring Docker Settings + +Docker settings can be customized using the `DockerSettings` class: ```python from zenml.config import DockerSettings ``` -**Pipeline Configuration**: Apply settings to all pipeline steps: +#### Pipeline-Level Configuration + +Apply settings to all steps in a pipeline: ```python docker_settings = DockerSettings() + @pipeline(settings={"docker": docker_settings}) -def my_pipeline(): +def my_pipeline() -> None: my_step() ``` -**Step Configuration**: Apply settings to individual steps: +#### Step-Level Configuration + +For more granular control, configure settings for individual steps: ```python @step(settings={"docker": docker_settings}) -def my_step(): +def my_step() -> None: pass ``` -**YAML Configuration**: Use a YAML file for settings: +#### YAML Configuration + +Settings can also be specified in a YAML file: ```yaml settings: @@ -11941,54 +11967,67 @@ steps: ... ``` -#### Specifying Docker Build Options +### Specifying Docker Build Options + To pass build options to the image builder: ```python docker_settings = DockerSettings(build_config={"build_options": {...}}) + @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` -**MacOS ARM Architecture**: Specify the target platform for local Docker caching: +**Note**: For MacOS ARM architecture, specify the target platform: ```python docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}}) ``` -#### Custom Parent Images -You can specify a custom parent image or Dockerfile for more control over the environment. Ensure the image has Python, pip, and ZenML installed. +### Using a Custom Parent Image -**Using a Pre-built Parent Image**: +You can specify a custom pre-built parent image or a Dockerfile for more control over the environment. Ensure the image has Python, pip, and ZenML installed. + +#### Pre-Built Parent Image + +To use a static parent image: ```python docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag") + @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` -**Skipping Docker Builds**: +To skip Docker builds: ```python -docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag", skip_build=True) -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... +docker_settings = DockerSettings( + parent_image="my_registry.io/image_name:tag", + skip_build=True +) ``` -**Warning**: This feature may lead to unintended behavior; ensure your code files are included in the specified image. For details, refer to [this guide](./use-a-prebuilt-image.md). +**Warning**: This advanced feature may lead to unintended behavior. 
Ensure that your code files are included in the specified image. + +For further details, refer to the complete documentation available at [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/customize-docker-builds/README.md === -### Customize Docker Builds +### Using Docker Images to Run Your Pipeline + +ZenML executes pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in an isolated environment. This section covers how to customize the Docker build process. -ZenML runs pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, it builds Docker images to execute pipelines in an isolated environment. This section covers how to manage the dockerization process effectively. +**Key Points:** +- **Local Execution:** Steps run sequentially in the active Python environment. +- **Remote Execution:** Docker images are created for isolated execution. +- **Customization:** Users can control the dockerization process for their pipelines. -For more details, refer to the documentation on [cloud orchestration](../../user-guide/production-guide/cloud-orchestration.md) and [step operators](../../component-guide/step-operators/step-operators.md). +For more details on orchestrators and step operators, refer to the respective guides. ================================================== @@ -11996,50 +12035,51 @@ For more details, refer to the documentation on [cloud orchestration](../../user # Pipeline Development in ZenML -This section outlines the key components and processes involved in developing pipelines using ZenML. +This section outlines the key aspects of developing pipelines in ZenML, a framework designed for building reproducible machine learning workflows. -## Key Components: -1. **Pipelines**: A series of steps that define the workflow for data processing and model training. -2. **Steps**: Individual tasks within a pipeline, such as data ingestion, preprocessing, training, and evaluation. -3. **Artifacts**: Outputs generated by each step, which can be used as inputs for subsequent steps. +## Key Components -## Development Process: -1. **Define Steps**: Use decorators to define each step in the pipeline. - ```python - @step - def data_ingestion(): - # Code for data ingestion - pass +1. **Pipelines**: Define a sequence of steps (components) that process data and produce outputs. Each pipeline can be parameterized and reused. - @step - def data_preprocessing(data): - # Code for preprocessing - pass - ``` +2. **Components**: Individual units of work within a pipeline, such as data ingestion, preprocessing, model training, and evaluation. Components can be implemented as Python functions or classes. -2. **Create Pipeline**: Combine steps into a pipeline. - ```python - @pipeline - def my_pipeline(): - data = data_ingestion() - processed_data = data_preprocessing(data) - ``` +3. **Data Management**: ZenML integrates with various data storage solutions, allowing for seamless data handling across different stages of the pipeline. -3. **Run Pipeline**: Execute the pipeline using the ZenML CLI or programmatically. - ```python - pipeline_instance = my_pipeline() - pipeline_instance.run() - ``` +4. **Artifact Management**: Outputs from components (artifacts) are tracked and stored, enabling reproducibility and versioning of results. + +5. 
**Orchestrators**: ZenML supports multiple orchestrators (e.g., Apache Airflow, Kubeflow) for executing pipelines, allowing users to choose based on their infrastructure needs.
+
+6. **Integrations**: ZenML provides integrations with popular machine learning libraries (e.g., TensorFlow, PyTorch) and tools (e.g., MLflow, S3) to enhance functionality.
+
+## Example Code Snippet
+
+```python
+from zenml import pipeline, step
+
+@step
+def data_ingestion() -> dict:
+    # Load and return data
+    ...
+
+@step
+def model_training(data: dict):
+    # Train a model on the ingested data and return it
+    ...
+
+@pipeline
+def training_pipeline():
+    data = data_ingestion()
+    model = model_training(data)
+```

-## Configuration:
-- **Parameters**: Customize steps with parameters to control behavior.
-- **Environment**: Define execution environments for reproducibility.
+## Important Considerations

-## Best Practices:
-- Modularize steps for reusability.
-- Version control artifacts for tracking changes.
+- **Reproducibility**: Ensure that all components are designed to be deterministic and that artifacts are versioned.
+- **Modularity**: Build components that can be reused across different pipelines to promote efficiency.
+- **Testing**: Implement unit tests for components to validate functionality before integration into pipelines.

-This summary provides a foundational understanding of pipeline development in ZenML, focusing on the structure, creation, and execution of pipelines while highlighting best practices.
+This concise overview provides essential information for developing pipelines using ZenML, focusing on components, data management, and integration capabilities.

==================================================

@@ -12049,9 +12089,9 @@ This summary provides a foundational understanding of pipeline development in Ze

To run ZenML steps defined in notebook cells remotely (using a remote orchestrator or step operator), the following conditions must be met:

-- The cell must contain only Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed.
-- The cell must not call code from other notebook cells. However, importing functions or classes from Python files is permitted.
-- The cell must independently handle all necessary imports, including ZenML imports (e.g., `from zenml import step`), without relying on imports from previous cells.
+- The cell can only contain Python code; Jupyter magic commands (`%`) or shell commands (`!`) are not permitted.
+- The cell must not call code from other notebook cells. However, functions or classes imported from Python files are allowed.
+- The cell must handle all necessary imports independently, including ZenML imports like `from zenml import step`.

==================================================

=== File: docs/book/how-to/pipeline-development/run-pipelines-in-notebooks/run-a-single-step-from-a-notebook.md ===

### Summary of Running a Single Step from a Notebook

-To execute a single step remotely from a notebook using ZenML, call the step as a Python function. ZenML will create a pipeline with the specified step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining remote steps in notebook cells.
+To execute a single step remotely from a notebook using ZenML, you can call the step like a standard Python function. ZenML will create a pipeline with that step and run it on the active stack.
Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining steps in notebook cells.

-#### Code Example
+#### Example Code

```python
from zenml import step
import pandas as pd
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC
-from typing import Tuple
-from typing_extensions import Annotated
+from typing import Tuple, Annotated

@step(step_operator="")
def svc_trainer(
@@ -12084,7 +12123,7 @@ def svc_trainer(
    print(f"Train accuracy: {train_acc}")
    return model, train_acc

-# Prepare training data
+# Sample data
X_train = pd.DataFrame(...)
y_train = pd.Series(...)

@@ -12092,7 +12131,10 @@ y_train = pd.Series(...)
model, train_acc = svc_trainer(X_train=X_train, y_train=y_train)
```

-This code defines a step for training a Support Vector Classifier (SVC) and demonstrates how to call it directly, resulting in a pipeline execution on the active stack.
+### Key Points
+- The step can be executed directly in a notebook.
+- Keep the notebook-cell limitations in mind when defining steps.
+- The example demonstrates training a Support Vector Classifier (SVC) using provided training data.

==================================================

@@ -12100,25 +12142,28 @@ This code defines a step for training a Support Vector Classifier (SVC) and demo

### Summary: Running Remote Pipelines from Jupyter Notebooks

-ZenML allows the definition and execution of steps and pipelines directly within Jupyter Notebooks. The process involves extracting code from notebook cells and executing it as Python modules within Docker containers for remote execution.
+ZenML allows users to define and execute steps and pipelines within Jupyter notebooks remotely. The process involves extracting code from notebook cells and running it as Python modules in Docker containers.

**Key Points:**
-- **Execution Environment:** Notebook cells must adhere to specific conditions for ZenML to function properly.
-- **Documentation Links:**
-  - [Limitations of defining steps in notebook cells](limitations-of-defining-steps-in-notebook-cells.md)
-  - [Run a single step from a notebook](run-a-single-step-from-a-notebook.md)
+- **Execution Environment:** Notebook cells must adhere to specific conditions for ZenML to run them remotely.
+- **Limitations:** There are certain limitations when defining steps in notebook cells. Refer to the documentation for details.
+- **Single Step Execution:** Users can run individual steps from a notebook. More information is available in the relevant section.
+
+**Related Documentation:**
+- [Limitations of defining steps in notebook cells](limitations-of-defining-steps-in-notebook-cells.md)
+- [Run a single step from a notebook](run-a-single-step-from-a-notebook.md)

-This setup enables efficient remote execution of data workflows while leveraging the interactive capabilities of Jupyter Notebooks.
+This setup enables efficient remote execution of ML workflows while preserving the interactive capabilities of Jupyter notebooks.

==================================================

=== File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md ===

-### Summary of Configuration File Generation in ZenML
+### Summary of ZenML Configuration Template Documentation

-To create a configuration file for your pipeline, you can use the `.write_run_configuration_template()` method, which generates a YAML template with all options commented out for customization.
+To create a configuration file for your ZenML pipeline, you can autogenerate a YAML template using the `.write_run_configuration_template()` method. This method generates a YAML file with all options commented out, allowing you to select relevant settings. -#### Code Example +#### Example Code ```python from zenml import pipeline @@ -12130,42 +12175,56 @@ def simple_ml_pipeline(parameter: int): simple_ml_pipeline.write_run_configuration_template(path="") ``` -#### Generated YAML Configuration Template +#### Generated YAML Configuration Template Structure The generated YAML template includes various sections, such as: -- **Build and Settings**: Options for pipeline build and Docker settings. -- **Model Metadata**: Fields for model details like name, description, and tags. -- **Parameters**: Optional parameters for the pipeline. -- **Schedule**: Configuration for scheduling pipeline runs. -- **Steps**: Detailed settings for each step in the pipeline, including: - - **Load Data**: Metadata and settings for the data loading step. - - **Train Model**: Metadata and settings for the model training step. +- **Pipeline Settings** + - `build`: Pipeline build configuration. + - `enable_artifact_metadata`: Optional boolean. + - `enable_cache`: Optional boolean. + - `model`: Contains metadata about the model (e.g., `name`, `tags`, `version`). + +- **Parameters** + - `parameters`: Optional mapping for pipeline parameters. + - `run_name`: Optional string for naming the run. + +- **Schedule** + - `schedule`: Configuration for scheduling runs (e.g., `cron_expression`, `catchup`). + +- **Settings** + - **Docker Settings**: Configuration for Docker environment (e.g., `apt_packages`, `parent_image`). + - **Resource Allocation**: CPU and GPU counts, memory specifications. -Each step can include options for enabling artifact metadata, caching, logging, and Docker settings. +- **Steps** + - Each step (e.g., `load_data`, `train_model`) includes: + - Metadata options (e.g., `enable_step_logs`, `experiment_tracker`). + - Model configuration similar to the main model section. + - Docker settings and resource allocations specific to the step. #### Additional Configuration -You can also specify a stack when generating the template by using: +You can also specify a stack when generating the template using: ```python simple_ml_pipeline.write_run_configuration_template(stack=) ``` -This allows for tailored configurations based on the selected stack. +For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/runtime-configuration.md === -### Summary: Configuring Runtime Settings in ZenML +# Summary of ZenML Settings Configuration -**Overview**: ZenML allows configuration of runtime settings for stack components and pipelines through a central concept called `BaseSettings`. These settings enable customization of resources, containerization, and stack component-specific configurations. +## Overview +ZenML allows configuration of runtime settings for stack components and pipelines through a central concept called `BaseSettings`. These settings enable customization of resources, containerization, and stack component-specific configurations. -#### Types of Settings +## Types of Settings 1. **General Settings**: Applicable to all ZenML pipelines. - Examples: - `DockerSettings`: Configure Docker settings. - `ResourceSettings`: Specify resource requirements. -2. 
**Stack-Component-Specific Settings**: Provide runtime configurations for specific stack components, identified by keys in the format `` or `.`. +2. **Stack-Component-Specific Settings**: Provide runtime configurations for specific components. The key format is `` or `.`. - Examples: - `SkypilotAWSOrchestratorSettings` - `KubeflowOrchestratorSettings` @@ -12176,29 +12235,29 @@ This allows for tailored configurations based on the selected stack. - `VertexStepOperatorSettings` - `AzureMLStepOperatorSettings` -#### Registration-Time vs Real-Time Settings +## Registration-Time vs Real-Time Settings - **Registration-Time Settings**: Static configurations set during component registration (e.g., `tracking_url` for MLflow). - **Real-Time Settings**: Dynamic configurations that can change with each pipeline run (e.g., `experiment_name`). -Default values can be set during registration, which can be overridden at runtime. - -#### Key Specification for Settings -- Use the correct key format (`` or `.`) to ensure settings are applied to the correct component flavor. If the settings do not match the component flavor, they will be ignored. +Default values can be set during registration but can be overridden at runtime. -#### Example Code Snippets +## Key Specification for Settings +When specifying stack-component-specific settings, use the correct key format. If only the category is provided, ZenML applies settings to the corresponding flavor of the component. If incompatible, the settings will be ignored. -**Python Code**: +## Code Examples +### Python ```python @step(step_operator="nameofstepoperator", settings={"step_operator": {"estimator_args": {"instance_type": "m7g.medium"}}}) def my_step(): ... +# Using the class @step(step_operator="nameofstepoperator", settings={"step_operator": SagemakerStepOperatorSettings(instance_type="m7g.medium")}) def my_step(): ... ``` -**YAML Configuration**: +### YAML ```yaml steps: my_step: @@ -12209,7 +12268,7 @@ steps: instance_type: m7g.medium ``` -This documentation provides a foundational understanding of configuring runtime settings in ZenML, emphasizing the distinction between general and component-specific settings, as well as their application in both registration and execution contexts. +This documentation provides a comprehensive guide to configuring runtime settings in ZenML, ensuring that users can effectively manage their pipeline configurations. For the latest information, refer to the [up-to-date ZenML documentation](https://docs.zenml.io). ================================================== @@ -12222,8 +12281,10 @@ To extract the configuration used in a completed pipeline run, you can access th from zenml.client import Client pipeline_run = Client().get_pipeline_run() + # General configuration for the pipeline pipeline_run.config + # Configuration for a specific step pipeline_run.steps[].config ``` @@ -12234,15 +12295,14 @@ This allows you to retrieve both the overall pipeline configuration and the conf === File: docs/book/how-to/pipeline-development/use-configuration-files/how-to-use-config.md === -### Configuration File Usage in ZenML +### ZenML Configuration Files -**Best Practices:** -It is recommended to use a YAML configuration file to separate configuration from code, although configurations can also be specified directly in code. +**Overview**: ZenML allows configuration through YAML files, which is recommended for separating configuration from code. 
-**Applying Configuration:**
-Use the `with_options(config_path=)` pattern to apply configurations to a pipeline.
+**Applying a Configuration**:
+- Use the `with_options(config_path=...)` pattern to apply a YAML file that specifies pipeline parameters and step configurations.

-**Example YAML Configuration:**
+**YAML Configuration**:

```yaml
enable_cache: False

@@ -12254,7 +12314,7 @@ steps:
    enable_cache: False
```

-**Example Python Code:**
+**Python Code Example**:

```python
from zenml import step, pipeline

@@ -12265,13 +12325,14 @@ def load_data(dataset_name: str) -> dict:
@pipeline
def simple_ml_pipeline(dataset_name: str):
    load_data(dataset_name)
-
-if __name__=="__main__":
+
+if __name__ == "__main__":
    simple_ml_pipeline.with_options(config_path="path/to/config.yaml")()
```

-**Functionality:**
-The above code runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to "best_dataset".
+**Functionality**: The above code runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to `best_dataset`.

==================================================

=== File: docs/book/how-to/pipeline-development/use-configuration-files/configuration-hierarchy.md ===

### Configuration Hierarchy in ZenML

-In ZenML, configuration settings follow a specific hierarchy:
+In ZenML, configurations can be set at multiple levels, with specific rules governing their precedence:

1. **Code Configurations**: Override YAML file configurations.
2. **Step-Level Configurations**: Override pipeline-level configurations.
-3. **Attribute Merging**: Dictionaries for attributes are merged.
+3. **Attribute Merging**: Dictionary-valued attributes are merged rather than replaced.

-### Example Code
+#### Example Code

```python
from zenml import pipeline, step

@@ -12303,7 +12364,7 @@ def train_model(data: dict) -> None:
def simple_ml_pipeline(parameter: int):
    ...

-# Configuration results
+# Configuration results after merging
train_model.configuration.settings["resources"]
# -> cpu_count: 2, gpu_count=1, memory="2GB"

@@ -12311,10 +12372,7 @@ simple_ml_pipeline.configuration.settings["resources"]
# -> cpu_count: 2, memory="1GB"
```

-### Key Points
-
-- Step configurations take precedence over pipeline configurations.
-- Resource settings can be defined at both the step and pipeline levels, with the step settings overriding the pipeline settings where applicable.
+This example illustrates how ZenML merges configurations, using step settings to override pipeline settings where applicable. For the `train_model` step, the final resource settings reflect both step and pipeline configurations.

==================================================

=== File: docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md ===

# Configuration Overview

-This document outlines the configuration options available in a YAML file for a ZenML pipeline. Below is a concise summary of key components and their significance.
+This documentation outlines the configuration options available in a YAML file for a ZenML pipeline. Below is a summary of key sections and parameters.

## Sample YAML Configuration
-
```yaml
build: dcd6fafb-c200-4e85-8328-428bef98d804

@@ -12397,20 +12454,19 @@ steps:
      instance_type: m7g.medium
```

-## Key Configuration Elements
+## Key Configuration Parameters

-### `enable_XXX` Parameters
-These boolean flags control various behaviors:
-- `enable_artifact_metadata`: Associates metadata with artifacts.
-- `enable_artifact_visualization`: Attaches visualizations of artifacts.
-- `enable_cache`: Enables caching.
-- `enable_step_logs`: Enables tracking of step logs.
+### `enable_XXX` Flags
+- **`enable_artifact_metadata`**: Attach metadata to artifacts.
+- **`enable_artifact_visualization`**: Attach visualizations of artifacts.
+- **`enable_cache`**: Enable caching.
+- **`enable_step_logs`**: Enable step logs.

### `build` ID
Specifies the UUID of the Docker image to use. If provided, Docker image building is skipped.

### `model`
-Defines the ZenML Model used in the pipeline:
+Defines the ZenML model for the pipeline.
```yaml
model:
  name: "ModelName"
@@ -12420,7 +12476,7 @@ model:
```

### Pipeline and Step `parameters`
-Parameters can be defined at both pipeline and step levels:
+Parameters can be defined at both pipeline and step levels, with step-level parameters taking precedence.
```yaml
parameters:
  gamma: 0.01

steps:
  trainer:
    parameters:
      gamma: 0.001
```
-The step-level parameters take precedence.

-### `run_name`
-Specifies a unique name for the run. Avoid static values to prevent conflicts.
+### Setting the `run_name`
+The run name must be unique for each execution; avoid static values to prevent conflicts.
+```yaml
+run_name: <INSERT_RUN_NAME>
+```

### Stack Component Runtime Settings
-Configurations for Docker and resource settings:
+Settings for Docker and resources are specified under the `settings` key.
```yaml
settings:
  docker:
    requirements:
      - pandas
  resources:
    cpu_count: 2
    gpu_count: 1
    memory: "4Gb"
```

### Step-Specific Configuration
Certain configurations apply only at the step level:
-- `experiment_tracker`: Name of the experiment tracker for the step.
-- `step_operator`: Name of the step operator for the step.
-- `outputs`: Configuration for output artifacts, including materializer source paths.
+- **`experiment_tracker`**: Name of the experiment tracker for the step.
+- **`step_operator`**: Name of the step operator for the step.
+- **`outputs`**: Configuration for output artifacts, including materializer source paths.

-This summary retains critical technical information while providing a clear overview of the configuration options available for ZenML pipelines.
+### Hooks
+Specify `failure_hook_source` and `success_hook_source` for handling step outcomes.
+
+This summary captures the essential configuration details and structure for setting up a ZenML pipeline using a YAML file. For further information, refer to the respective sections in the documentation.

==================================================

=== File: docs/book/how-to/pipeline-development/use-configuration-files/README.md ===

-ZenML allows for easy configuration and execution of pipelines using YAML files at runtime. These configuration files enable users to set parameters, manage caching behavior, and configure stack components. Key topics include:
+ZenML allows for the configuration and execution of pipelines using YAML files, enabling runtime adjustments for parameters, caching behavior, and stack components. Key topics include:
+
+- **Configurable Options**: Details on what can be configured in a pipeline.
+- **Configuration Hierarchy**: Structure and precedence of configuration settings.
+- **Template Generation**: Instructions for autogenerating a template YAML file.

-- **What Can Be Configured**: Details on configurable elements in ZenML.
-- **Configuration Hierarchy**: Explanation of the structure and precedence of configurations.
-- **Autogenerate a Template YAML File**: Instructions for creating a base YAML configuration file automatically. +For further details, refer to the following sections: +- [What can be configured](what-can-be-configured.md) +- [Configuration hierarchy](configuration-hierarchy.md) +- [Autogenerate a template YAML file](autogenerate-a-template-yaml-file.md) -For more information, refer to the linked sections on each topic. +This streamlined approach simplifies the management of pipeline configurations in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md === -### Summary: Creating Pipeline Variants for Local Development and Production in ZenML +### Summary: Creating Pipeline Variants in ZenML -When developing ZenML pipelines, it's useful to create different variants for local development and production to facilitate quick iterations while maintaining a robust production setup. This can be achieved through: +ZenML allows you to create different variants of your pipelines for local development and production, enhancing development speed while maintaining production integrity. You can achieve this through: 1. **Configuration Files** 2. **Code Implementation** 3. **Environment Variables** #### 1. Using Configuration Files -ZenML supports YAML configuration files to specify pipeline and step settings. For example, a development configuration might look like this: +You can specify pipeline configurations using YAML files. For example, a development configuration might look like this: ```yaml enable_cache: False @@ -12492,7 +12559,7 @@ steps: enable_cache: False ``` -To apply this configuration: +To apply this configuration in your pipeline: ```python from zenml import step, pipeline @@ -12509,7 +12576,7 @@ if __name__ == "__main__": ml_pipeline.with_options(config_path="path/to/config.yaml")() ``` -Separate configuration files can be created for development (`config_dev.yaml`) and production (`config_prod.yaml`). +You can create separate files for development (`config_dev.yaml`) and production (`config_prod.yaml`). #### 2. Implementing Variants in Code You can also define pipeline variants directly in your code: @@ -12532,10 +12599,10 @@ if __name__ == "__main__": ml_pipeline(is_dev=is_dev) ``` -This method uses a boolean flag to switch between variants. +This method uses a boolean flag to switch between environments. #### 3. 
Using Environment Variables -Environment variables can dictate which configuration to use: +You can determine which configuration to use based on environment variables: ```python import os @@ -12548,15 +12615,14 @@ Run the pipeline with: - `ZENML_ENVIRONMENT=dev python run.py` - `ZENML_ENVIRONMENT=prod python run.py` -#### Development Variant Considerations -For faster iteration in development, consider: -- Smaller datasets -- Local execution stack -- Reduced training epochs -- Decreased batch size -- Smaller base models +### Development Variant Considerations +For development variants, optimize for faster iteration: +- Use smaller datasets +- Specify a local execution stack +- Reduce training epochs and batch size +- Use smaller base models -Example configuration: +Example configuration for development: ```yaml parameters: @@ -12579,128 +12645,120 @@ def ml_pipeline(is_dev: bool = False): train_model(epochs=epochs, batch_size=batch_size) ``` -By creating these variants, you can efficiently test and debug locally while ensuring a comprehensive setup for production, enhancing your development workflow. +By creating these variants, you can efficiently test and debug locally while maintaining a robust production setup. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md === -### Summary of ZenML Pipeline Cleanliness - -**Objective**: Maintain a clean pipeline environment during development. +### Summary of ZenML Documentation on Keeping Pipeline Runs Clean -#### 1. Running Locally -To avoid cluttering a shared server, disconnect and run a local server: -```bash -zenml login --local -``` -Reconnect to the remote server with: -```bash -zenml login -``` - -#### 2. Pipeline Runs -- **Unlisted Runs**: Create runs without associating them with a pipeline: - ```python - pipeline_instance.run(unlisted=True) - ``` - These runs won't appear on the pipeline's dashboard. - -- **Deleting Pipeline Runs**: To delete a specific run: - ```bash - zenml pipeline runs delete - ``` - To delete all runs from the last 24 hours: - ```python - #!/usr/bin/env python3 - import datetime - from zenml.client import Client +#### Overview +This documentation provides strategies to maintain a clean development environment while working with ZenML pipelines, preventing clutter in the dashboard and server. - def delete_recent_pipeline_runs(): - zc = Client() - time_filter = (datetime.datetime.utcnow() - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") - recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") - for run in recent_runs: - zc.delete_pipeline_run(run.id) - print(f"Deleted {len(recent_runs)} pipeline runs.") +#### Key Strategies - if __name__ == "__main__": - delete_recent_pipeline_runs() - ``` +1. **Run Locally**: + - To avoid cluttering a shared server, disconnect and run a local server: + ```bash + zenml login --local + ``` + - Reconnect to the remote server with: + ```bash + zenml login + ``` -#### 3. Pipelines -- **Deleting Pipelines**: Remove unnecessary pipelines: - ```bash - zenml pipeline delete - ``` +2. 
**Pipeline Runs**: + - **Unlisted Runs**: Create runs without associating them with a pipeline: + ```python + pipeline_instance.run(unlisted=True) + ``` + - **Deleting Runs**: + - Delete a specific run: + ```bash + zenml pipeline runs delete + ``` + - Delete all runs from the last 24 hours: + ```python + import datetime + from zenml.client import Client + + def delete_recent_pipeline_runs(): + zc = Client() + time_filter = (datetime.datetime.utcnow() - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") + recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") + for run in recent_runs: + zc.delete_pipeline_run(run.id) + print(f"Deleted {len(recent_runs)} pipeline runs.") + ``` -- **Unique Pipeline Names**: Assign custom names to differentiate runs: - ```python - training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") - training_pipeline() - ``` +3. **Pipelines**: + - **Deleting Pipelines**: + ```bash + zenml pipeline delete + ``` + - **Unique Pipeline Names**: Assign unique names to runs: + ```python + training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") + training_pipeline() + ``` -#### 4. Models -To delete a model: -```bash -zenml model delete -``` +4. **Models**: + - To delete a model: + ```bash + zenml model delete + ``` -#### 5. Artifacts -- **Pruning Artifacts**: Remove unreferenced artifacts: - ```bash - zenml artifact prune - ``` - Use flags `--only-artifact` or `--only-metadata` for specific deletions. +5. **Artifacts**: + - **Pruning Artifacts**: + ```bash + zenml artifact prune + ``` + - Control deletion behavior with `--only-artifact` and `--only-metadata` flags. -#### 6. Cleaning Environment -For a complete reset of local data: -```bash -zenml clean -``` -Use `--local` to delete local files related to the active stack. This command does not affect server data. +6. **Cleaning Environment**: + - Use `zenml clean` to delete all local pipelines, runs, and artifacts: + ```bash + zenml clean --local + ``` -By utilizing these strategies, you can keep your ZenML dashboard organized and focused on relevant runs. +By following these practices, users can maintain a clean and organized pipeline dashboard, focusing on relevant runs for their projects. For more details, refer to the full ZenML documentation. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/README.md === -### Develop Locally - -This section outlines best practices for developing pipelines locally, enabling faster iteration and cost-effective testing. Developers typically work with a smaller subset of data or synthetic data. ZenML supports local development, guiding users through the process of building pipelines locally before deploying them on more powerful remote hardware. +# Develop Locally -![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) +This section outlines best practices for developing pipelines locally, allowing for faster iteration and cost-effective testing. It is common to work with a smaller subset of data or synthetic data during local development. ZenML supports this approach, enabling users to develop locally and later push and run pipelines on more powerful remote hardware. 
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fetching-pipelines.md === -### Summary: Inspecting a Finished Pipeline Run and Its Outputs +### Summary of ZenML Documentation on Inspecting Pipeline Runs #### Overview -After a pipeline run is completed, users can access various information programmatically, including loading artifacts, accessing metadata, and inspecting the lineage of pipeline runs. +This documentation covers how to inspect finished pipeline runs and their outputs in ZenML, including accessing artifacts, metadata, and lineage of runs. #### Pipeline Hierarchy -- **Pipelines** have multiple **Runs**. -- Each **Run** consists of multiple **Steps**. -- Each **Step** produces multiple **Artifacts**. - -```mermaid -flowchart LR - pipelines -->|1:N| runs - runs -->|1:N| steps - steps -->|1:N| artifacts -``` +- **Structure**: Pipelines have a 1-to-N relationship with runs, runs with steps, and steps with artifacts. + + ```mermaid + flowchart LR + pipelines -->|1:N| runs + runs -->|1:N| steps + steps -->|1:N| artifacts + ``` #### Fetching Pipelines -- **Get a specific pipeline**: +- **Get a Specific Pipeline**: ```python from zenml.client import Client pipeline_model = Client().get_pipeline("first_pipeline") ``` -- **List all pipelines**: +- **List All Pipelines**: - **Python**: ```python pipelines = Client().list_pipelines() @@ -12711,22 +12769,22 @@ flowchart LR ``` #### Pipeline Runs -- **Get all runs of a pipeline**: +- **Get All Runs**: ```python runs = pipeline_model.runs ``` -- **Get the last run**: +- **Get Last Run**: ```python last_run = pipeline_model.last_run # OR: pipeline_model.runs[0] ``` -- **Execute a pipeline and get the latest run**: +- **Execute Pipeline and Get Latest Run**: ```python run = training_pipeline() ``` -- **Get a specific run**: +- **Fetch Specific Run**: ```python pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") ``` @@ -12734,7 +12792,7 @@ flowchart LR #### Run Information - **Status**: ```python - status = run.status # Possible states: initialized, failed, completed, running, cached + status = run.status # States: initialized, failed, completed, running, cached ``` - **Configuration**: @@ -12749,28 +12807,28 @@ flowchart LR orchestrator_url = run_metadata["orchestrator_url"].value ``` -#### Steps -- **Get all steps of a run**: +#### Steps and Artifacts +- **Access Steps**: ```python - steps = run.steps - step = run.steps["first_step"] + steps = run.steps # Get all steps + step = run.steps["first_step"] # Get specific step ``` -#### Artifacts -- **Access output artifacts**: +- **Inspect Output Artifacts**: ```python - output = step.outputs["output_name"] # or step.output for single output - my_pytorch_model = output.load() + output = step.outputs["output_name"] # Access by name + output = step.output # If single output + my_pytorch_model = output.load() # Load artifact ``` -- **Fetch artifacts directly**: +- **Fetch Artifacts Directly**: ```python artifact = Client().get_artifact('iris_dataset') output = artifact.versions['2022'] ``` -#### Artifact Information -- **Metadata**: +#### Metadata and Visualizations +- **Artifact Metadata**: ```python output_metadata = output.run_metadata storage_size_in_bytes = output_metadata["storage_size"].value @@ -12778,29 +12836,31 @@ flowchart LR - **Visualizations**: ```python - output.visualize() + output.visualize() # Show visualizations in Jupyter ``` #### Fetching Information During Run Execution -To fetch 
information during a running pipeline: -```python -from zenml import get_step_context -from zenml.client import Client +- **Access Previous Runs**: + ```python + from zenml import get_step_context + from zenml.client import Client -@step -def my_step(): - current_run_name = get_step_context().pipeline_run.name - current_run = Client().get_pipeline_run(current_run_name) - previous_run = current_run.pipeline.runs[1] # index 0 is current -``` + @step + def my_step(): + current_run_name = get_step_context().pipeline_run.name + current_run = Client().get_pipeline_run(current_run_name) + previous_run = current_run.pipeline.runs[1] # Get previous run + ``` #### Code Example -Combining concepts into a simple script: +This example demonstrates loading a model from a pipeline: + ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split +from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @@ -12808,24 +12868,26 @@ from zenml.client import Client @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) - return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) + X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) + return X_train, X_test, y_train, y_test @step -def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[SVC, "trained_model"], Annotated[float, "training_acc"]]: - model = SVC(gamma=gamma).fit(X_train, y_train) - return model, model.score(X_train, y_train) +def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: + model = SVC(gamma=gamma) + model.fit(X_train.to_numpy(), y_train.to_numpy()) + return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() - svc_trainer(X_train=X_train, y_train=y_train, gamma=gamma) + svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": last_run = training_pipeline() model = last_run.steps["svc_trainer"].outputs["trained_model"].load() ``` -This summary captures the essential technical details and code snippets for inspecting pipeline runs and their outputs in ZenML. +This summary captures the essential technical details and code snippets necessary for understanding how to inspect pipeline runs in ZenML. ================================================== @@ -12833,16 +12895,15 @@ This summary captures the essential technical details and code snippets for insp ### ZenML Step Retry Configuration -ZenML offers a built-in mechanism to automatically retry steps upon failure, useful for handling intermittent issues. This is particularly beneficial when using GPU-backed hardware, where resource availability may fluctuate. +ZenML includes a built-in mechanism to automatically retry steps upon failure, useful for handling transient errors, such as resource availability on GPU-backed hardware. 
You can configure retries using three parameters:

-#### Retry Parameters
-You can configure the following parameters for step retries:
-- **max_retries:** Maximum retry attempts on failure.
+- **max_retries:** Maximum retry attempts.
- **delay:** Initial delay (in seconds) before the first retry.
- **backoff:** Multiplier for the delay after each retry.

-#### Using the @step Decorator
-You can define the retry configuration in your step as shown below:
+#### Example Usage with @step Decorator
+
+You can define retry configurations directly in your step using the `@step` decorator:

```python
from zenml.config.retry_config import StepRetryConfig

@@ -12856,16 +12917,11 @@ from zenml.config.retry_config import StepRetryConfig
)
def my_step() -> None:
    raise Exception("This is a test exception")
-
-steps:
-  my_step:
-    retry:
-      max_retries: 3
-      delay: 10
-      backoff: 2
```

-**Note:** Infinite retries are not supported. Setting `max_retries` to a high value or omitting it will still enforce an internal limit to prevent infinite loops. It's advisable to set a reasonable `max_retries` based on your use case.
+#### Important Notes
+
+- Infinite retries are not supported. Setting `max_retries` to a large value or omitting it will still enforce an internal maximum to avoid infinite loops. It's advisable to set a reasonable `max_retries` based on your use case.

### Related Documentation
- [Failure/Success Hooks](use-failure-success-hooks.md)

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md ===

@@ -12875,12 +12931,11 @@

-### Summary of Fan-in and Fan-out Patterns in ZenML
+### Summary of ZenML Fan-in and Fan-out Patterns Documentation

-**Overview:**
-The fan-out/fan-in pattern is a pipeline architecture that splits a single step into multiple parallel operations (fan-out) and consolidates the results back into a single step (fan-in). This pattern enhances parallel processing, distributed workloads, and data transformations.
+**Overview**: The fan-in and fan-out pattern is a pipeline architecture used for parallel processing. It involves a single step that splits into multiple parallel operations (fan-out) and then consolidates the results into a single step (fan-in). This is beneficial for tasks like distributed workloads and data transformations.

-**Example Code:**
+**Example Code**:
```python
from zenml import step, get_step_context, pipeline
from zenml.client import Client

@@ -12897,10 +12952,8 @@ def process_step(input_data: str) -> str:
def combine_step(step_prefix: str, output_name: str) -> None:
    run_name = get_step_context().pipeline_run.name
    run = Client().get_pipeline_run(run_name)
    # Fan-in: query the outputs of all parallel steps via the ZenML Client
    processed_results = {step_info.name: step_info.outputs[output_name][0].load() for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix)}
    print(",".join([f"{k}: {v}" for k, v in processed_results.items()]))

@pipeline(enable_cache=False)
@@ -12912,21 +12965,23 @@ def fan_out_fan_in_pipeline(parallel_count: int) -> None:

fan_out_fan_in_pipeline(parallel_count=8)
```

-**Use Cases:**
-- Parallel data processing
-- Distributed model training
-- Ensemble methods
-- Batch processing
-- Data validation across multiple sources
-- Hyperparameter tuning
+**Key Points**:
+- **Fan-out**: Enables parallel processing, enhancing resource utilization.
+- **Fan-in**: Aggregates results from parallel steps, useful for various applications such as: + - Parallel data processing + - Distributed model training + - Ensemble methods + - Batch processing + - Data validation + - Hyperparameter tuning -**Important Notes:** -- The fan-in step requires using the ZenML Client to query results from parallel steps. -- Limitations: - 1. Steps may run sequentially if the orchestrator does not support parallel execution. - 2. The number of steps must be predetermined; dynamic step creation is not supported. +**Limitations**: +1. Steps may run sequentially if the orchestrator does not support parallel execution (e.g., local orchestrator). +2. The number of steps must be predetermined; dynamic step creation is not supported. + +**Important Note**: When implementing the fan-in step, results from previous parallel steps must be queried using the ZenML Client, as direct result passing is not allowed. -This pattern is beneficial for optimizing resource utilization and managing complex workflows effectively. +For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== @@ -12949,7 +13004,7 @@ latest_run = p.last_run first_run = p[0] ``` -This code demonstrates how to access the latest run and the first run of a specified pipeline. +This code snippet demonstrates how to access the latest run and the first run of a specified pipeline. ================================================== @@ -12957,7 +13012,7 @@ This code demonstrates how to access the latest run and the first run of a speci # Tagging Pipeline Runs -You can specify tags for your pipeline runs in three ways: +You can specify tags for your pipeline runs in the following ways: 1. **Configuration File**: ```yaml @@ -12966,14 +13021,14 @@ You can specify tags for your pipeline runs in three ways: - tag_in_config_file ``` -2. **Using the `@pipeline` Decorator**: +2. **Code**: + - Using the `@pipeline` decorator: ```python @pipeline(tags=["tag_on_decorator"]) def my_pipeline(): ... ``` - -3. **Using `with_options` Method**: + - Using the `with_options` method: ```python my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) ``` @@ -12986,64 +13041,70 @@ When you run the pipeline, tags from all specified locations will be merged and ### Summary of Hyperparameter Tuning with ZenML -This documentation describes how to perform hyperparameter tuning using ZenML through a simple pipeline that implements a basic grid search for different learning rates. The process involves two main steps: `train_step` and `selection_step`. +**Overview**: This documentation describes how to perform hyperparameter tuning using ZenML, specifically through a grid search method across a single hyperparameter dimension (learning rate). The process involves training models with different learning rates and selecting the best-performing model. -#### Key Components: +**Key Components**: -1. **Train Step**: - - Trains a model using a specified learning rate. - - Returns the trained model. - ```python - @step - def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: - return ... # Train model with learning rate - ``` +1. **Steps**: + - **`train_step`**: Trains a model using a specified learning rate. + - **`selection_step`**: Evaluates trained models to determine the best hyperparameter based on performance. -2. **Selection Step**: - - Evaluates trained models to determine the best performing hyperparameters. 
- - Queries all artifacts produced by previous steps using ZenML Client. - ```python - @step - def selection_step(step_prefix: str, output_name: str) -> None: - run_name = get_step_context().pipeline_run.name - run = Client().get_pipeline_run(run_name) - trained_models_by_lr = {} - for step_name, step_info in run.steps.items(): - if step_name.startswith(step_prefix): - model = step_info.outputs[output_name][0].load() - lr = step_info.config.parameters["learning_rate"] - trained_models_by_lr[lr] = model - for lr, model in trained_models_by_lr.items(): - ... # Evaluate models - ``` - -3. **Pipeline Definition**: - - Constructs the pipeline to execute multiple training steps followed by the selection step. - ```python - @pipeline - def my_pipeline(step_count: int) -> None: - after = [] - for i in range(step_count): - train_step(learning_rate=i * 0.0001, id=f"train_step_{i}") - after.append(f"train_step_{i}") - selection_step(step_prefix="train_step_", output_name=model_output_name, after=after) +2. **Pipeline**: + - **`my_pipeline`**: Executes multiple `train_step` calls for a range of learning rates and then invokes `selection_step` to find the optimal model. - my_pipeline(step_count=4) - ``` +**Code Example**: +```python +from typing import Annotated +from sklearn.base import ClassifierMixin +from zenml import step, pipeline, get_step_context +from zenml.client import Client + +model_output_name = "my_model" + +@step +def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: + return ... # Train model with learning rate + +@step +def selection_step(step_prefix: str, output_name: str) -> None: + run_name = get_step_context().pipeline_run.name + run = Client().get_pipeline_run(run_name) + trained_models_by_lr = {} + + for step_name, step_info in run.steps.items(): + if step_name.startswith(step_prefix): + model = step_info.outputs[output_name][0].load() + lr = step_info.config.parameters["learning_rate"] + trained_models_by_lr[lr] = model + + for lr, model in trained_models_by_lr.items(): + ... # Evaluate models to find the best one + +@pipeline +def my_pipeline(step_count: int) -> None: + after = [] + for i in range(step_count): + train_step(learning_rate=i * 0.0001, id=f"train_step_{i}") + after.append(f"train_step_{i}") + + selection_step(step_prefix="train_step_", output_name=model_output_name, after=after) + +my_pipeline(step_count=4) +``` -#### Important Notes: -- The current limitation is that a variable number of artifacts cannot be passed into a step programmatically; hence, the selection step must query artifacts using the ZenML Client. -- Additional resources include example implementations for hyperparameter tuning, such as randomized search and selection of the best model based on defined metrics. +**Important Notes**: +- The current implementation requires querying artifacts from previous steps via the ZenML Client, as passing a variable number of artifacts programmatically is not supported. +- Additional resources include example implementations for randomized hyperparameter search and selection of the best model based on defined metrics, available in the ZenML GitHub repository. -For further exploration, refer to the [E2E example](https://github.com/zenml-io/zenml/tree/main/examples/e2e) in the ZenML repository. +For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). 
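
The randomized search mentioned above can be sketched as a small variation of `my_pipeline` — a hedged example reusing the `train_step`, `selection_step`, and `model_output_name` defined earlier; the sampling bounds are illustrative assumptions, not values from the ZenML examples:

```python
import random

from zenml import pipeline

@pipeline
def random_search_pipeline(step_count: int) -> None:
    after = []
    for i in range(step_count):
        # Sample each learning rate at random instead of stepping through a fixed grid
        train_step(learning_rate=random.uniform(1e-5, 1e-2), id=f"train_step_{i}")
        after.append(f"train_step_{i}")

    selection_step(step_prefix="train_step_", output_name=model_output_name, after=after)

random_search_pipeline(step_count=4)
```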
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md === -# Naming Pipeline Runs +### Summary of Pipeline Run Naming in ZenML -Pipeline run names are automatically generated based on the current date and time, as shown in the example: +When a pipeline run is executed, it is automatically assigned a name based on the current date and time, as shown below: ```bash Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s. @@ -13058,19 +13119,21 @@ training_pipeline = training_pipeline.with_options( training_pipeline() ``` -Run names must be unique. To ensure uniqueness, compute the run name dynamically or use placeholders that ZenML will replace. Placeholders can be set in the `@pipeline` decorator or the `pipeline.with_options` function. Standard placeholders include: +Run names must be unique. To ensure uniqueness, dynamically compute the run name or include placeholders that ZenML will replace. Placeholders can be set in the `@pipeline` decorator or in the `pipeline.with_options` function. Standard placeholders available are: - `{date}`: current date (e.g., `2024_11_27`) - `{time}`: current time in UTC format (e.g., `11_07_09_326492`) -Example of using placeholders in a custom run name: +Example of using placeholders: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name_{experiment_name}_{date}_{time}" ) training_pipeline() -``` +``` + +This setup allows for organized and traceable pipeline runs. ================================================== @@ -13078,9 +13141,9 @@ training_pipeline() # Reference Environment Variables in ZenML Configurations -ZenML allows referencing environment variables in both code and configuration files using the placeholder syntax `${ENV_VARIABLE_NAME}`. +ZenML allows referencing environment variables in configurations using the placeholder syntax `${ENV_VARIABLE_NAME}`. -## In-Code Example +## In-code Example ```python from zenml import step @@ -13098,40 +13161,44 @@ extra: combined_value: prefix_${ENV_VAR}_suffix ``` -This feature enhances the flexibility of configurations by allowing dynamic values based on the environment. +This feature enhances flexibility in both code and configuration files. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/configuring-a-pipeline-at-runtime.md === -### Runtime Configuration of a Pipeline +### Runtime Configuration of a Pipeline Run -To run a pipeline with a different configuration, use the `pipeline.with_options` method. You can configure options in two ways: +To configure a pipeline at runtime, use the `pipeline.with_options` method. There are two primary ways to do this: -1. Explicitly: +1. **Explicit Configuration**: ```python - with_options(steps="trainer", parameters={"param1": 1}) + pipeline.with_options(steps={"trainer": {"parameters": {"param1": 1}}}) ``` -2. By passing a YAML file: + +2. **Using a YAML File**: ```python - with_options(config_file="path_to_yaml_file") + pipeline.with_options(config_file="path_to_yaml_file") ``` For more details on these options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/README.md). -**Exception:** To trigger a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. 
More information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). +**Exception**: If triggering a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. More information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). + +For additional resources, see the documentation on [using configuration files](../../use-configuration-files/README.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md === -## Accessing Secrets in ZenML Steps +# Accessing Secrets in ZenML Steps -ZenML secrets consist of **key-value pairs** stored securely in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. For details on configuring and creating secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). +ZenML secrets consist of **key-value pairs** securely stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. To learn about configuring and creating secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). -You can access secrets within your steps using the ZenML `Client` API, enabling you to use secrets for API queries without hard-coding access keys. +Secrets can be accessed within steps using the ZenML `Client` API, enabling secure API queries without hard-coded access keys. + +## Example Code -### Example Code ```python from zenml import step from zenml.client import Client @@ -13147,24 +13214,23 @@ def secret_loader() -> None: ) ``` -### Additional Resources -- [Creating and managing secrets](../../interact-with-secrets.md) +## Additional Resources +- [Create and manage secrets](../../interact-with-secrets.md) - [Secrets backend in ZenML](../../../getting-started/deploying-zenml/secret-management.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md === -### Summary of Parameterization in ZenML Pipelines +### Summary of ZenML Parameterization and Caching Documentation -#### Overview -Steps and pipelines in ZenML can be parameterized similarly to Python functions. Parameters can be either **artifacts** (outputs from other steps) or **explicit parameters** (values provided during invocation). +**Overview**: Steps and pipelines in ZenML can be parameterized like standard Python functions. This allows for flexible configuration and behavior customization. #### Step Parameters -- **Artifacts**: Outputs from previous steps, used to share data. -- **Parameters**: Explicit values that configure step behavior, requiring JSON-serializable types via Pydantic. For non-JSON-serializable objects (e.g., NumPy arrays), use [External Artifacts](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline). +- **Artifacts**: Outputs from other steps, used to share data within a pipeline. +- **Parameters**: Explicitly provided values that are not dependent on other steps. Only JSON-serializable values (via Pydantic) can be passed as parameters. For non-serializable objects (e.g., NumPy arrays), use External Artifacts. 
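
The External Artifacts mechanism mentioned in the bullet above can be sketched as follows — a minimal example assuming the `ExternalArtifact` wrapper exposed by `zenml` for values that no upstream step produces:

```python
import numpy as np
from zenml import ExternalArtifact, pipeline, step

@step
def print_data(data: np.ndarray) -> None:
    # The wrapped value arrives as a regular, tracked input artifact
    print(data)

@pipeline
def printing_pipeline():
    # Wrap the non-JSON-serializable array so it can be passed as a step input
    data = ExternalArtifact(value=np.array([1, 2, 3]))
    print_data(data=data)

printing_pipeline()
```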
-#### Example Code +**Example**: ```python from zenml import step, pipeline @@ -13179,9 +13245,9 @@ def my_pipeline(): ``` #### YAML Configuration -Parameters can be defined in a YAML file, allowing for easy updates without changing the code: +Parameters can be defined in a YAML file, allowing for easy updates without modifying code. -**config.yaml** +**YAML Example**: ```yaml parameters: environment: production @@ -13191,7 +13257,7 @@ steps: input_2: 42 ``` -**Python Code** +**Python Example**: ```python from zenml import step, pipeline @@ -13203,15 +13269,14 @@ def my_step(input_1: int, input_2: int) -> None: def my_pipeline(environment: str): ... -if __name__ == "__main__": +if __name__=="__main__": my_pipeline.with_options(config_paths="config.yaml")() ``` -#### Conflicts in Configuration -Conflicts may arise if parameters are defined in both the YAML file and the code. ZenML will notify you of such conflicts. +#### Conflict Handling +Conflicts may arise when parameters are defined in both the YAML file and the code. ZenML will notify users of such conflicts. -**Example of Conflict** -**config.yaml** +**Conflict Example**: ```yaml parameters: some_param: 24 @@ -13220,34 +13285,31 @@ steps: parameters: input_2: 42 ``` - -**Python Code** ```python @pipeline def my_pipeline(some_param: int): - my_step(input_1=42, input_2=43) - -if __name__ == "__main__": - my_pipeline(23) + my_step(input_1=42, input_2=43) # Conflict with config ``` #### Caching Behavior -- **Parameters**: A step is cached only if all parameter values match previous executions. -- **Artifacts**: A step is cached only if all input artifacts match previous executions. If upstream steps are not cached, the step will always execute. +- **Parameter Caching**: A step is cached only if all parameter values match previous executions. +- **Artifact Caching**: A step is cached only if all input artifacts match previous executions. If upstream steps are not cached, the step will execute again. -#### Additional Resources +### Additional Resources - [Use configuration files to set parameters](use-pipeline-step-parameters.md) -- [How caching works and how to control it](control-caching-behavior.md) +- [How caching works and how to control it](control-caching-behavior.md) + +This summary encapsulates the key points regarding parameterization, configuration, conflict resolution, and caching in ZenML, ensuring clarity and conciseness. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-pipelines-asynchronously.md === -### Summary: Running Pipelines Asynchronously +### Summary: Running Pipelines Asynchronously in ZenML -By default, pipelines run synchronously, meaning the terminal displays logs during the pipeline execution. To run pipelines asynchronously, you can configure the orchestrator either globally or at the pipeline level. +By default, ZenML pipelines run synchronously, meaning the terminal displays logs in real-time as the pipeline executes. To enable asynchronous execution, you can configure the orchestrator in two ways: -1. **Global Configuration**: Set `synchronous=False` in the orchestrator settings. +1. **Pipeline Decorator**: Set `synchronous=False` in the pipeline decorator. ```python from zenml import pipeline @@ -13256,29 +13318,31 @@ By default, pipelines run synchronously, meaning the terminal displays logs duri ... ``` -2. **YAML Configuration**: Modify the YAML config file to set the orchestrator to run asynchronously. +2. 
**YAML Configuration**: Modify the orchestrator settings in a YAML config file. ```yaml settings: orchestrator.: synchronous: false ``` -For more details on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). +For more information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). + +This allows for background execution of pipeline runs, improving workflow efficiency. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/using-a-custom-step-invocation-id.md === -## Custom Step Invocation ID in ZenML +### Summary of Custom Step Invocation ID in ZenML -When invoking a ZenML step in a pipeline, a unique **invocation ID** is assigned. This ID can be used to define the execution order of steps or to fetch information about the invocation post-execution. +When invoking a ZenML step in a pipeline, each step is assigned a unique **invocation ID**. This ID is essential for defining the execution order of steps and for fetching information about the invocation post-execution. -### Key Points: -- The first invocation of a step uses the step name as its ID (e.g., `my_step`). -- Subsequent invocations append a suffix (_2, _3, etc.) to ensure uniqueness (e.g., `my_step_2`). -- A custom invocation ID can be specified by passing an `id` parameter, which must be unique across all invocations in the pipeline. +#### Key Points: +- The first invocation of a step uses the step name as the invocation ID (e.g., `my_step`). +- Subsequent invocations append a suffix (e.g., `my_step_2`, `my_step_3`) to ensure uniqueness. +- Custom invocation IDs can be specified by passing an `id` parameter, which must be unique across all invocations within the pipeline. -### Example Code: +#### Example Code: ```python from zenml import pipeline, step @@ -13293,16 +13357,21 @@ def example_pipeline(): my_step(id="my_custom_invocation_id") # Custom ID ``` +This structure allows for flexible step management and tracking within ZenML pipelines. + ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md === -### Summary of ZenML Pipeline Composition +### Summary of ZenML Pipeline Composition Documentation -ZenML facilitates the reuse of steps between pipelines by allowing the composition of pipelines, which helps avoid code duplication. +**Overview**: ZenML enables the reuse of steps between pipelines to minimize code duplication by composing pipelines. -#### Example Code +**Key Points**: +- **Pipeline Composition**: You can call one pipeline within another, allowing for shared functionality. +- **Visibility**: Only the parent pipeline will be visible in the ZenML dashboard. +**Example Code**: ```python from zenml import pipeline @@ -13315,14 +13384,12 @@ def data_loading_pipeline(mode: str): def training_pipeline(): training_data = data_loading_pipeline(mode="train") model = training_step(data=training_data) - test_data = data_loading_pipeline(mode="test") - evaluation_step(model=model, data=test_data) + evaluation_step(model=model, data=data_loading_pipeline(mode="test")) ``` -In this example, `data_loading_pipeline` is called within `training_pipeline`, effectively treating it as a step in the latter. Only the parent pipeline appears on the dashboard. For triggering a pipeline from another, refer to the advanced usage documentation. 
+**Additional Information**: For details on triggering a pipeline from another, refer to the [advanced usage documentation](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). -#### Additional Resources -- Learn more about orchestrators [here](../../../component-guide/orchestrators/orchestrators.md). +**Learn More**: For more about orchestrators, see the [orchestrators guide](../../../component-guide/orchestrators/orchestrators.md). ================================================== @@ -13330,15 +13397,13 @@ In this example, `data_loading_pipeline` is called within `training_pipeline`, e ### Summary of ZenML Failure and Success Hooks Documentation -#### Overview -Hooks in ZenML allow actions to be performed after the execution of a step, useful for notifications, logging, or resource cleanup. There are two types of hooks: +**Overview**: ZenML provides hooks to execute actions after a step's execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: `on_failure` and `on_success`. + +#### Hook Definitions - **`on_failure`**: Triggered when a step fails. - **`on_success`**: Triggered when a step succeeds. -#### Defining Hooks -Hooks are defined as callback functions accessible within the pipeline repository. The `on_failure` hook can accept a `BaseException` argument to access the exception that caused the failure. - -**Example:** +**Example**: ```python from zenml import step @@ -13367,26 +13432,17 @@ def my_pipeline(...): **Note**: Step-level hooks take precedence over pipeline-level hooks. #### Accessing Step Information -Inside hooks, use `get_step_context()` to access step and pipeline run information. - -**Example:** +Use `get_step_context()` within hooks to access step and pipeline run information: ```python -from zenml import step, get_step_context +from zenml import get_step_context def on_failure(exception: BaseException): context = get_step_context() print(context.step_run.name) - print("Step failed!") - -@step(on_failure=on_failure) -def my_step(some_parameter: int = 1): - raise ValueError("My exception") ``` -#### Using the Alerter Component -Hooks can utilize the Alerter component to send notifications. - -**Example:** +#### Alerter Integration +You can use the Alerter component to notify users of step outcomes: ```python from zenml import get_step_context, Client @@ -13395,7 +13451,7 @@ def on_failure(): Client().active_stack.alerter.post(f"{step_name} just failed!") ``` -**Standard Hooks:** +**Standard Hooks**: ```python from zenml.hooks import alerter_success_hook, alerter_failure_hook @@ -13405,15 +13461,12 @@ def my_step(...): ``` #### OpenAI ChatGPT Failure Hook -This hook generates potential fixes for exceptions using OpenAI's API. Ensure you have the OpenAI integration installed and your API key stored in a ZenML secret. - -**Installation:** +This hook generates potential fixes for exceptions using OpenAI's API. Ensure you have the OpenAI integration installed and your API key stored in a ZenML secret: ```shell zenml integration install openai zenml secret create openai --api_key= ``` - -**Usage:** +Usage in a step: ```python from zenml.integration.openai.hooks import openai_chatgpt_alerter_failure_hook @@ -13422,64 +13475,77 @@ def my_step(...): ... ``` -This hook can provide suggestions to help resolve issues in your code. For GPT-4 users, use `openai_gpt4_alerter_failure_hook`. 
- ### Conclusion -ZenML hooks provide a flexible way to manage actions based on step outcomes, including notifications and error handling, enhancing the robustness of your pipelines. +ZenML hooks enhance pipeline functionality by allowing post-execution actions, integrating with notification systems, and leveraging AI for error handling. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-an-individual-step.md === -## Summary of ZenML Step Execution +# Summary of ZenML Step Execution Documentation -### Running an Individual Step -To execute a single step in ZenML, call the step like a standard Python function. ZenML will create an unlisted pipeline to run the step on the active stack, which can be viewed in the "Runs" tab of the dashboard. +## Running an Individual Step +To execute a single step in ZenML, call the step as a normal Python function. ZenML will create an unlisted pipeline for this step, which won't be associated with any pipeline but can be viewed in the "Runs" tab of the dashboard. ### Example Code ```python from zenml import step import pandas as pd -from sklearn.base import ClassifierMixin from sklearn.svm import SVC +from sklearn.base import ClassifierMixin +from typing import Tuple +from typing_extensions import Annotated @step(step_operator="") -def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: +def svc_trainer( + X_train: pd.DataFrame, + y_train: pd.Series, + gamma: float = 0.001, +) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: + """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc +# Prepare training data X_train = pd.DataFrame(...) y_train = pd.Series(...) + +# Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` -### Running the Step Function Directly +## Running the Step Function Directly To bypass ZenML and run the step function directly, use the `entrypoint(...)` method: +### Example Code ```python +X_train = pd.DataFrame(...) +y_train = pd.Series(...) + model, train_acc = svc_trainer.entrypoint(X_train=X_train, y_train=y_train) ``` ### Default Behavior -To set the default behavior to run steps without ZenML, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will make calling `svc_trainer(...)` execute the underlying function directly. +To set the default behavior to run steps without ZenML, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will allow direct calls to the step function without involving the ZenML stack. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/delete-a-pipeline.md === -### Deleting Pipelines and Pipeline Runs +# Summary of ZenML Pipeline Deletion Documentation +## Deleting a Pipeline To delete a pipeline, use either the CLI or the Python SDK: -#### CLI +### CLI Command ```shell zenml pipeline delete ``` -#### Python SDK +### Python SDK ```python from zenml.client import Client @@ -13488,6 +13554,7 @@ Client().delete_pipeline() **Note:** Deleting a pipeline does not remove associated runs or artifacts. 
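
Because runs survive pipeline deletion, they can be cleaned up separately — a minimal sketch, assuming `list_pipeline_runs` accepts a `pipeline_id` filter in your ZenML version; `<PIPELINE_ID>` is a placeholder:

```python
from zenml.client import Client

client = Client()
# Fetch the runs that belonged to the deleted pipeline and remove them
runs = client.list_pipeline_runs(pipeline_id="<PIPELINE_ID>")
for run in runs.items:
    client.delete_pipeline_run(run.id)
```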
+### Deleting Multiple Pipelines For deleting multiple pipelines with the same prefix, use the following Python script: ```python @@ -13500,28 +13567,24 @@ target_pipeline_ids = [p.id for p in pipelines_list.items] if input("Do you really want to delete these pipelines? (y/n): ").lower() == 'y': for pid in target_pipeline_ids: client.delete_pipeline(pid) - print("Deletion complete") -else: - print("Deletion cancelled") ``` -### Deleting Pipeline Runs - +## Deleting a Pipeline Run To delete a pipeline run, use the CLI or the Python SDK: -#### CLI +### CLI Command ```shell zenml pipeline runs delete ``` -#### Python SDK +### Python SDK ```python from zenml.client import Client Client().delete_pipeline_run() -``` +``` -This documentation provides the necessary commands and scripts for deleting pipelines and their runs efficiently. +This documentation provides essential commands and scripts for managing the deletion of pipelines and their runs in ZenML. ================================================== @@ -13529,7 +13592,7 @@ This documentation provides the necessary commands and scripts for deleting pipe # Control Execution Order of Steps in ZenML -ZenML determines the execution order of pipeline steps based on data dependencies. For example, in the pipeline below, `step_3` relies on outputs from `step_1` and `step_2`, allowing both to run in parallel before `step_3` starts. +ZenML determines the execution order of pipeline steps based on data dependencies. For example, in the following pipeline, `step_3` depends on the outputs of `step_1` and `step_2`, allowing both to run in parallel before `step_3` starts. ```python from zenml import pipeline @@ -13541,7 +13604,7 @@ def example_pipeline(): step_3(step_1_output, step_2_output) ``` -To enforce specific execution constraints, you can use non-data dependencies by specifying invocation IDs. For a single step, use `my_step(after="other_step")`, or for multiple steps, use a list: `my_step(after=["other_step", "other_step_2"])`. Refer to the [documentation](using-a-custom-step-invocation-id.md) for details on invocation IDs. +To enforce specific execution order constraints, you can use non-data dependencies by passing invocation IDs. For instance, to ensure `step_1` runs after `step_2`, use: ```python from zenml import pipeline @@ -13553,33 +13616,22 @@ def example_pipeline(): step_3(step_1_output, step_2_output) ``` -In this example, `step_1` will only start after `step_2` has completed. +This modification ensures that `step_1` only starts after `step_2` has completed. For more details on using custom invocation IDs, refer to the [documentation](using-a-custom-step-invocation-id.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/schedule-a-pipeline.md === -### Summary: Scheduling Pipelines in ZenML +### Summary of ZenML Scheduling Documentation + +This documentation covers how to set, pause, and stop schedules for pipelines in ZenML. Note that scheduling support varies by orchestrator. #### Supported Orchestrators -Not all orchestrators support scheduling. 
The following orchestrators do support it: -- AirflowOrchestrator ✅ -- AzureMLOrchestrator ✅ -- DatabricksOrchestrator ✅ -- HyperAIOrchestrator ✅ -- KubeflowOrchestrator ✅ -- KubernetesOrchestrator ✅ -- VertexOrchestrator ✅ - -Orchestrators that do not support scheduling: -- LocalOrchestrator ⛔️ -- LocalDockerOrchestrator ⛔️ -- SagemakerOrchestrator ⛔️ -- Various SkypilotOrchestrators ⛔️ -- TektonOrchestrator ⛔️ +- **Supported**: Airflow, AzureML, Databricks, HyperAI, Kubeflow, Kubernetes, Vertex. +- **Not Supported**: Local, LocalDocker, Sagemaker, Skypilot (all variants), Tekton. #### Setting a Schedule -To set a schedule for a pipeline, use the `Schedule` class with either cron expressions or human-readable notations: +You can set a schedule using cron expressions or human-readable notations. Here’s a concise code example: ```python from zenml.config.schedule import Schedule @@ -13590,9 +13642,9 @@ from datetime import datetime def my_pipeline(...): ... -# Cron expression example +# Using cron expression schedule = Schedule(cron_expression="5 14 * * 3") -# Human-readable example +# Using human-readable notation schedule = Schedule(start_time=datetime.now(), interval_second=1800) my_pipeline = my_pipeline.with_options(schedule=schedule) @@ -13602,27 +13654,29 @@ my_pipeline() For more scheduling options, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.schedule.Schedule). #### Pausing/Stopping a Schedule -The method to pause or stop a scheduled run depends on the orchestrator. For instance, in Kubeflow, you can use the UI for this purpose. Always refer to your orchestrator's documentation for specific instructions. +The method to pause or stop a scheduled run depends on the orchestrator. For instance, in Kubeflow, you can use the UI for this purpose. Users must consult their specific orchestrator's documentation for detailed steps. -**Important Note:** ZenML schedules the run, but managing the lifecycle of the schedule is the user's responsibility. Running a pipeline with a schedule multiple times creates separate scheduled pipelines with unique names. +**Important Note**: ZenML only schedules runs; managing the lifecycle of these schedules is the user's responsibility. Running a pipeline with a schedule multiple times creates unique scheduled pipelines. #### Additional Resources -For more information on orchestrators, visit [orchestrators.md](../../../component-guide/orchestrators/orchestrators.md). +For more information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/step-output-typing-and-annotation.md === -### Summary of Step Output Typing and Annotation in ZenML +### Summary of ZenML Step Output Typing and Annotation -**Step Outputs Storage**: Outputs from steps are stored in an artifact store. Annotate and name them for clarity. +#### Step Outputs +- Outputs from steps are stored in an artifact store. Annotate and name them for clarity. #### Type Annotations - Type annotations are optional but beneficial: - **Type Validation**: Ensures correct input types from upstream steps. - - **Better Serialization**: With annotations, ZenML selects the appropriate materializer for outputs. Custom materializers can be created if built-in options are insufficient. 
+ - **Serialization**: With annotations, ZenML selects the appropriate materializer for outputs. Custom materializers can be created if built-in options are inadequate. -**Warning**: The built-in `CloudpickleMaterializer` is not production-ready due to compatibility issues across Python versions and potential security risks. +#### Materialization Warning +- The built-in `CloudpickleMaterializer` can serialize any object but is not production-ready due to compatibility issues across Python versions. It may also pose security risks by allowing malicious file uploads. #### Code Examples ```python @@ -13638,22 +13692,23 @@ def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b ``` -- Set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True` to enforce type annotations. - -#### Tuple vs Multiple Outputs -- A step with a tuple literal in the return statement is treated as having multiple outputs. Otherwise, it is a single output of type `Tuple`. +- To enforce type annotations, set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True`. ZenML will raise exceptions for missing annotations. -```python -@step -def my_step() -> Tuple[int, int]: - return 0, 1 # Multiple outputs -``` +#### Tuple vs. Multiple Outputs +- ZenML distinguishes between single output artifacts and multiple outputs based on the return statement: + - A tuple literal (e.g., `return (1, 2)`) indicates multiple outputs. + - Other cases are treated as a single output of type `Tuple`. #### Output Naming -- Default names: `output` for single outputs, `output_0`, `output_1`, etc., for multiple outputs. -- Custom names can be set using `Annotated`: - +- Default output names: + - Single output: `output` + - Multiple outputs: `output_0`, `output_1`, etc. +- Custom names can be set using the `Annotated` type annotation: ```python +from typing_extensions import Annotated +from typing import Tuple +from zenml import step + @step def square_root(number: int) -> Annotated[float, "custom_output_name"]: return number ** 0.5 @@ -13666,55 +13721,48 @@ def divide(a: int, b: int) -> Tuple[ return a // b, a % b ``` -- Without custom names, artifacts are named based on the pipeline and step names. +- If no custom names are provided, artifacts are named using the format `{pipeline_name}::{step_name}::output`. -### Additional Resources -- For more on output annotation: [Return Multiple Outputs](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md) -- For custom data types: [Handle Custom Data Types](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) +#### Additional Resources +- For more on output annotation, see [return-multiple-outputs-from-a-step.md](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md). +- For custom data types, refer to [handle-custom-data-types.md](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-caching-behavior.md === -### ZenML Caching Behavior Summary - -By default, ZenML caches steps in pipelines when the code and parameters remain unchanged. - -#### Caching Control +### ZenML Caching Behavior -- **Step-Level Caching**: - - Use `@step(enable_cache=True)` to enable caching. - - Use `@step(enable_cache=False)` to disable caching, which overrides pipeline-level settings. +By default, ZenML caches steps in pipelines when code and parameters remain unchanged. 
-- **Pipeline-Level Caching**: - - Use `@pipeline(enable_cache=True)` to enable caching for the entire pipeline. +#### Caching Configuration -#### Example Code -```python -@step(enable_cache=True) -def load_data(parameter: int) -> dict: - ... +- **At Step Level**: You can control caching behavior using the `@step` decorator: + ```python + @step(enable_cache=True) # Caches data loading + def load_data(parameter: int) -> dict: + ... -@step(enable_cache=False) -def train_model(data: dict) -> None: - ... + @step(enable_cache=False) # Overrides caching for model training + def train_model(data: dict) -> None: + ... -@pipeline(enable_cache=True) -def simple_ml_pipeline(parameter: int): - ... -``` + @pipeline(enable_cache=True) # Sets caching for the pipeline + def simple_ml_pipeline(parameter: int): + ... + ``` -#### Dynamic Configuration -Caching settings can be modified after initial definition: -```python -my_step.configure(enable_cache=...) -my_pipeline.configure(enable_cache=...) -``` +- **Dynamic Configuration**: Caching settings can be modified after initial setup: + ```python + my_step.configure(enable_cache=...) + my_pipeline.configure(enable_cache=...) + ``` -#### Additional Information -For YAML configuration, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). +#### Important Notes +- Caching occurs only when code and parameters are unchanged. +- For YAML configuration, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). -**Note**: Caching occurs only when both code and parameters remain unchanged. +This summary provides essential details on caching behavior in ZenML, including configuration at both step and pipeline levels, and the ability to modify settings dynamically. ================================================== @@ -13722,10 +13770,9 @@ For YAML configuration, refer to the [configuration files documentation](../../p ### Summary of ZenML Pipeline Documentation -**Overview**: Building pipelines in ZenML involves using the `@step` and `@pipeline` decorators to define steps and combine them into a pipeline. - -#### Code Example +**Overview**: Building pipelines in ZenML is straightforward by using the `@step` and `@pipeline` decorators. +**Code Example**: ```python from zenml import pipeline, step @@ -13742,125 +13789,123 @@ def train_model(data: dict) -> None: @pipeline def simple_ml_pipeline(): - dataset = load_data() - train_model(dataset) + train_model(load_data()) +``` -# Execute the pipeline +**Execution**: Call the pipeline with: +```python simple_ml_pipeline() ``` -#### Execution and Logging -- When the pipeline is executed, it logs the run to the ZenML dashboard, which requires a ZenML server (local or remote) to view the Directed Acyclic Graph (DAG) and associated metadata. +**Logging**: The pipeline execution is logged in the ZenML dashboard, which requires a ZenML server (local or remote). + +**Dashboard Features**: Users can view the Directed Acyclic Graph (DAG) and associated metadata. 
-#### Additional Features -For advanced interactions with your pipeline, refer to the following topics: -- Configure pipeline/step parameters -- Name and annotate step outputs -- Control caching behavior -- Customize step invocation IDs -- Name pipeline runs -- Use failure/success hooks +**Advanced Features**: Additional functionalities include: +- Configuring pipeline/step parameters +- Naming and annotating step outputs +- Controlling caching behavior +- Customizing step invocation IDs +- Naming pipeline runs +- Using failure/success hooks - Hyperparameter tuning -- Attach and fetch metadata within steps and during pipeline composition -- Enable or disable log storing -- Access secrets in a step +- Attaching and fetching metadata within steps and pipelines +- Managing log storage +- Accessing secrets in steps -For more details, consult the respective documentation links provided in the original text. +For more details on these advanced features, refer to the respective documentation links provided in the original text. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md === -### How to Configure the Server Environment +### Configure the Server Environment The ZenML server environment is configured using environment variables, which must be set prior to deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md). +**Note:** This documentation is an older version. For the latest information, please visit the [up-to-date URL](https://docs.zenml.io). + ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md === -### Handling Dependency Conflicts with ZenML +# Handling Conflicting Dependencies in ZenML This documentation addresses common issues with conflicting dependencies when using ZenML alongside other libraries. ZenML is designed to be stack- and integration-agnostic, which can lead to dependency conflicts. -#### Installing Dependencies -Use the command: -```bash -zenml integration install ... -``` -to install dependencies for specific integrations. After installation, verify that requirements are met by running: -```bash -zenml integration list -``` -Look for a green tick symbol next to your desired integrations. +## Installing Dependencies +Use the command `zenml integration install ...` to install dependencies for specific integrations. After installation, verify that all requirements are met by running `zenml integration list` and checking for a green tick symbol. -#### Suggestions for Resolving Dependency Conflicts +## Suggestions for Resolving Dependency Conflicts -1. **Use `pip-compile` for Reproducibility**: - Utilize `pip-compile` from the [pip-tools package](https://pip-tools.readthedocs.io/) to create a static `requirements.txt` file. For more details, refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management). +### Use `pip-compile` +Utilize `pip-compile` from the `pip-tools` package to create a static `requirements.txt` file for reproducibility across environments. For more details, refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management). -2. **Run `pip check`**: - Execute `pip check` to identify any dependency conflicts in your environment. 
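
The `pip-compile` workflow above can be sketched as follows — the commands come from the `pip-tools` package; the `requirements.in` filename is an illustrative assumption:

```bash
pip install pip-tools
# Pin the full dependency tree resolved from your top-level requirements
pip-compile requirements.in --output-file requirements.txt
pip install -r requirements.txt
```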
+### Use `pip check` +Run `pip check` to verify compatibility of your environment's dependencies. This will list any conflicts, which may affect your project. -3. **Known Issues**: - ZenML has strict dependency requirements. For example, it requires `click~=8.0.3`. Using a version greater than 8.0.3 may cause issues. +### Known Dependency Issues +ZenML has strict version requirements for some integrations. For example, it requires `click~=8.0.3` for its CLI, and using a version greater than 8.0.3 may lead to unexpected behaviors. -#### Manual Dependency Management -You can bypass ZenML's integration installation and manually install dependencies, though this is not recommended. The command `zenml integration install ...` effectively runs `pip install ...` for the integration's dependencies. +## Manual Dependency Installation +You can manually install dependencies instead of using ZenML's integration installation, though this is not recommended. The `zenml integration install ...` command effectively runs a `pip install ...` for the specified integration dependencies. -To manually install dependencies, use: +To export integration requirements, use: ```bash -# Export requirements to a file +# Export to a file zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME -# Print requirements to console +# Print to console zenml integration export-requirements INTEGRATION_NAME ``` -After modifying the requirements, if using a remote orchestrator, update the `DockerSettings` object accordingly (details [here](../../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md)). +Adjust these requirements as needed. If using a remote orchestrator, update the `DockerSettings` object accordingly to ensure compatibility. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/README.md === -# Configure Python Environments +# Summary of ZenML Environment Configuration -ZenML deployments involve multiple environments for managing dependencies and configurations. The environments include: +## Overview +ZenML deployments involve multiple environments, including client, server, and execution environments, each with specific roles in managing dependencies and configurations. -1. **Client Environment (Runner Environment)**: - - Where ZenML pipelines are compiled (e.g., in `run.py`). - - Types include local development, CI runner, ZenML Pro runner, and runner images orchestrated by the ZenML server. - - Use a package manager (e.g., `pip`, `poetry`) to manage dependencies, including the ZenML package and required integrations. - - Key steps to start a pipeline: - 1. Compile an intermediate representation using the `@pipeline` function. - 2. Create or trigger pipeline and step build environments if running remotely. - 3. Trigger a run in the orchestrator. - - The `@pipeline` function is called only in this environment, focusing on compile time rather than execution time. +### Client Environment (Runner Environment) +- **Purpose**: Compiles ZenML pipelines, typically in a `run.py` script. +- **Types**: + - Local development + - CI runner in production + - ZenML Pro runner + - Runner image orchestrated by ZenML server +- **Key Steps**: + 1. Compile pipeline using the `@pipeline` function. + 2. Create/trigger pipeline and step build environments if running remotely. + 3. Trigger a run in the orchestrator. 
+- **Note**: The `@pipeline` function is called only in this environment, focusing on compile-time logic. -2. **ZenML Server Environment**: - - A FastAPI application managing pipelines and metadata, including the ZenML Dashboard. - - Install dependencies during ZenML deployment, especially for custom integrations. Refer to the [server configuration guide](./configure-the-server-environment.md) for more details. +### ZenML Server Environment +- **Function**: A FastAPI application managing pipelines and metadata, including the ZenML Dashboard. +- **Dependency Management**: Install dependencies during ZenML deployment, primarily for custom integrations. -3. **Execution Environments**: - - When running locally, the client, server, and execution environments are the same. - - For remote pipelines, ZenML transfers code to the remote orchestrator by building Docker images (execution environments). - - ZenML configures Docker images starting from a base image containing ZenML and Python, adding pipeline dependencies. Follow the [containerize your pipeline guide](../../../how-to/customize-docker-builds/README.md) for Docker image configuration. +### Execution Environments +- **Local Execution**: No distinct execution environment; client, server, and execution are the same. +- **Remote Execution**: ZenML builds Docker images (execution environments) to transfer code to the remote orchestrator. +- **Image Configuration**: Start with a base image containing ZenML and Python, then add pipeline dependencies. Refer to the guide on [containerizing your pipeline](../../../how-to/customize-docker-builds/README.md) for details. -4. **Image Builder Environment**: - - Execution environments are created locally using the Docker client, requiring installation and permissions. - - ZenML provides image builders, a specialized stack component, to build and push Docker images in a different environment. - - If no image builder is configured, ZenML uses the local image builder for consistency. +### Image Builder Environment +- **Default Behavior**: Execution environments are created locally using the local Docker client, requiring Docker installation. +- **Image Builders**: ZenML provides image builders, a stack component for building and pushing Docker images in a specialized environment. If no image builder is configured, the local image builder is used for consistency. -For more details on specific components, refer to the respective guides linked throughout the documentation. +This summary encapsulates the essential details of configuring Python environments in ZenML, focusing on the roles and management of different environments involved in the deployment process. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md === -### Summary of Distributed Training with Hugging Face's Accelerate in ZenML +### Summary: Distributed Training with Hugging Face's Accelerate in ZenML ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, allowing the use of multiple GPUs or nodes. -#### Using 🤗 Accelerate in Steps +#### Using Accelerate in ZenML Steps To enable distributed execution in training steps, use the `run_with_accelerate` decorator: ```python @@ -13877,23 +13922,24 @@ def training_pipeline(some_param: int, ...): training_step(some_param, ...) ``` -The decorator accepts arguments similar to the `accelerate launch` CLI command. 
Key arguments include: -- `num_processes`: Number of processes for distributed training. +The decorator accepts arguments similar to the `accelerate launch` CLI command. Common arguments include: +- `num_processes`: Number of processes for training. - `cpu`: Force training on CPU. - `multi_gpu`: Enable distributed GPU training. - `mixed_precision`: Set mixed precision mode ('no', 'fp16', 'bf16'). #### Important Usage Notes 1. Use `run_with_accelerate` directly on steps with the '@' syntax. -2. Use keyword arguments for calling steps. -3. Misuse raises a `RuntimeError` with guidance. +2. Only keyword arguments are supported for accelerated steps. +3. Misuse raises a `RuntimeError` with usage guidance. #### Environment Configuration -To run Accelerate, ensure the following Docker settings: +To run steps with Accelerate, ensure your environment is properly configured: + +1. **Specify a CUDA-enabled Parent Image**: + Example using a CUDA-enabled PyTorch image: -1. **CUDA-enabled Parent Image**: ```python - from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @@ -13904,6 +13950,8 @@ To run Accelerate, ensure the following Docker settings: ``` 2. **Add Accelerate as a Requirement**: + Ensure Accelerate is installed: + ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", @@ -13916,11 +13964,14 @@ To run Accelerate, ensure the following Docker settings: ``` #### Multi-GPU Training -ZenML's Accelerate integration allows training with multiple GPUs on one or more nodes, enhancing performance for large datasets or complex models. Ensure your training step is wrapped with `run_with_accelerate` and configure the necessary arguments. +ZenML's Accelerate integration supports training on multiple GPUs, enhancing performance for large datasets or complex models. Key steps include: +- Wrapping the training step with `run_with_accelerate`. +- Configuring Accelerate arguments (e.g., `num_processes`, `multi_gpu`). +- Ensuring compatibility of training code with distributed training. -For further assistance, connect via [Slack](https://zenml.io/slack). +For assistance, connect with ZenML support via Slack. -This integration helps scale training processes effectively while maintaining ZenML's structured pipeline benefits. +By leveraging Accelerate, ZenML users can efficiently scale training processes while maintaining the benefits of structured pipelines. ================================================== @@ -13929,10 +13980,10 @@ This integration helps scale training processes effectively while maintaining Ze # Summary of GPU Resource Management in ZenML ## Overview -ZenML allows scaling machine learning pipelines to the cloud, leveraging GPU-backed hardware to enhance performance. This involves configuring resource settings and ensuring the environment is CUDA-enabled. +ZenML allows scaling machine learning pipelines to the cloud, leveraging GPU-backed hardware for enhanced performance. This involves specifying resource requirements and ensuring the container environment is properly configured. ## Specifying Resource Requirements -To allocate resources for specific steps, use `ResourceSettings`: +To allocate resources for resource-intensive steps, use `ResourceSettings`: ```python from zenml.config import ResourceSettings @@ -13943,7 +13994,7 @@ def training_step(...) 
-> ...: # train a model ``` -For orchestrators like Skypilot that do not support `ResourceSettings`, use specific orchestrator settings: +For orchestrators like Skypilot that do not support `ResourceSettings`, use orchestrator-specific settings: ```python from zenml import step @@ -13956,38 +14007,34 @@ def training_step(...) -> ...: # train a model ``` -Refer to each orchestrator's documentation for resource specification support. - ## CUDA-Enabled Container Configuration -To utilize GPU capabilities, ensure your container is CUDA-enabled: +To utilize GPU capabilities, ensure the container is CUDA-enabled: 1. **Specify a CUDA-enabled parent image**: + Example for PyTorch: -```python -from zenml import pipeline -from zenml.config import DockerSettings - -docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") + ```python + from zenml import pipeline + from zenml.config import DockerSettings -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... -``` + docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") -2. **Add ZenML as a pip requirement**: + @pipeline(settings={"docker": docker_settings}) + def my_pipeline(...): + ... + ``` -```python -docker_settings = DockerSettings( - parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", - requirements=["zenml==0.39.1", "torchvision"] -) + For TensorFlow, use `tensorflow/tensorflow:latest-gpu`. -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... -``` +2. **Add ZenML as a pip requirement**: + Example: -Choose images carefully to avoid compatibility issues between local and cloud environments. + ```python + docker_settings = DockerSettings( + parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", + requirements=["zenml==0.39.1", "torchvision"] + ) + ``` ## Resetting CUDA Cache To avoid GPU cache issues, reset the CUDA cache between steps: @@ -14006,92 +14053,100 @@ def training_step(...): # train a model ``` -Use this function judiciously to avoid impacting other processes using the same GPU. - ## Multi-GPU Training ZenML supports training across multiple GPUs on a single node. To implement this: +- Create a script/function for model training logic that runs in parallel across GPUs. +- Call this function from within the ZenML step. -- Create a script for model training that runs in parallel across GPUs. -- Call this script from within the ZenML step. +For further assistance, connect with the ZenML community on Slack. -For assistance, connect with the ZenML community on Slack. +## Additional Resources +- Refer to orchestrator documentation for specific resource support. +- Ensure the orchestrator environment has permissions to pull necessary Docker images. -This summary captures the essential technical details for configuring and utilizing GPU resources in ZenML, ensuring efficient execution of machine learning pipelines. +This summary captures the essential technical details for managing GPU resources in ZenML, allowing for effective cloud-based machine learning pipeline execution. ================================================== === File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md === -### Summary: Creating an External Integration and Contributing to ZenML +# Creating an External Integration for ZenML + +## Overview +ZenML aims to streamline the MLOps landscape by providing numerous integrations and allowing users to implement custom stack components. 
This guide is for those who want to contribute their integrations to the ZenML codebase. -ZenML aims to organize the MLOps landscape by providing numerous integrations with popular tools. This guide outlines how to create a custom integration for ZenML to share with the community. +## Steps to Create an Integration -#### Step 1: Plan Your Integration -Identify the categories your integration belongs to by referring to the [ZenML component categories](../../component-guide/README.md). Note that an integration can span multiple categories (e.g., cloud integrations like AWS, GCP, Azure). +### Step 1: Plan Your Integration +Identify the categories your integration belongs to by referring to the ZenML component guide. Note that one integration can belong to multiple categories. -#### Step 2: Create Stack Component Flavors -Develop individual stack component flavors for each selected category. Test them as custom flavors before packaging. For example, to register a custom orchestrator flavor: +### Step 2: Create Stack Component Flavors +Develop individual stack component flavors corresponding to the selected categories. Test them as custom flavors before packaging. For example, to register a custom orchestrator flavor: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` -Ensure ZenML is initialized at the root of your repository to resolve the flavor class correctly. Verify the registration with: +Ensure ZenML is initialized at the root of your repository for proper resolution. -```shell -zenml orchestrator flavor list -``` +### Step 3: Create an Integration Class +1. **Clone the Repository**: Clone the main ZenML repository and set up your local environment. +2. **Create Integration Directory**: Create a new folder in `src/zenml/integrations/` for your integration, structured as follows: -Refer to the [extensibility documentation](../../component-guide/README.md) for more details. +``` +/src/zenml/integrations/ + / + ├── artifact-stores/ + ├── flavors/ + └── __init__.py +``` -#### Step 3: Create an Integration Class -1. **Clone the ZenML Repository**: Set up your local environment by following the [contributing guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). -2. **Create Integration Directory**: Structure your integration within `src/zenml/integrations//`, including subdirectories for artifact stores and flavors. -3. **Define Integration Name**: Add your integration name in `zenml/integrations/constants.py`: +3. **Define Integration Name**: In `zenml/integrations/constants.py`, add: - ```python - EXAMPLE_INTEGRATION = "" - ``` +```python +EXAMPLE_INTEGRATION = "" +``` 4. **Create Integration Class**: In `src/zenml/integrations//__init__.py`, subclass the `Integration` class: - ```python - from zenml.integrations.constants import - from zenml.integrations.integration import Integration - from zenml.stack import Flavor +```python +from zenml.integrations.constants import +from zenml.integrations.integration import Integration +from zenml.stack import Flavor - class ExampleIntegration(Integration): - NAME = - REQUIREMENTS = [""] +class ExampleIntegration(Integration): + NAME = + REQUIREMENTS = [""] - @classmethod - def flavors(cls) -> List[Type[Flavor]]: - from zenml.integrations. import - return [] + @classmethod + def flavors(cls) -> List[Type[Flavor]]: + from zenml.integrations. 
import + return [] - ExampleIntegration.check_installation() - ``` +ExampleIntegration.check_installation() +``` - Check the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for reference. +Refer to the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for an example. 5. **Import the Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`. -#### Step 4: Create a Pull Request -Submit a [PR](https://github.com/zenml-io/zenml/compare) to ZenML for review by core maintainers. Thank you for contributing! +### Step 4: Create a Pull Request +Submit a PR to the ZenML repository and await review from core maintainers. + +## Conclusion +By following these steps, you can successfully create and contribute an integration to ZenML, enhancing its capabilities within the MLOps ecosystem. ================================================== === File: docs/book/how-to/contribute-to-zenml/README.md === -# Contributing to ZenML +# Contribute to ZenML -Thank you for considering contributing to ZenML! +Thank you for considering contributing to ZenML! We welcome contributions such as new features, documentation improvements, integrations, or bug reports. ## How to Contribute -We welcome contributions such as new features, documentation improvements, integrations, or bug reports. For detailed guidelines on contributing, including adding custom integrations, refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). - -Your contributions are greatly appreciated! +For detailed guidelines on contributing, including adding custom integrations, refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). This guide outlines best practices and conventions followed by ZenML. ================================================== @@ -14099,25 +14154,27 @@ Your contributions are greatly appreciated! ### ZenML Server Upgrade Guide -#### Overview -Upgrading your ZenML server varies based on deployment method. Always refer to the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. It's recommended to upgrade promptly after new versions are released for improvements and fixes. +This documentation outlines how to upgrade your ZenML server based on different deployment methods. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). -#### Upgrade Methods +#### General Upgrade Best Practices +- Upgrade promptly after a new version release to benefit from improvements and fixes. +- Review the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. -##### Docker -1. **Check Data Persistence**: Ensure your data is stored on persistent storage or an external MySQL instance. Consider backing up before upgrading. -2. **Delete Existing Container**: +#### Upgrade via Docker +1. **Check Data Persistence**: Ensure data is stored on persistent storage or an external MySQL instance. Consider backing up data. +2. **Delete the Existing Container**: ```bash docker ps # Find your container ID - docker stop - docker rm + docker stop # Stop the container + docker rm # Remove the container ``` 3. **Deploy New Version**: ```bash docker run -it -d -p 8080:8080 --name zenmldocker/zenml-server: ``` + Find available versions [here](https://hub.docker.com/r/zenmldocker/zenml-server/tags). 
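If the server uses an external MySQL instance, the backup recommended in step 1 can be as simple as a dump taken before the upgrade. A minimal sketch; the host, user, and database name are placeholders, not values from this guide:

```bash
# Hypothetical pre-upgrade backup; substitute your own connection details
mysqldump -h <MYSQL_HOST> -u <MYSQL_USERNAME> -p <DATABASE_NAME> > zenml-backup.sql
```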
-##### Kubernetes with Helm +#### Upgrade via Kubernetes with Helm 1. **Pull Latest Helm Chart**: ```bash git clone https://github.com/zenml-io/zenml.git @@ -14125,7 +14182,7 @@ Upgrading your ZenML server varies based on deployment method. Always refer to t cd src/zenml/zen_server/deploy/helm/ ``` 2. **Reuse or Extract Values**: - If you have a `custom-values.yaml` from the previous installation, use it. If not, extract values: + - Use your existing `custom-values.yaml`, or extract values: ```bash helm -n get values zenml-server > custom-values.yaml ``` @@ -14136,37 +14193,42 @@ Upgrading your ZenML server varies based on deployment method. Always refer to t > **Note**: Avoid changing the container image tag in the Helm chart unless necessary, as it may lead to compatibility issues. -#### Important Notes +#### Important Considerations - **Downgrading**: Not supported and may cause unexpected behavior. - **Python Client Version**: Should match the server version for compatibility. +This summary provides essential steps and considerations for upgrading your ZenML server across different deployment methods. + ================================================== === File: docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md === # Best Practices for Upgrading ZenML +## Overview +This document outlines best practices for upgrading your ZenML server and code to ensure a smooth transition. + ## Upgrading Your Server ### Data Backups - **Database Backup**: Always back up your MySQL database before upgrading to allow rollback if needed. -- **Automated Backups**: Set up daily automated backups using services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. +- **Automated Backups**: Set up daily automated backups using managed services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. ### Upgrade Strategies -- **Staged Upgrade**: For large organizations, use two ZenML server instances (old and new) to migrate services gradually. +- **Staged Upgrade**: Use two ZenML server instances (old and new) to migrate services incrementally. - **Team Coordination**: Coordinate upgrade timing among teams to minimize disruption. -- **Separate ZenML Servers**: Use dedicated servers for different teams to allow flexible upgrade schedules. ZenML Pro supports multi-tenancy for this purpose. +- **Separate ZenML Servers**: For teams needing different upgrade schedules, consider using dedicated ZenML server instances. ### Minimizing Downtime - **Upgrade Timing**: Schedule upgrades during low-activity periods. -- **Avoid Mid-Pipeline Upgrades**: Be cautious of upgrades that may interrupt long-running pipelines. +- **Avoid Mid-Pipeline Upgrades**: Be cautious with upgrades that may interrupt long-running pipelines. ## Upgrading Your Code ### Testing and Compatibility -- **Local Testing**: Test your code locally after upgrading (`pip install zenml --upgrade`) to check for compatibility. -- **End-to-End Testing**: Develop simple tests to ensure compatibility with your pipeline code. Utilize ZenML's [extensive test suite](https://github.com/zenml-io/zenml/tree/main/tests) as a reference. -- **Artifact Compatibility**: Be cautious with pickle-based materializers. Test loading older artifacts with the new version: +- **Local Testing**: Test locally after upgrading (`pip install zenml --upgrade`) and run old pipelines to check compatibility. +- **End-to-End Testing**: Develop simple end-to-end tests to ensure compatibility with your pipeline code. 
Utilize ZenML's [extensive test suite](https://github.com/zenml-io/zenml/tree/main/tests). +- **Artifact Compatibility**: Be cautious with pickle-based materializers. Use version-agnostic methods for critical artifacts. Load older artifacts with: ```python from zenml.client import Client @@ -14176,14 +14238,14 @@ loaded_artifact = artifact.load() ``` ### Dependency Management -- **Python Version**: Ensure your Python version is compatible with the new ZenML version. Refer to the [installation guide](../../getting-started/installation.md). -- **External Dependencies**: Check for compatibility of external dependencies with the new ZenML version, as older versions may no longer be supported. Review the [release notes](https://github.com/zenml-io/zenml/releases). +- **Python Version**: Ensure compatibility of your Python version with the ZenML version. Refer to the [installation guide](../../getting-started/installation.md). +- **External Dependencies**: Check for compatibility of external dependencies with the new ZenML version, as older versions may no longer be supported. ### Handling API Changes - **Changelog Review**: Review the [changelog](https://github.com/zenml-io/zenml/releases) for breaking changes and new syntax. - **Migration Scripts**: Use available [migration scripts](migration-guide/migration-guide.md) for database schema changes. -By following these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server. Adapt these guidelines to your specific environment and infrastructure. +By following these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server and code. Adapt these guidelines to your specific environment and infrastructure. ================================================== @@ -14191,13 +14253,13 @@ By following these best practices, you can minimize risks and ensure a smoother # Best Practices for Using ZenML Server in Production -This guide outlines best practices for deploying ZenML server in production environments, focusing on autoscaling, performance optimization, database scaling, ingress setup, monitoring, and backup strategies. +This guide outlines best practices for setting up a ZenML server in production environments, moving beyond initial testing setups. ## Autoscaling Replicas -To handle larger and longer-running pipelines, set up autoscaling based on your deployment environment: +To handle larger, longer-running pipelines, enable autoscaling based on your deployment environment: ### Kubernetes with Helm -Enable autoscaling in the Helm chart: +Use the following configuration in your Helm chart: ```yaml autoscaling: enabled: true @@ -14207,13 +14269,13 @@ autoscaling: ``` ### ECS (AWS) -1. Access the ECS console. -2. Select your ZenML service. -3. Click "Update Service" and enable autoscaling in the "Service auto scaling - optional" section. +1. Navigate to your service in the ECS console. +2. Click "Update Service." +3. Enable autoscaling and set min/max tasks. ### Cloud Run (GCP) 1. Go to the Cloud Run console. -2. Select your ZenML service and click "Edit & Deploy new Revision." +2. Click "Edit & Deploy new Revision." 3. Set minimum and maximum instances in the "Revision auto-scaling" section. ### Docker Compose @@ -14228,12 +14290,12 @@ Increase server performance by adjusting thread pool size: zenml: threadPoolSize: 100 ``` -Set `ZENML_SERVER_THREAD_POOL_SIZE` for other deployments, and adjust `zenml.database.poolSize` and `zenml.database.maxOverflow` accordingly. 
+Set `ZENML_SERVER_THREAD_POOL_SIZE` for other deployments. Adjust `zenml.database.poolSize` and `zenml.database.maxOverflow` accordingly. ## Scaling the Backing Database -Monitor and scale your database based on: -- **CPU Utilization**: Scale if consistently above 50%. -- **Freeable Memory**: Scale if below 100-200 MB. +Monitor your database for scaling needs based on: +- **CPU Utilization:** Above 50% consistently indicates a need for scaling. +- **Freeable Memory:** Below 100-200 MB may require scaling. ## Setting Up Ingress/Load Balancer Securely expose your ZenML server: @@ -14254,28 +14316,30 @@ Use Application Load Balancers as per [AWS documentation](https://docs.aws.amazo Utilize Cloud Load Balancing following [GCP documentation](https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless). ### Docker Compose -Set up an NGINX reverse proxy for routing. +Set up an NGINX reverse proxy to route traffic. ## Monitoring -Monitor your ZenML server using appropriate tools: +Implement monitoring based on your deployment: ### Kubernetes with Helm -Use Prometheus and Grafana. Example query for CPU utilization: +Use Prometheus and Grafana. A sample query for CPU utilization: ``` sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) ``` ### ECS -Utilize [CloudWatch integration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html) for metrics. +Utilize [CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html) for metrics like CPU and memory utilization. ### Cloud Run -Use [Cloud Monitoring integration](https://cloud.google.com/run/docs/monitoring) for metrics. +Use [Cloud Monitoring](https://cloud.google.com/run/docs/monitoring) for metrics visibility. ## Backups -Implement a backup strategy to protect critical data: +Establish a backup strategy to protect critical data: - Automated backups with a retention period (e.g., 30 days). -- Periodic exports to external storage (e.g., S3, GCS). -- Manual backups before server upgrades. +- Periodic data exports to external storage (e.g., S3, GCS). +- Manual backups before server upgrades. + +For further details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== @@ -14283,42 +14347,41 @@ Implement a backup strategy to protect critical data: # Manage Your ZenML Server -This section provides best practices for upgrading your ZenML server, tips for production use, and troubleshooting guidance. It includes recommended upgrade steps and migration guides for specific version transitions. +This section provides best practices for upgrading and using the ZenML server in production, along with troubleshooting tips. It includes recommended upgrade steps and migration guides for transitioning between specific versions. ### Key Points: -- **Upgrading ZenML Server**: Follow the recommended steps for a smooth upgrade process. -- **Production Use**: Tips for effectively utilizing ZenML in a production environment. -- **Troubleshooting**: Guidance for resolving common issues. -- **Migration Guides**: Instructions for moving between certain ZenML versions. +- **Upgrading ZenML Server**: Follow the recommended procedures for a smooth upgrade. +- **Production Use**: Guidelines for optimal performance and reliability in production environments. +- **Troubleshooting**: Common issues and solutions to maintain server functionality. 
+- **Migration Guides**: Detailed instructions for moving between ZenML versions. -![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) +For visual reference, an image of the ZenML Scarf is included. ================================================== === File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md === -### Troubleshooting Tips for ZenML Deployment +# Troubleshooting ZenML Deployment -This document provides solutions for common issues encountered during ZenML deployment. +## Viewing Logs +To debug ZenML deployment issues, analyze logs based on your deployment method: -#### Viewing Logs - -**Kubernetes:** +### Kubernetes 1. Check running pods: ```bash kubectl -n get pods ``` -2. If pods are not running, view logs for all pods: +2. If pods aren't running, get logs for all pods: ```bash kubectl -n logs -l app.kubernetes.io/name=zenml ``` -3. For specific container logs (either `zenml-db-init` or `zenml`): +3. For specific container logs (use `zenml-db-init` for `Init` state errors): ```bash kubectl -n logs -l app.kubernetes.io/name=zenml -c ``` - Use `--tail` to limit lines or `--follow` for real-time logs. + - Use `--tail` to limit lines or `--follow` for real-time logs. -**Docker:** +### Docker - For `zenml login --local --docker`: ```shell zenml logs -f @@ -14332,24 +14395,22 @@ This document provides solutions for common issues encountered during ZenML depl docker compose -p zenml logs -f ``` -#### Fixing Database Connection Problems - +## Fixing Database Connection Problems Common MySQL connection issues: - **Access Denied**: Check username/password. - **Can't Connect**: Verify host settings. Test connection: ```bash mysql -h -u -p ``` - For Kubernetes, use `kubectl port-forward` to connect locally. +- For Kubernetes, use `kubectl port-forward` to connect to the database locally. -#### Fixing Database Initialization Problems - -If encountering `Revision not found` after migrating versions: +## Fixing Database Initialization Problems +If migrating to an older ZenML version results in `Revision not found` errors: 1. Log in to MySQL: ```bash mysql -h -u -p ``` -2. Drop the database: +2. Drop the existing database: ```sql drop database ; ``` @@ -14357,46 +14418,50 @@ If encountering `Revision not found` after migrating versions: ```sql create database ; ``` -4. Restart Kubernetes pods or Docker container to reinitialize the database. +4. Restart the Kubernetes pods or Docker container to reinitialize the database. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md === -### ZenML User Authentication Overview +# ZenML User Authentication Overview -You can authenticate clients with the ZenML Server using the ZenML CLI via the following command: +Authenticate clients with the ZenML Server using the ZenML CLI and web-based login via the command: ```bash zenml login https://... ``` -This command initiates a browser-based validation process. You can choose to trust the device, which issues a 30-day token, or not, which issues a 24-hour token. To view authorized devices, use: +This command initiates a browser-based validation process. You can choose to trust the device or not: +- **Trust this device**: Issues a 30-day token. +- **Do not trust**: Issues a 24-hour token. 
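Once the browser validation completes, the client is connected. A quick sanity check, with output that varies by deployment:

```bash
# Should report the server the current client is connected to
zenml status
```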
+
+To view authorized devices, use:

```bash
zenml authorized-device list
```

-For detailed information on a specific device, run:
+To inspect a specific device:

```bash
zenml authorized-device describe 
```

-To enhance security, invalidate a token with:
+For added security, invalidate a token with:

```bash
zenml authorized-device lock 
```

-### Summary of Steps:
-1. Execute `zenml login ` to connect to the ZenML server.
-2. Decide whether to trust the device.
-3. List authorized devices with `zenml devices list`.
+### Summary Steps:
+1. Run `zenml login ` to connect to the ZenML server.
+2. Decide on device trust.
+3. List devices with `zenml authorized-device list`.
4. Lock a device with `zenml authorized-device lock `.

-### Important Notice
-Using the ZenML CLI ensures secure interaction with ZenML tenants. Regularly manage device trust levels and revoke access by locking devices when necessary, as each token can potentially access sensitive data and infrastructure.
+### Important Notice:
+Using the ZenML CLI ensures secure interaction with your ZenML tenants. Regularly manage device trust levels and revoke access as necessary to protect data and infrastructure. Each token is a potential access point to sensitive information.

==================================================

=== File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md ===

# ZenML Service Account Authentication

-To authenticate to a ZenML server in non-interactive environments (e.g., CI/CD, serverless functions), create a service account and use an API key.
+To authenticate to a ZenML server in non-interactive environments (e.g., CI/CD, serverless functions), create a service account and use an API key for authentication.

-### Creating a Service Account
-Create a service account and generate an API key:
+## Creating a Service Account
+To create a service account and generate an API key, run:

```bash
zenml service-account create 
```

-The API key is displayed in the output and cannot be retrieved later.
+The API key will be displayed and cannot be retrieved later.

-### Connecting with API Key
+## Connecting with the API Key
You can connect your ZenML client using one of the following methods:

-1. **Using CLI**:
+1. **CLI Method**:
   ```bash
   zenml login https://... --api-key
   ```

-2. **Setting Environment Variables** (recommended for automated environments):
+2. **Environment Variables** (suitable for CI/CD or containerized environments):
   ```bash
   export ZENML_STORE_URL=https://...
   export ZENML_STORE_API_KEY=
   ```

-   After setting these variables, you can interact with the server without running `zenml login`.
+   **Note**: No need to run `zenml login` after setting these variables.
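For example, a CI job can export the variables and then call the CLI directly. A sketch under assumed names; the URL and the injected secret are placeholders from your CI provider, not values defined in this guide:

```bash
# Hypothetical CI step: credentials come from the CI provider's secret store
export ZENML_STORE_URL="https://zenml.example.com"
export ZENML_STORE_API_KEY="${ZENML_API_KEY}"
zenml stack list  # authenticates via the environment, no `zenml login` needed
```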
-### Managing Service Accounts and API Keys -- **List Service Accounts**: - ```bash - zenml service-account list - ``` -- **List API Keys**: - ```bash - zenml service-account api-key list - ``` -- **Describe Service Account**: - ```bash - zenml service-account describe - ``` -- **Describe API Key**: - ```bash - zenml service-account api-key describe - ``` +## Managing Service Accounts and API Keys +To list service accounts and their API keys: +```bash +zenml service-account list +zenml service-account api-key list +``` +To inspect a specific service account or API key: +```bash +zenml service-account describe +zenml service-account api-key describe +``` -### API Key Rotation +## API Key Rotation API keys do not expire, but it's recommended to rotate them regularly: ```bash zenml service-account api-key rotate ``` -To retain the old API key for a specified duration (e.g., 60 minutes): +To retain the old API key for a specified time (e.g., 60 minutes): ```bash zenml service-account api-key rotate --retain 60 ``` -### Deactivating Accounts and Keys +## Deactivating Service Accounts or API Keys To deactivate a service account or API key: ```bash zenml service-account update --active false zenml service-account api-key update --active false ``` -Deactivation takes immediate effect on all workloads. +Deactivation takes immediate effect. -### Summary of Steps +## Summary of Steps 1. Create a service account: `zenml service-account create`. -2. Connect using API key: `zenml login --api-key` or set environment variables. +2. Connect using API key: `zenml login --api-key`. 3. List service accounts: `zenml service-account list`. 4. List API keys: `zenml service-account api-key list`. 5. Rotate API keys: `zenml service-account api-key rotate`. 6. Deactivate accounts/keys: `zenml service-account update` or `zenml service-account api-key update`. -### Security Notice -Regularly rotate API keys and deactivate/delete unused service accounts and keys to protect your data and infrastructure. +### Important Notice +Regularly rotate API keys and deactivate/delete unused service accounts and API keys to secure access to your data and infrastructure. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/README.md === -### Connecting to ZenML +# Connecting to ZenML -Once ZenML is deployed, there are multiple methods to connect to the server. +Once ZenML is deployed, there are multiple methods to connect to it. For detailed deployment instructions, refer to the [production guide](../../../user-guide/production-guide/deploying-zenml.md). @@ -14491,149 +14550,156 @@ For detailed deployment instructions, refer to the [production guide](../../../u === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md === -### Migration Guide: ZenML 0.13.2 to 0.20.0 - -**Last Updated: 2023-07-24** - -ZenML 0.20.0 introduces significant architectural changes that may break compatibility with previous versions. This guide outlines the necessary steps to migrate existing ZenML stacks and pipelines. +# ZenML Migration Guide: Version 0.13.2 to 0.20.0 -#### Key Changes +**Last Updated:** 2023-07-24 -1. **Metadata Store**: ZenML now manages its own Metadata Store. If using remote Metadata Stores, transition to a ZenML server deployment. -2. **ZenML Dashboard**: A new dashboard is available for all ZenML deployments. -3. **Profiles Removal**: ZenML Profiles have been replaced by Projects. Existing Profiles must be manually migrated. -4. 
**Decoupled Configuration**: Stack Component configuration is now separate from implementation, requiring updates for custom components.
-5. **Collaborative Features**: The ZenML server allows sharing of Stacks and Components among users.
+## Overview
+ZenML 0.20.0 introduces significant architectural changes that may not be backwards compatible. This guide outlines the necessary steps to migrate existing ZenML stacks and pipelines to the new version.

-#### Migration Steps
+### Key Changes
+1. **Metadata Store**: ZenML now manages its own Metadata Store, eliminating the need for separate implementations. Existing remote Metadata Stores must be replaced with a ZenML server deployment.
+2. **ZenML Dashboard**: A new dashboard is included for managing ZenML deployments.
+3. **Profiles Removed**: ZenML Profiles have been replaced with Projects. Existing Profiles must be manually migrated.
+4. **Decoupled Configuration**: Stack component configurations are now separate from their implementations. Custom stack components may require updates.
+5. **Collaboration Features**: The new server allows sharing of stacks and components among users.

-1. **Backup Metadata**: Before upgrading, back up all metadata stores.
-2. **Upgrade ZenML**: Use `pip install zenml==0.20.0`.
-3. **Connect to ZenML Server**: If using a server, run `zenml connect`.
-4. **Migrate Pipeline Runs**:
-   - For local SQLite:
-   ```bash
-   zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db
-   ```
-   - For other databases (e.g., MySQL):
-   ```bash
-   zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD
-   ```
+## Migration Steps

-#### New Commands
+### 1. Update ZenML
+Upgrade to the new version (to roll back, reinstall the previous release with `pip install zenml==0.13.2`):
+```bash
+pip install zenml==0.20.0
+```

-- **Deploy Server**: `zenml deploy --aws`
-- **Start Local Server**: `zenml up`
-- **Check Server Status**: `zenml status`
+### 2. Migrate Pipeline Runs
+Use the `zenml pipeline runs migrate` command to transfer existing pipeline run data:
+- Back up your metadata stores before upgrading.
+- Decide on your ZenML deployment model.
+- Connect to your ZenML server if applicable.

-#### Dashboard Access
+**Migration Commands**:
+- For SQLite:
+```bash
+zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db
+```
+- For MySQL:
+```bash
+zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD
+```

-Launch the ZenML Dashboard with:
+### 3. Deploy ZenML Server
+To deploy a local server:
```bash
zenml up
```
-Access it at `http://127.0.0.1:8237`.
-
-#### Profile Migration
+To connect to an existing server:
+```bash
+zenml connect
+```

-To migrate Profiles:
+### 4. Migrate Profiles
+Profiles are deprecated; migrate them to Projects:
1. Update ZenML to 0.20.0.
-2. Connect to the ZenML server.
+2. Connect to your ZenML server.
3. Use:
-   ```bash
-   zenml profile migrate /path/to/profile
-   ```
-
-#### Configuration Changes
+```bash
+zenml profile list
+zenml profile migrate PATH/TO/PROFILE
+```
+*Note: The Dashboard currently only shows the `default` Project.*

-- **Class Renaming**:
+### 5. Configuration Changes
+- **Rename Classes**:
  - `Repository` → `Client`
  - `BaseStepConfig` → `BaseParameters`
-- **New Configuration Paradigm**: Use `BaseSettings` for pipeline configurations, removing previous decorators like `@enable_xxx`.
- -#### Example Migration - -For a step configuration: + +- **New Configuration Method**: + Use the `settings` parameter in decorators: ```python -@step( - experiment_tracker="mlflow_stack_comp_name", - settings={ - "experiment_tracker.mlflow": { - "experiment_name": "name", - "nested": False - } - } -) +@step(settings={"docker": DockerSettings(...)}) +def my_step() -> None: + ... ``` -#### Future Changes +### 6. Pipeline and Step Configuration +- Remove deprecated decorators like `@enable_xxx`. +- Use the new `BaseSettings` class for configurations. -- Potential removal of the secrets manager from the stack. -- Deprecation of `StepContext`. +### 7. Post-Execution Changes +Update post-execution workflows: +```python +from zenml.post_execution import get_pipelines, get_pipeline +``` -#### Reporting Issues +## Future Changes +Upcoming changes may include moving the secrets manager out of the stack and potential deprecation of `StepContext`. +## Reporting Issues For bugs or feature requests, engage with the ZenML community on [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). -This guide provides a concise overview of the migration process to ZenML 0.20.0, ensuring critical information is retained while removing redundancy. +This guide provides a comprehensive overview of the migration process to ensure a smooth transition to ZenML 0.20.0. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md === -### Migration Guide for ZenML +# ZenML Migration Guide -Migrations are required for ZenML releases with breaking changes, specifically for minor version increments (e.g., `0.X` to `0.Y`) and major version increments (e.g., `0.1` to `0.2`). +## Overview +This guide outlines the migration process for ZenML code when upgrading to new versions, particularly when breaking changes are introduced. -#### Release Type Examples -- **No Breaking Changes:** `0.40.2` to `0.40.3` (no migration needed) -- **Minor Breaking Changes:** `0.40.3` to `0.41.0` (migration required) -- **Major Breaking Changes:** `0.39.1` to `0.40.0` (significant changes in code usage) +### Versioning and Migration Types +- **No Breaking Changes**: Upgrades like `0.40.2` to `0.40.3` require no migration. +- **Minor Breaking Changes**: Upgrades such as `0.40.3` to `0.41.0` necessitate consideration of changes. +- **Major Breaking Changes**: Upgrades from `0.39.1` to `0.40.0` involve significant changes in code structure or usage. -#### Major Migration Guides +### Major Migration Guides Follow these guides sequentially if multiple migrations are needed: - [0.13.2 → 0.20.0](migration-zero-twenty.md) - [0.23.0 → 0.30.0](migration-zero-thirty.md) - [0.39.1 → 0.41.0](migration-zero-forty.md) - [0.58.2 → 0.60.0](migration-zero-sixty.md) -#### Release Notes -For minor breaking changes, refer to the official [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for details on changes introduced. +### Release Notes +For minor breaking changes, refer to the official [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for detailed information on changes introduced in each release. + +**Note**: This documentation is for an older version of ZenML. For the latest version, please visit [this up-to-date URL](https://docs.zenml.io). 
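Before picking a guide, it helps to confirm the version you are migrating from; a quick check (the output shown is illustrative):

```bash
zenml version  # prints the installed client version, e.g. 0.39.1
```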
================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md === -### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2 Edition) +### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2) -#### Overview -ZenML has upgraded to Pydantic v2, introducing critical updates that may affect user experience due to stricter validation and dependency changes. Users may encounter new validation errors that were previously unnoticed. +**Overview**: ZenML has upgraded to Pydantic v2, introducing stricter validation and performance improvements. Users may encounter new validation errors due to these changes. -#### Key Dependency Changes +#### Key Dependency Changes: - **SQLModel**: Upgraded from `0.0.8` to `0.0.18` for compatibility with Pydantic v2. -- **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy should consult the [migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). +- **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy should review the [migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). -#### Pydantic v2 Features +#### Pydantic v2 Features: - Enhanced performance using Rust. -- New features in model design, configuration, validation, and serialization. -- For detailed changes, refer to the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/). - -#### Integration Changes -- **Airflow**: Removed dependencies due to Airflow's use of SQLAlchemy v1, allowing ZenML to create Airflow pipelines in a separate environment. Updated docs available [here](../../../component-guide/orchestrators/airflow.md). -- **AWS**: Upgraded `sagemaker` to version `2.172.0` to support `protobuf` 4. -- **Evidently**: Updated to versions `0.4.16` to `0.4.22` for Pydantic v2 compatibility. -- **Feast**: Removed incompatible `redis` dependency, ensuring functionality. -- **GCP**: Upgraded `kfp` dependency to v2, which has no Pydantic dependencies. Functional changes in the vertex step operator may occur. Migration guide [here](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). +- New features in model design, validation, and serialization. Refer to the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/) for details. + +#### Integration Updates: +- **Airflow**: Removed dependencies due to incompatibility with SQLAlchemy v1. Use ZenML to create Airflow pipelines in a separate environment. +- **AWS**: Updated `sagemaker` to version `2.172.0` to support `protobuf` 4. +- **Evidently**: Updated to version `0.4.16` for Pydantic v2 compatibility. +- **Feast**: Removed extra `redis` dependency for compatibility. +- **GCP**: Upgraded `kfp` to v2, which no longer requires Pydantic. Expect functional changes in the vertex step operator. - **Great Expectations**: Updated dependency to `great-expectations>=0.17.15,<1.0` for Pydantic v2 support. -- **Kubeflow**: Similar to GCP, upgraded `kfp` to v2. Migration guide [here](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). -- **MLflow**: Compatible with both Pydantic versions, but may downgrade to v1 due to installation order. Deprecation warnings may appear. -- **Label Studio**: Updated to support Pydantic v2 in its 1.0 release. -- **Skypilot**: Compatibility issues with `azurecli` prevent installation of `skypilot[azure]`. Users should remain on the previous ZenML version until resolved. -- **TensorFlow**: Requires `tensorflow >=2.12.0` for compatibility with `protobuf` 4. 
Issues may arise on Python 3.8; higher Python versions are recommended. -- **Tekton**: Updated to use `kfp` v2, with documentation adjustments. +- **Kubeflow**: Similar to GCP, upgraded `kfp` to v2. +- **MLflow**: Compatible with both Pydantic versions, but may downgrade to v1 if installed incorrectly. Watch for deprecation warnings. +- **Label Studio**: Updated to support Pydantic v2. +- **Skypilot**: Incompatibility with `azurecli` prevents installation of `skypilot[azure]`. Users should remain on the previous ZenML version. +- **TensorFlow**: Requires `tensorflow>=2.12.0` due to dependency changes. Issues may arise with Python 3.8 on Ubuntu. +- **Tekton**: Updated to use `kfp` v2 for compatibility. -#### Important Note -Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with integrations not supporting Pydantic v2. It is advisable to set up a fresh Python environment for a smoother transition. +#### Recommendations: +- Users may face dependency issues upon upgrading to ZenML 0.60.0, especially with integrations not supporting Pydantic v2. It is advisable to set up a fresh Python environment for a smoother transition. + +For more information, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== @@ -14641,11 +14707,13 @@ Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with integra ### Migration Guide: ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 -**Important Note:** Migrating to `0.30.0` involves non-reversible database changes; downgrading to `<=0.23.0` is not possible afterward. If on an older version, first consult the [0.20.0 Migration Guide](migration-zero-twenty.md) to avoid migration issues. +**Important Notes:** +- This documentation is for older ZenML versions. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). +- Migrating to `0.30.0` involves non-reversible database changes; downgrading to `<=0.23.0` is not possible. If on an older version, follow the [0.20.0 Migration Guide](migration-zero-twenty.md) first. -**Key Changes:** +**Key Changes in ZenML 0.30.0:** - The `ml-pipelines-sdk` dependency has been removed. -- Pipeline runs and artifacts are now stored natively in the ZenML database. +- Pipeline runs and artifacts are now stored directly in the ZenML database. **Migration Steps:** 1. Install ZenML 0.30.0: @@ -14653,17 +14721,20 @@ Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with integra pip install zenml==0.30.0 zenml version # Should return 0.30.0 ``` -2. The database migration will occur automatically upon executing any `zenml ...` CLI command after installation. +2. The database migration occurs automatically upon running any `zenml` CLI command after installation. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md === -### Migration Guide: ZenML 0.39.1 to 0.41.0 +# ZenML Migration Guide: Version 0.39.1 to 0.41.0 -ZenML versions 0.40.0 to 0.41.0 introduced a new syntax for defining steps and pipelines. The old syntax is deprecated and will be removed in future versions. +## Overview +ZenML versions 0.40.0 and 0.41.0 introduced a new syntax for defining steps and pipelines. The old syntax is deprecated and will be removed in future releases. 
+ +## Migration Examples -#### Old Syntax Example +### Old Syntax ```python from typing import Optional from zenml.steps import BaseParameters, Output, StepContext, step @@ -14688,7 +14759,7 @@ pipeline_instance = my_pipeline(my_step=step_instance) pipeline_instance.run(schedule=Schedule(...)) ``` -#### New Syntax Example +### New Syntax ```python from typing import Annotated, Optional, Tuple from zenml import get_step_context, pipeline, step @@ -14703,101 +14774,107 @@ def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[Annotated[in def my_pipeline(): my_step(param_1=17) -my_pipeline = my_pipeline.with_options(enable_cache=False) +my_pipeline = my_pipeline.with_options(enable_cache=False, schedule=schedule) my_pipeline() ``` -### Key Changes in Syntax +## Key Changes + +### Defining Steps +- **Old:** Use `BaseParameters` to define parameters. +- **New:** Define parameters directly in the step function or use `pydantic.BaseModel`. + +### Running Steps +- **Old:** Call `step.entrypoint()`. +- **New:** Call the step directly. -1. **Step Definition**: - - Old: Parameters defined using `BaseParameters`. - - New: Parameters are directly defined as function arguments or can use `pydantic.BaseModel`. +### Defining Pipelines +- **Old:** Steps are arguments of the pipeline function. +- **New:** Steps are called directly within the pipeline function. -2. **Pipeline Definition**: - - Old: Steps passed as arguments to the pipeline function. - - New: Steps called directly within the pipeline function. +### Configuring Pipelines +- **Old:** Use `pipeline_instance.configure(...)`. +- **New:** Use `with_options(...)` method. -3. **Running Steps**: - - Old: Used `step.entrypoint()`. - - New: Call the step directly. +### Running Pipelines +- **Old:** Create an instance and call `run(...)`. +- **New:** Call the pipeline directly. -4. **Pipeline Configuration**: - - Old: Configured using `pipeline_instance.configure(...)`. - - New: Use `with_options(...)`. +### Scheduling Pipelines +- **Old:** Specify schedule in `run(...)`. +- **New:** Use `with_options(...)` to set the schedule. -5. **Fetching Outputs**: - - Old: Accessed via `last_run.get_step(...)`. - - New: Accessed via `last_run.steps[...]`. +### Fetching Pipeline Execution Results +- **Old:** Access runs via `get_runs()`. +- **New:** Use `last_run` to access the most recent execution. -6. **Step Execution Order**: - - Old: Used `step.after(...)`. - - New: Use `after` argument in the step call. +### Controlling Step Execution Order +- **Old:** Use `step.after(...)`. +- **New:** Pass `after` argument when calling a step. -7. **Multiple Outputs**: - - Old: Used `Output` class. - - New: Use `Tuple` with optional `Annotated` for custom names. +### Defining Steps with Multiple Outputs +- **Old:** Use `Output` class. +- **New:** Use `Tuple` with optional custom output names. -8. **Accessing Run Information**: - - Old: `StepContext` passed as an argument. - - New: Use `get_step_context()` to access context. +### Accessing Run Information Inside Steps +- **Old:** Pass `StepContext` as an argument. +- **New:** Use `get_step_context()` to access run information. -### Important Notes -- The new syntax is more flexible and concise. -- Existing pipelines and steps using the old syntax will continue to work but should be updated to avoid future issues. -- For further details on parameterizing steps, scheduling pipelines, and fetching metadata, refer to the respective ZenML documentation pages. 
+This guide provides a concise overview of the migration process from ZenML version 0.39.1 to 0.41.0, highlighting the key changes in syntax and functionality. For further details, refer to the ZenML documentation. ================================================== === File: docs/book/how-to/configuring-zenml/configuring-zenml.md === -### Configuring ZenML's Default Behavior +# Configuring ZenML's Default Behavior -This guide outlines methods to configure ZenML's behavior in various situations. +This guide outlines how to configure ZenML's behavior in various situations. -**Key Points:** -- Users can adapt ZenML's settings to suit specific needs. -- Configuration options allow for customization of default behaviors. +### Key Points: +- ZenML allows customization of its default settings. +- Users can adapt ZenML to fit specific workflows and requirements. -For further details, refer to the full documentation. +For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/popular-integrations/skypilot.md === -### Summary of Skypilot with ZenML Documentation +### Summary of Using SkyPilot with ZenML -**Overview**: -The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost efficiency and high GPU availability. +**Overview**: The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost efficiency and high GPU availability. -**Prerequisites**: +#### Prerequisites - Install ZenML SkyPilot integration for your cloud provider: - `zenml integration install skypilot_` -- Docker must be installed and running. -- A remote artifact store and container registry in your ZenML stack. -- A remote ZenML deployment. -- Permissions to provision VMs on your cloud provider. -- A service connector configured for authentication (not required for Lambda Labs). + ```bash + zenml integration install skypilot_ + ``` +- Ensure Docker is running. +- Set up a remote artifact store and container registry. +- Have a remote ZenML deployment. +- Obtain permissions for VM provisioning. +- Configure a service connector for cloud authentication (not required for Lambda Labs). -**Configuration Steps**: +#### Configuration Steps -*For AWS, GCP, Azure*: +**For AWS, GCP, Azure**: 1. Install SkyPilot integration and connectors. 2. Register a service connector with necessary credentials. -3. Register the orchestrator and connect it to the service connector. +3. Register and connect the orchestrator to the service connector. 4. Register and activate a stack with the orchestrator. ```bash zenml service-connector register -skypilot-vm -t --auto-configure -zenml orchestrator register --flavor vm_ +zenml orchestrator register --flavor vm_ zenml orchestrator connect --connector -skypilot-vm zenml stack register -o ... --set ``` -*For Lambda Labs*: -1. Install SkyPilot Lambda integration. -2. Register a secret with your Lambda Labs API key. -3. Register the orchestrator with the API key. -4. Register and activate a stack with the orchestrator. +**For Lambda Labs**: +1. Install the SkyPilot Lambda integration. +2. Register a secret with your API key. +3. Register the orchestrator using the API key secret. +4. Register and activate a stack. 
```bash zenml secret create lambda_api_key --scope user --api_key= @@ -14805,11 +14882,11 @@ zenml orchestrator register --flavor vm_lambda --api_key={{l zenml stack register -o ... --set ``` -**Running a Pipeline**: -After configuration, run any ZenML pipeline using the SkyPilot VM Orchestrator, with each step executing in a Docker container on a provisioned VM. +#### Running a Pipeline +Once configured, run ZenML pipelines with the SkyPilot VM Orchestrator, where each step executes in a Docker container on a provisioned VM. -**Additional Configuration**: -Customize the orchestrator using cloud-specific `Settings` objects: +#### Additional Configuration +Customize the orchestrator with cloud-specific `Settings` objects: ```python from zenml.integrations.skypilot_.flavors.skypilot_orchestrator__vm_flavor import SkypilotOrchestratorSettings @@ -14819,7 +14896,7 @@ skypilot_settings = SkypilotOrchestratorSettings( memory="16", accelerators="V100:2", use_spot=True, - region=, + region= ) @pipeline(settings={"orchestrator": skypilot_settings}) @@ -14828,54 +14905,57 @@ skypilot_settings = SkypilotOrchestratorSettings( Configure resources per step: ```python -high_resource_settings = SkypilotOrchestratorSettings(...) - -@step(settings={"orchestrator": high_resource_settings}) +@step(settings={"orchestrator": SkypilotOrchestratorSettings(...)}) def resource_intensive_step(): ... ``` -For detailed options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). +For more advanced options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). ================================================== === File: docs/book/how-to/popular-integrations/kubernetes.md === -### Summary of ZenML Kubernetes Orchestrator Documentation +### Summary: Deploying ZenML Pipelines on Kubernetes -**Overview**: The ZenML Kubernetes Orchestrator enables deployment of ML pipelines on a Kubernetes cluster without needing to write Kubernetes code, serving as a simpler alternative to tools like Airflow or Kubeflow. +The ZenML Kubernetes Orchestrator enables the execution of ML pipelines on a Kubernetes cluster without requiring Kubernetes code. It serves as a simpler alternative to orchestrators like Airflow or Kubeflow. -**Prerequisites**: +#### Prerequisites +To use the Kubernetes Orchestrator, ensure you have: - ZenML `kubernetes` integration installed: `zenml integration install kubernetes` - Docker and `kubectl` installed -- Remote artifact store and container registry in ZenML stack -- Deployed Kubernetes cluster -- Configured `kubectl` context (optional) +- A remote artifact store and container registry in your ZenML stack +- A deployed Kubernetes cluster +- (Optional) A configured `kubectl` context for the cluster -**Deployment Steps**: -1. **Register the Orchestrator**: - - Using a Service Connector (recommended for cloud-managed clusters): - ```bash - zenml orchestrator register --flavor kubernetes - zenml service-connector list-resources --resource-type kubernetes-cluster -e - zenml orchestrator connect --connector - zenml stack register -o ... --set - ``` +#### Deploying the Orchestrator +You need a Kubernetes cluster to run the orchestrator. Various deployment methods exist, which can be explored in the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md). 
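Before registering anything, it can help to confirm the cluster is reachable from your local environment; a minimal check, assuming `kubectl` already points at the target cluster:

```bash
# Both commands should succeed against the cluster ZenML will use
kubectl config current-context
kubectl get nodes
```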
- - Configuring `kubectl` context: - ```bash - zenml orchestrator register --flavor=kubernetes --kubernetes_context= - zenml stack register -o ... --set - ``` +#### Configuring the Orchestrator +Configuration can be done in two ways: -**Running a Pipeline**: -- Execute the pipeline with: - ```bash - python your_pipeline.py - ``` -This command creates a Kubernetes pod for each pipeline step. Interaction with the pods can be done using `kubectl` commands. +1. **Using a Service Connector** (recommended for cloud-managed clusters): + ```bash + zenml orchestrator register --flavor kubernetes + zenml service-connector list-resources --resource-type kubernetes-cluster -e + zenml orchestrator connect --connector + zenml stack register -o ... --set + ``` + +2. **Using `kubectl` Context**: + ```bash + zenml orchestrator register --flavor=kubernetes --kubernetes_context= + zenml stack register -o ... --set + ``` + +#### Running a Pipeline +To execute a ZenML pipeline with the Kubernetes Orchestrator, run: +```bash +python your_pipeline.py +``` +This command creates a Kubernetes pod for each pipeline step. Use `kubectl` commands to interact with the pods. -For detailed configuration options, refer to the full Kubernetes Orchestrator documentation. +For further details, refer to the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md). ================================================== @@ -14883,12 +14963,12 @@ For detailed configuration options, refer to the full Kubernetes Orchestrator do # Minimal GCP Stack Setup Guide -This guide outlines the steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. +This guide outlines the steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. ## Steps to Set Up ### 1. Choose a GCP Project -Select or create a GCP project in the console. Ensure a billing account is attached. +Select or create a Google Cloud project in the console. Ensure a billing account is attached. ```bash gcloud projects create --billing-project= @@ -14908,14 +14988,14 @@ Create a service account with the following roles: - Storage Object Admin ### 4. Create a JSON Key for the Service Account -Download the JSON key file for the service account. +Generate a JSON key for the service account: ```bash export JSON_KEY_FILE_PATH= ``` ### 5. Create a Service Connector in ZenML -Authenticate ZenML with GCP using the service account. +Authenticate ZenML with GCP using the service account: ```bash zenml integration install gcp \ @@ -14929,7 +15009,7 @@ zenml integration install gcp \ ### 6. Create Stack Components #### Artifact Store -Create a GCS bucket and register it as an artifact store. +Create a GCS bucket and register it as an artifact store: ```bash export ARTIFACT_STORE_NAME=gcp_artifact_store @@ -14938,7 +15018,7 @@ zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i ``` #### Orchestrator -Use Vertex AI as the orchestrator. +Register Vertex AI as the orchestrator: ```bash export ORCHESTRATOR_NAME=gcp_vertex_orchestrator @@ -14947,7 +15027,7 @@ zenml orchestrator connect ${ORCHESTRATOR_NAME} -i ``` #### Container Registry -Register a container registry. +Register a GCP container registry: ```bash export CONTAINER_REGISTRY_NAME=gcp_container_registry @@ -14956,7 +15036,7 @@ zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i ``` ### 7. Create Stack -Register the stack with the created components. 
+Register the stack with the created components: ```bash export STACK_NAME=gcp_stack @@ -14964,33 +15044,33 @@ zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_N ``` ## Cleanup -To delete the project and all associated resources: +To remove all created resources, delete the project: ```bash gcloud project delete ``` ## Best Practices -- **Use IAM and Least Privilege Principle:** Grant minimum permissions necessary for ZenML. -- **Leverage GCP Resource Labeling:** Use labels for better resource management. +- **IAM and Least Privilege**: Grant minimal permissions for security. +- **Resource Labeling**: Use labels for better cost tracking and organization. ```bash gcloud storage buckets update gs://your-bucket-name --update-labels=project=zenml,environment=production ``` -- **Implement Cost Management Strategies:** Monitor spending and set budget alerts. +- **Cost Management**: Monitor spending using GCP’s Cost Management tools and set budget alerts. ```bash gcloud billing budgets create --billing-account=BILLING_ACCOUNT_ID --display-name="ZenML Monthly Budget" --budget-amount=1000 --threshold-rule=percent=90 ``` -- **Implement a Robust Backup Strategy:** Regularly back up data and enable versioning. +- **Backup Strategy**: Regularly back up data and enable versioning on GCS buckets. ```bash gsutil versioning set on gs://your-bucket-name ``` -By following these steps and best practices, you can effectively set up and manage a GCP stack for ZenML projects. +By following these steps and best practices, you can efficiently set up and manage a GCP stack for ZenML projects. ================================================== @@ -14998,33 +15078,37 @@ By following these steps and best practices, you can effectively set up and mana # Azure Stack Setup for ZenML Pipelines -This guide outlines the steps to create a minimal production stack on Azure for running ZenML pipelines. +This guide outlines the steps to set up a minimal production stack on Azure for running ZenML pipelines. ## Prerequisites -- Active Azure account. -- ZenML installed. -- ZenML Azure integration: `zenml integration install azure`. +- Active Azure account +- ZenML installed +- ZenML Azure integration installed: + ```bash + zenml integration install azure + ``` + +## Steps to Set Up Azure Stack -## 1. Set Up Credentials +### 1. Set Up Credentials Create a service principal in Azure: 1. Go to Azure Portal > App Registrations > `+ New registration`. -2. Name your app and register it. -3. Note the Application ID and Tenant ID. -4. In `Certificates & secrets`, create a client secret and save the secret value. +2. Register and note the Application ID and Tenant ID. +3. Under `Certificates & secrets`, create a client secret and note its value. -## 2. Create Resource Group and AzureML Instance -1. Go to Azure Portal > Resource Groups > `+ Create`. -2. After creation, go to the new resource group's overview and click `+ Create`. -3. Search for and select `Azure Machine Learning` to create an AzureML workspace. Consider creating a container registry as well. +### 2. Create Resource Group and AzureML Instance +1. In the Azure Portal, navigate to `Resource Groups` and click `+ Create`. +2. After creating the resource group, click `+ Create` to add an Azure Machine Learning workspace. Optionally, create a container registry. -## 3. Create Role Assignments +### 3. Create Role Assignments 1. In your resource group, go to `Access control (IAM)` > `+ Add role assignment`. -2. 
Assign the following roles: `AzureML Compute Operator`, `AzureML Data Scientist`, and `AzureML Registry User`. -3. Select your registered app by its ID for each role. +2. Assign the following roles to your registered app: + - AzureML Compute Operator + - AzureML Data Scientist + - AzureML Registry User -## 4. Create a Service Connector +### 4. Create a Service Connector Register a ZenML Azure Service Connector: - ```bash zenml service-connector register azure_connector --type azure \ --auth-method service-principal \ @@ -15033,20 +15117,18 @@ zenml service-connector register azure_connector --type azure \ --client_id= ``` -## 5. Create Stack Components -### Artifact Store (Azure Blob Storage) +### 5. Create Stack Components +#### Artifact Store (Azure Blob Storage) 1. Create a container in your AzureML workspace's storage account. 2. Register the artifact store: - ```bash zenml artifact-store register azure_artifact_store -f azure \ --path= \ --connector azure_connector ``` -### Orchestrator (AzureML) +#### Orchestrator (AzureML) Register the orchestrator: - ```bash zenml orchestrator register azure_orchestrator -f azureml \ --subscription_id= \ @@ -15055,19 +15137,17 @@ zenml orchestrator register azure_orchestrator -f azureml \ --connector azure_connector ``` -### Container Registry (Azure Container Registry) +#### Container Registry (Azure Container Registry) Register the container registry: - ```bash zenml container-registry register azure_container_registry -f azure \ --uri= \ --connector azure_connector ``` -## 6. Create a Stack +### 6. Create a Stack Register the Azure ZenML stack: - -```shell +```bash zenml stack register azure_stack \ -o azure_orchestrator \ -a azure_artifact_store \ @@ -15075,9 +15155,8 @@ zenml stack register azure_stack \ --set ``` -## 7. Run a Pipeline -Define and run a ZenML pipeline: - +### 7. Run a Pipeline +Define and run a simple ZenML pipeline: ```python from zenml import pipeline, step @@ -15092,37 +15171,37 @@ def azure_pipeline(): if __name__ == "__main__": azure_pipeline() ``` - Save as `run.py` and execute: - -```shell +```bash python run.py ``` -## Next Steps +### Next Steps - Explore ZenML's production guide for best practices. - Investigate ZenML integrations with other tools. -- Join the ZenML community for support and networking. +- Join the ZenML community for support. + +For the latest documentation, visit [ZenML Docs](https://docs.zenml.io). ================================================== === File: docs/book/how-to/popular-integrations/kubeflow.md === -### Kubeflow Orchestrator Overview +# Kubeflow Orchestrator with ZenML -The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without writing Kubeflow code. +The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without needing to write Kubeflow code. -### Prerequisites +## Prerequisites To use the Kubeflow Orchestrator, ensure you have: - ZenML `kubeflow` integration installed: `zenml integration install kubeflow` - Docker installed and running - (Optional) `kubectl` installed - A Kubernetes cluster with Kubeflow Pipelines - A remote artifact store and container registry in your ZenML stack -- A remote ZenML server deployed +- A remote ZenML server deployed in the cloud - (Optional) Kubernetes context name for the remote cluster -### Configuring the Orchestrator +## Configuring the Orchestrator You can configure the orchestrator in two ways: 1. 
-### Running a Pipeline
-To run a ZenML pipeline using the Kubeflow Orchestrator:
+## Running a Pipeline
+To run a ZenML pipeline:
```bash
python your_pipeline.py
```
-This creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI.
+This command creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI.

-### Additional Configuration
+## Additional Configuration
Further configure the orchestrator with `KubeflowOrchestratorSettings`:
```python
from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings
@@ -15163,8 +15242,8 @@ kubeflow_settings = KubeflowOrchestratorSettings(

@pipeline(settings={"orchestrator": kubeflow_settings})
```

-### Multi-Tenancy Deployments
-For multi-tenant setups, register the orchestrator with the `kubeflow_hostname`:
+## Multi-Tenancy Deployments
+For multi-tenant setups, specify the `kubeflow_hostname`:
```bash
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubeflow --kubeflow_hostname=<KUBEFLOW_HOSTNAME>
```
@@ -15179,7 +15258,7 @@ kubeflow_settings = KubeflowOrchestratorSettings(

@pipeline(settings={"orchestrator": kubeflow_settings})
```

-For more details, refer to the full [Kubeflow Orchestrator documentation](../../component-guide/orchestrators/kubeflow.md).
+For more details, refer to the [full Kubeflow Orchestrator documentation](../../component-guide/orchestrators/kubeflow.md).

==================================================

=== File: docs/book/how-to/popular-integrations/aws.md ===

# AWS Stack Setup for ZenML Pipelines

-## Overview
-This guide provides steps to create a minimal production stack on AWS for running ZenML pipelines, including setting up IAM roles, service connectors, and stack components.
+This guide outlines the steps to create a minimal AWS stack for running ZenML pipelines. It includes setting up IAM roles, service connectors, and stack components.

## Prerequisites
- Active AWS account with permissions for S3, SageMaker, ECR, and ECS.
- ZenML installed.
-- AWS CLI installed and configured.
+- AWS CLI configured with your credentials.

## Steps

### 1. Set Up Credentials and Local Environment
-1. **Choose AWS Region**: Select the region for your ZenML stack (e.g., `us-east-1`).
+1. **Choose AWS Region**: Select a region (e.g., `us-east-1`).
2. **Create IAM Role**:
   - Get your AWS account ID:
     ```shell
@@ -15230,7 +15308,8 @@ This guide provides steps to create a minimal production stack on AWS for runnin
     aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
     aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
     ```
-3. **Install ZenML AWS Integration**:
+
+3. **Install ZenML Integrations**:
   ```shell
   zenml integration install aws s3 -y
   ```
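+
+The role-creation command elided by the hunk above needs a trust policy before the `attach-role-policy` calls can target it. A hedged sketch of what that step generally looks like; the trust policy below is an assumption, not taken from the original guide:
+```shell
+# Allow SageMaker to assume the zenml-role that the attach-role-policy commands refer to.
+aws iam create-role --role-name zenml-role --assume-role-policy-document '{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {"Service": "sagemaker.amazonaws.com"},
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}'
+```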
@@ -15259,7 +15338,7 @@ zenml service-connector register aws_connector \
```

#### Orchestrator (SageMaker Pipelines)
-1. Create a SageMaker domain (if not already created).
+1. Create a SageMaker domain (follow AWS documentation).
2. Register the SageMaker orchestrator:
   ```shell
   zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region=<REGION> --execution_role=<ROLE_ARN>
   ```
@@ -15275,7 +15354,7 @@ zenml service-connector register aws_connector \
   zenml container-registry register ecr-registry --flavor=aws --uri=<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com --connector aws_connector
   ```

-### 4. Create the ZenML Stack
+### 4. Create the Stack
```shell
export STACK_NAME=aws_stack

zenml stack register ${STACK_NAME} -o sagemaker-orchestrator \
@@ -15284,7 +15363,7 @@ zenml stack register ${STACK_NAME} -o sagemaker-orchestrator \
```

### 5. Run a Pipeline
-Define and run a simple ZenML pipeline:
+Define and run a ZenML pipeline:
```python
from zenml import pipeline, step

@@ -15299,13 +15378,13 @@ def aws_sagemaker_pipeline():

if __name__ == "__main__":
    aws_sagemaker_pipeline()
```
-Execute:
+Run the pipeline:
```shell
python run.py
```

## Cleanup
-To avoid charges, delete AWS resources:
+To avoid charges, delete resources:
```shell
# Empty, then delete the S3 bucket
aws s3 rm s3://your-bucket-name --recursive
aws s3 rb s3://your-bucket-name

@@ -15317,52 +15396,58 @@ aws sagemaker delete-domain --domain-id <DOMAIN_ID>

# Delete ECR repository
aws ecr delete-repository --repository-name zenml --force

-# Detach policies from IAM role
+# Detach policies and delete IAM role
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
-
-# Delete IAM role
aws iam delete-role --role-name zenml-role
```

## Conclusion
-This guide outlines the steps to set up an AWS stack for ZenML, enabling scalable and efficient machine learning pipelines. Key components include IAM roles, S3 for artifact storage, SageMaker for orchestration, and ECR for container management. Follow best practices for security and cost management to optimize your AWS stack usage.
+This guide provides a streamlined process to set up an AWS stack for ZenML, enabling scalable and efficient machine learning pipeline management. Key steps include IAM role creation, service connector registration, and stack component configuration. For best practices, consider IAM role management, resource tagging, cost management, and backup strategies.

==================================================

=== File: docs/book/how-to/popular-integrations/mlflow.md ===

-### MLflow Experiment Tracker with ZenML
+# MLflow Experiment Tracker with ZenML

-The ZenML MLflow Experiment Tracker integration allows for logging and visualizing pipeline information using MLflow without additional code.
+## Overview
+The MLflow Experiment Tracker integration in ZenML allows logging and visualization of pipeline step information using MLflow without additional code.

-#### Prerequisites
-- Install ZenML MLflow integration:
+## Prerequisites
+- Install the ZenML MLflow integration:
  ```bash
  zenml integration install mlflow -y
  ```
-- Set up an MLflow deployment (local or remote with proxied artifact storage).
+- Set up an MLflow deployment (local or remote).

-#### Configuring the Experiment Tracker
-1. **Local Deployment**:
-   - Suitable for local ZenML runs; no extra configuration needed.
+## Configuring the Experiment Tracker
+### Deployment Scenarios
+1. **Local Deployment**: Uses a local artifact store. No extra configuration needed.
   ```bash
   zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow
   zenml stack register custom_stack -e mlflow_experiment_tracker ... --set
   ```
-2. **Remote Deployment**:
-   - Requires authentication (ZenML secrets recommended).
+2. **Remote Deployment**: Requires authentication. Recommended to use ZenML secrets.
+   Create a secret:
   ```bash
   zenml secret create mlflow_secret --username=<USERNAME> --password=<PASSWORD>
-  zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ...
+  ```
+   Register the experiment tracker:
+   ```bash
+   zenml experiment-tracker register mlflow \
+       --flavor=mlflow \
+       --tracking_username={{mlflow_secret.username}} \
+       --tracking_password={{mlflow_secret.password}} \
+       ...
   ```

-#### Using the Experiment Tracker
+## Using the Experiment Tracker
To log information in a pipeline step:
-1. Enable the experiment tracker with the `@step` decorator.
-2. Use MLflow's logging capabilities.
+1. Enable the experiment tracker via the `@step` decorator.
+2. Use MLflow's logging capabilities.
```python
import mlflow

@@ -15374,21 +15459,25 @@ To log information in a pipeline step:
    mlflow.log_artifact(...)
```

-#### Viewing Results
-Retrieve the MLflow experiment URL for a ZenML run:
+## Viewing Results
+Retrieve the MLflow experiment URL for a ZenML run:
```python
+from zenml.client import Client
+
+client = Client()
last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
-tracking_url = last_run.get_step("<STEP_NAME>").run_metadata["experiment_tracker_url"].value
+trainer_step = last_run.get_step("<STEP_NAME>")
+tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
```

-#### Additional Configuration
-Further configure the experiment tracker using `MLFlowExperimentTrackerSettings`:
+## Additional Configuration
+Configure the experiment tracker using `MLFlowExperimentTrackerSettings`:
```python
from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings

mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"})

-@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>", settings={"experiment_tracker": mlflow_settings})
+@step(
+    experiment_tracker="<EXPERIMENT_TRACKER_NAME>",
+    settings={"experiment_tracker": mlflow_settings}
+)
```

For more details, refer to the [full MLflow Experiment Tracker documentation](../../component-guide/experiment-trackers/mlflow.md).
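+
+Putting the pieces together, a minimal tracked pipeline might look like the following sketch. It assumes scikit-learn is installed and that the tracker was registered as `mlflow_experiment_tracker` above; the metric name and model choice are illustrative:
+```python
+import mlflow
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+from zenml import pipeline, step
+
+
+@step(experiment_tracker="mlflow_experiment_tracker")
+def train_model() -> LogisticRegression:
+    X, y = load_iris(return_X_y=True)
+    mlflow.autolog()  # capture sklearn params and metrics automatically
+    model = LogisticRegression(max_iter=200).fit(X, y)
+    mlflow.log_metric("train_accuracy", model.score(X, y))
+    return model
+
+
+@pipeline
+def training_pipeline():
+    train_model()
+
+
+if __name__ == "__main__":
+    training_pipeline()
+```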
==================================================

=== File: docs/book/how-to/popular-integrations/README.md ===

# ZenML Integrations Guide

-ZenML integrates seamlessly with popular tools in the data science and machine learning ecosystem. This guide provides instructions for integrating ZenML with these tools.
+ZenML integrates with popular tools in the data science and machine learning ecosystem. This guide provides instructions for seamless integration.

-**Key Points:**
-- ZenML is designed for compatibility with various data science and machine learning tools.
-- The integration process is straightforward, enhancing workflow efficiency.
+## Key Points
+- ZenML is designed for compatibility with various tools.
+- The integration process is straightforward and user-friendly.

-For specific integration examples and detailed steps, refer to the respective sections in the documentation.
+For detailed integration steps, refer to the specific tool documentation.

==================================================

=== File: docs/book/how-to/trigger-pipelines/use-templates-python.md ===

### ZenML Template Creation and Execution

-**Feature Note:** This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access.
+**Overview**: This documentation outlines how to create and run templates using the ZenML Python SDK. Note that this feature is exclusive to ZenML Pro users.

#### Create a Template
-To create a run template using the ZenML client, ensure you select a pipeline run executed on a remote stack:
-
-```python
-from zenml.client import Client
-
-run = Client().get_pipeline_run(<RUN_NAME_OR_ID>)
-Client().create_run_template(name=<TEMPLATE_NAME>, deployment_id=run.deployment_id)
-```
+1. **Using an Existing Pipeline Run**:
+   ```python
+   from zenml.client import Client
+
+   run = Client().get_pipeline_run(<RUN_NAME_OR_ID>)
+   Client().create_run_template(
+       name=<TEMPLATE_NAME>,
+       deployment_id=run.deployment_id
+   )
+   ```
+   - Ensure the pipeline run was executed on a **remote stack**.

-Alternatively, create a template directly from your pipeline definition while using a remote stack:
-
-@pipeline
-def my_pipeline():
-    ...
-
-template = my_pipeline.create_run_template(name=<TEMPLATE_NAME>)
+2. **From Pipeline Definition**:
+   ```python
+   from zenml import pipeline
+
+   @pipeline
+   def my_pipeline():
+       ...
+
+   template = my_pipeline.create_run_template(name=<TEMPLATE_NAME>)
+   ```

#### Run a Template
-To execute a template, retrieve it and trigger the pipeline:
-
+To execute a created template:
```python
from zenml.client import Client

@@ -15450,15 +15540,16 @@ config = template.config_template

# [OPTIONAL] Modify the config here

-Client().trigger_pipeline(template_id=template.id, run_configuration=config)
+Client().trigger_pipeline(
+    template_id=template.id,
+    run_configuration=config,
+)
```
-
-The new run will execute on the same stack as the original.
+- The new run will execute on the same stack as the original.
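+
+For the optional modification step above, the `config_template` is a `PipelineRunConfiguration`-shaped object whose fields can be adjusted before triggering. A hedged one-line sketch; the `"trainer"` step name and `learning_rate` parameter are assumptions about your pipeline, not part of the original docs:
+```python
+# Override a single step parameter before triggering the template.
+config.steps["trainer"].parameters["learning_rate"] = 0.01
+```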
#### Advanced Usage: Run a Template from Another Pipeline
-You can trigger one pipeline from another using the following structure:
-
+You can trigger one pipeline from another:
```python
import pandas as pd
from zenml import pipeline, step

@@ -15492,34 +15583,38 @@ def loads_data_and_triggers_training():
    trigger_pipeline(df)  # Triggers the training pipeline
```

-For more details, refer to the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) documentation, as well as information on Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md).
+**Additional Resources**:
+- For more details on `PipelineRunConfiguration` and `trigger_pipeline`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client).
+- Learn about Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md).

==================================================

=== File: docs/book/how-to/trigger-pipelines/use-templates-cli.md ===

-### ZenML CLI: Create a Run Template
+### Create a Template Using the ZenML CLI
+
+**Note:** This is an older version of the ZenML documentation. For the latest version, visit [ZenML Documentation](https://docs.zenml.io).

-**Feature Access**: This feature is available only in ZenML Pro. [Sign up here](https://cloud.zenml.io) for access.
+**Feature Availability:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access.
+
+#### Command to Create a Run Template

-**Command to Create a Template**: Use the following command to create a run template with the ZenML CLI:
```bash
zenml pipeline create-run-template <PIPELINE_SOURCE_PATH> --name=<TEMPLATE_NAME>
```

-- `<PIPELINE_SOURCE_PATH>`: Should be `run.my_pipeline` if defined in `run.py`.
+- `<PIPELINE_SOURCE_PATH>`: Use `run.my_pipeline` if your pipeline is named `my_pipeline` in `run.py`.

-**Requirements**:
-- An active **remote stack** is required. You can specify one using the `--stack` option.
+**Important:** Ensure you have an active **remote stack** when executing this command, or specify one using the `--stack` option.

==================================================

=== File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md ===

-### ZenML Dashboard: Creating and Running Templates
+### ZenML Dashboard Template Management

-**Feature Availability**: This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access.
+**Overview**: This documentation describes how to create and run templates in the ZenML Dashboard. Note that this feature is exclusive to ZenML Pro users.

#### Creating a Template
1. Navigate to a pipeline run executed on a remote stack (requires a remote orchestrator, artifact store, and container registry).
@@ -15532,17 +15627,19 @@ zenml pipeline create-run-template <PIPELINE_SOURCE_PATH> --name=<TEMPLATE_NAME>

You will be directed to the `Run Details` page, where you can:
- Upload a `.yaml` configuration file or
-- Modify the configuration using the editor.
+- Modify configurations using the editor.
+
+After initiating the run, it will execute on the same stack as the original run.

-Upon running the template, a new run will initiate on the same stack as the original run.
+For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io).

==================================================

=== File: docs/book/how-to/trigger-pipelines/README.md ===

-### Trigger a Pipeline in ZenML
+### Triggering a Pipeline in ZenML

-ZenML allows you to trigger a pipeline using a simple function decorated with `@pipeline`. Below is an example of a basic pipeline that loads data and trains a model:
+In ZenML, pipelines can be triggered in various ways. The simplest method is to use a pipeline function directly:

```python
from zenml import step, pipeline

@@ -15566,62 +15663,47 @@ if __name__ == "__main__":

### Run Templates

-**Run Templates** are pre-defined, parameterized configurations for executing ZenML pipelines. They can be customized and executed from the ZenML dashboard or via the Client/REST API. This feature is exclusive to ZenML Pro users.
+**Run Templates** are pre-defined, parameterized configurations for ZenML pipelines, allowing easy execution from the ZenML dashboard or via the Client/REST API. They serve as customizable blueprints for pipeline runs. This feature is exclusive to ZenML Pro users.

-For more details, refer to the following resources:
-- Use templates: Python SDK
-- Use templates: CLI
-- Use templates: Dashboard
-- Use templates: REST API
-
-![Working with Templates](../../../.gitbook/assets/run-templates.gif)
-
-This documentation provides a concise overview of triggering pipelines and utilizing Run Templates in ZenML.
+For more details on using templates, refer to the following resources:
+- [Use templates: Python SDK](use-templates-python.md)
+- [Use templates: CLI](use-templates-cli.md)
+- [Use templates: Dashboard](use-templates-dashboard.md)
+- [Use templates: REST API](use-templates-rest-api.md)
==================================================

=== File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md ===

-### ZenML REST API: Creating and Running a Template
-
-**Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access.
-
-### Triggering a Pipeline via REST API
+### ZenML REST API: Running a Template

-To trigger a pipeline, ensure you have at least one run template created. Follow these steps:
+**Note:** This documentation refers to an older version of ZenML. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). This feature is available only for ZenML Pro users; sign up [here](https://cloud.zenml.io).

-1. **Get Pipeline ID:**
-   - Call: `GET /pipelines?name=<PIPELINE_NAME>`
-   - Response includes `<PIPELINE_ID>`.
-
-2. **Get Template ID:**
-   - Call: `GET /run_templates?pipeline_id=<PIPELINE_ID>`
-   - Response includes `<TEMPLATE_ID>`.
-
-3. **Run the Pipeline:**
-   - Call: `POST /run_templates/<TEMPLATE_ID>/runs` with optional `PipelineRunConfiguration` in the body.
-
-### Example Workflow
+#### Prerequisites
+To trigger a pipeline via the REST API, you must have at least one run template created for that pipeline and know the pipeline name.

-To re-run a pipeline named `training`, execute the following commands:
+#### Steps to Trigger a Pipeline

-1. **Retrieve Pipeline ID:**
+1. **Get Pipeline ID**
+   - Call the endpoint to retrieve the pipeline ID:
   ```shell
   curl -X 'GET' \
-    '<YOUR_SERVER_URL>/api/v1/pipelines?hydrate=false&name=training' \
+    '<YOUR_SERVER_URL>/api/v1/pipelines?name=<PIPELINE_NAME>' \
     -H 'accept: application/json' \
     -H 'Authorization: Bearer <YOUR_TOKEN>'
   ```

-2. **Retrieve Template ID:**
+2. **Get Template ID**
+   - Use the pipeline ID to fetch available run templates:
   ```shell
   curl -X 'GET' \
-    '<YOUR_SERVER_URL>/api/v1/run_templates?hydrate=false&pipeline_id=<PIPELINE_ID>' \
+    '<YOUR_SERVER_URL>/api/v1/run_templates?pipeline_id=<PIPELINE_ID>' \
     -H 'accept: application/json' \
     -H 'Authorization: Bearer <YOUR_TOKEN>'
   ```

-3. **Trigger the Pipeline:**
+3. **Trigger the Pipeline**
+   - Use the template ID to run the pipeline with a specified configuration:
   ```shell
   curl -X 'POST' \
     '<YOUR_SERVER_URL>/api/v1/run_templates/<TEMPLATE_ID>/runs' \
@@ -15633,9 +15715,10 @@ To re-run a pipeline named `training`, execute the following commands:
     }'
   ```

-A successful response indicates the pipeline has been re-triggered with the specified configuration.
+A successful response indicates that the pipeline has been re-triggered with the new configuration.

-For details on obtaining a bearer token, refer to the [API reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically).
+#### Additional Information
+For details on obtaining a bearer token for API access, refer to the [API Reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically).

==================================================

=== File: docs/book/how-to/infrastructure-deployment/README.md ===

# Infrastructure and Deployment Summary

-This section details the infrastructure setup and deployment processes for ZenML. Key points include:
+This section details the infrastructure setup and deployment processes in ZenML. Key components include:
-1. **Infrastructure Requirements**:
-   - ZenML can be deployed on various cloud platforms (AWS, GCP, Azure) or on-premises.
-   - Ensure the environment meets the necessary hardware and software specifications.
+1. **Infrastructure Requirements**: ZenML can be deployed on various cloud providers (AWS, GCP, Azure) or on-premises. Ensure compatibility with Kubernetes for orchestration.

2. **Deployment Options**:
-   - **Managed Services**: Utilize cloud providers' managed services for ease of use and scalability.
-   - **Self-Hosted**: Deploy ZenML on your own servers for greater control and customization.
+   - **Managed Services**: Utilize cloud-native services for ease of setup and maintenance.
+   - **Self-Managed**: Deploy on your own Kubernetes cluster for greater control.

-3. **Configuration**:
-   - Define configuration files (e.g., `zenml.yaml`) to specify pipeline components, storage, and execution environments.
-   - Use environment variables for sensitive information (e.g., API keys).
-
-4. **Installation**:
-   - Install ZenML via pip:
+3. **Installation**:
+   - Use `pip` to install ZenML:
     ```bash
     pip install zenml
     ```
-   - Initialize a new ZenML repository:
+
+4. **Configuration**:
+   - Configure your ZenML environment with:
     ```bash
     zenml init
     ```
+   - Set up the backend (e.g., MLflow, S3) for artifact storage and tracking.

5. **Pipeline Deployment**:
-   - Create pipelines using decorators and define steps with functions.
-   - Example pipeline setup:
+   - Define pipelines using decorators and run them with:
     ```python
     from zenml.pipelines import pipeline

     @pipeline
     def my_pipeline():
-        # Define pipeline steps here
-        pass
+        ...  # pipeline steps here

+    my_pipeline.run()
     ```

-6. **Monitoring and Maintenance**:
-   - Implement logging and monitoring to track pipeline performance.
-   - Regularly update ZenML to benefit from new features and security patches.
+6. **Monitoring and Logging**: Integrate with monitoring tools (e.g., Prometheus) for performance tracking and logging.
+
+7. **Security**: Implement role-based access control (RBAC) and secure data handling practices.

-This summary encapsulates the essential elements of infrastructure and deployment in ZenML, ensuring clarity and focus on critical technical details.
+This summary encapsulates the essential aspects of infrastructure and deployment in ZenML, ensuring that critical information is retained for effective understanding and application.

==================================================

=== File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/terraform-stack-management.md ===

-### Summary: Registering Existing Infrastructure with ZenML - A Guide for Terraform Users
+# Summary: Registering Existing Infrastructure with ZenML for Terraform Users

-This guide helps advanced users integrate ZenML with existing Terraform setups, focusing on managing custom Terraform code using the ZenML provider.
+## Overview
+This guide details how to integrate ZenML with existing Terraform-managed infrastructure, specifically for advanced users managing custom Terraform code. It emphasizes a two-phase approach: Infrastructure Deployment and ZenML Registration.

-#### Two-Phase Approach
+## Two-Phase Approach
1. **Infrastructure Deployment**: Create cloud resources (handled by platform teams).
2. **ZenML Registration**: Register these resources as ZenML stack components.
-#### Phase 1: Infrastructure Deployment
-You may already have existing Terraform configurations, such as:
+## Phase 1: Infrastructure Deployment
+You may already have existing Terraform configurations for your infrastructure, such as:

```hcl
resource "google_storage_bucket" "ml_artifacts" {
@@ -15712,9 +15793,9 @@ resource "google_artifact_registry_repository" "ml_containers" {
}
```

-#### Phase 2: ZenML Registration
+## Phase 2: ZenML Registration

-**Setup the ZenML Provider**:
+### Setup the ZenML Provider
Configure the ZenML provider to connect with your ZenML server:

```hcl
terraform {
@@ -15727,191 +15808,92 @@ terraform {
}

provider "zenml" {
-  # Configuration options from environment variables:
-  # ZENML_SERVER_URL
-  # ZENML_API_KEY
-}
-```
-
-**Generate API Key**:
-```bash
-zenml service-account create <SERVICE_ACCOUNT_NAME>
-```
-
-**Create Service Connectors**:
-Service connectors manage authentication:
-
-```hcl
-resource "zenml_service_connector" "gcp_connector" {
-  name        = "gcp-${var.environment}-connector"
-  type        = "gcp"
-  auth_method = "service-account"
-
-  configuration = {
-    project_id           = var.project_id
-    service_account_json = file("service-account.json")
-  }
-}
-
-resource "zenml_stack_component" "artifact_store" {
-  name   = "existing-artifact-store"
-  type   = "artifact_store"
-  flavor = "gcp"
-
-  configuration = {
-    path = "gs://${google_storage_bucket.ml_artifacts.name}"
-  }
-
-  connector_id = zenml_service_connector.gcp_connector.id
-}
-```
-
-**Register Stack Components**:
-Register various component types:
-
-```hcl
-locals {
-  component_configs = {
-    artifact_store     = { type = "artifact_store", flavor = "gcp", configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } }
-    container_registry = { type = "container_registry", flavor = "gcp", configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } }
-    orchestrator       = { type = "orchestrator", flavor = "vertex", configuration = { project = var.project_id, region = var.region } }
-  }
-}
-
-resource "zenml_stack_component" "components" {
-  for_each = local.component_configs
-
-  name          = "existing-${each.key}"
-  type          = each.value.type
-  flavor        = each.value.flavor
-  configuration = each.value.configuration
-  connector_id  = zenml_service_connector.gcp_connector.id
-}
-```
-
-**Assemble the Stack**:
-Combine components into a stack:
-
-```hcl
-resource "zenml_stack" "ml_stack" {
-  name = "${var.environment}-ml-stack"
-
-  components = {
-    for k, v in zenml_stack_component.components : k => v.id
-  }
+  # Configuration options loaded from environment variables
}
```

-### Practical Example: Registering Existing GCP Infrastructure
-
-**Prerequisites**:
-- GCS bucket for artifacts
-- Artifact Registry repository
-- Service account for ML operations
-- Vertex AI enabled for orchestration
-
-**Variables Configuration**:
-```hcl
-variable "zenml_server_url" { type = string }
-variable "zenml_api_key" {
-  type      = string
-  sensitive = true
-}
-variable "project_id" { type = string }
-variable "region" {
-  type    = string
-  default = "us-central1"
-}
-variable "environment" { type = string }
-variable "gcp_service_account_key" {
-  type      = string
-  sensitive = true
-}
-```
-
-**Main Configuration**:
-```hcl
-terraform {
-  required_providers {
-    zenml  = { source = "zenml-io/zenml" }
-    google = { source = "hashicorp/google" }
-  }
-}
-
-provider "zenml" {
-  server_url = var.zenml_server_url
-  api_key    = var.zenml_api_key
-}
-
-provider "google" {
-  project = var.project_id
-  region  = var.region
-}
+Generate an API key with:
-resource "google_storage_bucket" "artifacts" {
-  name     = "${var.project_id}-zenml-artifacts-${var.environment}"
-  location = var.region
-}
+```bash
+zenml service-account create <SERVICE_ACCOUNT_NAME>
+```
-resource "google_artifact_registry_repository" "containers" {
-  location      = var.region
-  repository_id = "zenml-containers-${var.environment}"
-  format        = "DOCKER"
-}

+### Create Service Connectors
+Service connectors manage authentication for ZenML components:
-resource "zenml_service_connector" "gcp" {
-  name = "gcp-${var.environment}"
+```hcl
+resource "zenml_service_connector" "gcp_connector" {
+  name        = "gcp-${var.environment}-connector"
  type = "gcp"
-  auth_method = "service-account"
+  auth_method = "service-account"
+
+  configuration = {
    project_id = var.project_id
-    region = var.region
-    service_account_json = var.gcp_service_account_key
+    service_account_json = file("service-account.json")
  }
}

resource "zenml_stack_component" "artifact_store" {
-  name = "gcs-${var.environment}"
+  name = "existing-artifact-store"
  type = "artifact_store"
  flavor = "gcp"
-  configuration = { path = "gs://${google_storage_bucket.artifacts.name}/artifacts" }
-  connector_id = zenml_service_connector.gcp.id
-}
-
-resource "zenml_stack" "gcp_stack" {
-  name = "gcp-${var.environment}"
-  components = {
-    artifact_store     = zenml_stack_component.artifact_store.id
-    container_registry = zenml_stack_component.container_registry.id
-    orchestrator       = zenml_stack_component.orchestrator.id
+
+  configuration = {
+    path = "gs://${google_storage_bucket.ml_artifacts.name}"
  }
+
+  connector_id = zenml_service_connector.gcp_connector.id
}
```

-**Outputs Configuration**:
+### Register Stack Components
+Register components using a generic pattern:
+
```hcl
-output "stack_id" {
-  value = zenml_stack.gcp_stack.id
+locals {
+  component_configs = {
+    artifact_store     = { type = "artifact_store", flavor = "gcp", configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } }
+    container_registry = { type = "container_registry", flavor = "gcp", configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } }
+    orchestrator       = { type = "orchestrator", flavor = "vertex", configuration = { project = var.project_id, region = var.region } }
+  }
}

-output "stack_name" {
-  value = zenml_stack.gcp_stack.name
+resource "zenml_stack_component" "components" {
+  for_each = local.component_configs
+
+  name          = "existing-${each.key}"
+  type          = each.value.type
+  flavor        = each.value.flavor
+  configuration = each.value.configuration
+  connector_id  = zenml_service_connector.gcp_connector.id
}
+```

-output "artifact_store_path" {
-  value = "${google_storage_bucket.artifacts.name}/artifacts"
-}
+### Assemble the Stack
+Combine components into a ZenML stack:

-output "container_registry_uri" {
-  value = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.containers.repository_id}"
+```hcl
+resource "zenml_stack" "ml_stack" {
+  name = "${var.environment}-ml-stack"
+
+  components = {
+    for k, v in zenml_stack_component.components : k => v.id
+  }
}
```

-**terraform.tfvars Configuration**:
-```hcl
-zenml_server_url = "https://your-zenml-server.com"
-project_id       = "your-gcp-project-id"
-region           = "us-central1"
-environment      = "dev"
-```

-**Sensitive Variables**:
-Store sensitive variables in environment variables:
+## Practical Walkthrough: Registering Existing GCP Infrastructure

+### Prerequisites
+- GCS bucket for artifacts
+- Artifact Registry repository
+- Service account for ML operations
+- Vertex AI enabled for orchestration

+### Configuration Steps
+1. **Variables Configuration**: Define variables in `variables.tf` (a sketch follows this list).
+2. **Main Configuration**: Set up providers and resources in `main.tf`.
+3. **Outputs Configuration**: Specify outputs in `outputs.tf`.
+4. **terraform.tfvars Configuration**: Create a `terraform.tfvars` file for sensitive variables.
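+
+As a concrete starting point for step 1, the variables from the earlier version of this walkthrough (shown as removed lines above) still serve as a `variables.tf` sketch:
+```hcl
+# Hedged sketch of variables.tf, reconstructed from the removed example above.
+variable "zenml_server_url" { type = string }
+variable "project_id"       { type = string }
+variable "environment"      { type = string }
+
+variable "region" {
+  type    = string
+  default = "us-central1"
+}
+
+variable "zenml_api_key" {
+  type      = string
+  sensitive = true
+}
+
+variable "gcp_service_account_key" {
+  type      = string
+  sensitive = true
+}
+```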
### Usage Instructions
1. Initialize Terraform:
   ```bash
@@ -15926,27 +15908,27 @@ export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json)
   ```bash
   terraform plan
   ```
-4. Apply the configuration:
+4. Apply configuration:
   ```bash
   terraform apply
   ```
-5. Set the newly created stack as active:
+5. Set the stack as active:
   ```bash
   zenml stack set $(terraform output -raw stack_name)
   ```
-6. Verify the configuration:
+6. Verify configuration:
   ```bash
   zenml stack describe
   ```

-### Best Practices
+## Best Practices
- Use appropriate IAM roles and permissions.
- Follow security practices for handling credentials.
- Consider using Terraform workspaces for multiple environments.
- Regularly back up Terraform state files.
-- Version control Terraform configurations (excluding sensitive files).
+- Version control Terraform configurations, excluding sensitive files.

-For more information on the ZenML Terraform provider, visit the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest).
+For more information, refer to the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest).

==================================================

=== File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/best-practices.md ===

# Summary: Best Practices for Using IaC with ZenML

## Overview
-This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform. It addresses challenges such as supporting multiple teams, maintaining security, and enabling rapid iteration.
+This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform, focusing on component-based architecture, environment management, resource isolation, and advanced stack management.
+
+## Key Challenges
+- Supporting multiple ML teams with varying requirements.
+- Operating across different environments (dev, staging, prod).
+- Ensuring security and compliance.
+- Facilitating rapid iteration without infrastructure bottlenecks.

## ZenML Approach
-ZenML uses stack components as abstractions over infrastructure resources, facilitating a component-based architecture for reusability and consistency.
+ZenML utilizes stack components as abstractions over infrastructure resources, allowing for consistency and reusability.

### Part 1: Stack Component Architecture
-- **Problem**: Different teams require varied infrastructure configurations.
-- **Solution**: Create reusable modules that correspond to ZenML stack components.
+**Problem:** Different teams require varied infrastructure configurations.
+
+**Solution:** Implement a component-based architecture by creating reusable modules.
Example Terraform code for base infrastructure:

-**Base Infrastructure Example**:
```hcl
+# modules/zenml_stack_base/main.tf
+terraform {
+  required_providers {
+    zenml  = { source = "zenml-io/zenml" }
+    google = { source = "hashicorp/google" }
+  }
+}

-resource "random_id" "suffix" {
-  byte_length = 6
-}
+resource "random_id" "suffix" { byte_length = 6 }

module "base_infrastructure" {
  source      = "./modules/base_infra"
  environment = var.environment
@@ -15979,24 +15973,39 @@ module "base_infrastructure" {
}

resource "zenml_service_connector" "base_connector" {
-  name = "${var.environment}-base-connector"
-  type = "gcp"
+  name        = "${var.environment}-base-connector"
+  type        = "gcp"
  auth_method = "service-account"
  configuration = {
    project_id = var.project_id
-    region = var.region
+    region     = var.region
    service_account_json = module.base_infrastructure.service_account_key
  }
}
+
+resource "zenml_stack_component" "artifact_store" {
+  name   = "${var.environment}-artifact-store"
+  type   = "artifact_store"
+  flavor = "gcp"
+  configuration = {
+    path = "gs://${module.base_infrastructure.artifact_store_bucket}/artifacts"
+  }
+  connector_id = zenml_service_connector.base_connector.id
+}
+
+resource "zenml_stack" "base_stack" {
+  name = "${var.environment}-base-stack"
+  components = {
+    artifact_store = zenml_stack_component.artifact_store.id
+  }
+}
```

-Teams can extend this base stack with specific components.
+Teams can extend the base stack with specific components, as sketched below.
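+
+For instance, a team could layer its own orchestrator on top of the shared base stack. This is a hedged sketch following the component patterns used elsewhere in this guide; the Vertex flavor and naming are illustrative:
+```hcl
+# A team-specific orchestrator that reuses the shared base connector.
+resource "zenml_stack_component" "team_orchestrator" {
+  name   = "${var.environment}-team-vertex-orchestrator"
+  type   = "orchestrator"
+  flavor = "vertex"
+  configuration = {
+    project = var.project_id
+    region  = var.region
+  }
+  connector_id = zenml_service_connector.base_connector.id
+}
+```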
### Part 2: Environment Management and Authentication
-- **Problem**: Different environments require tailored authentication and configurations.
-- **Solution**: Use a flexible service connector setup that adapts to each environment.
+**Problem:** Different environments require tailored authentication and configurations.
+
+**Solution:** Use environment-specific configurations with adaptable service connectors:

-**Environment-Specific Connector Example**:
```hcl
locals {
  env_config = {
@@ -16006,24 +16015,21 @@ locals {
}

resource "zenml_service_connector" "env_connector" {
-  name = "${var.environment}-connector"
-  type = "gcp"
+  name        = "${var.environment}-connector"
+  type        = "gcp"
  auth_method = local.env_config[var.environment].auth_method

  dynamic "configuration" {
    for_each = try(local.env_config[var.environment].auth_configuration, {})
    content {
-      key = configuration.key
-      value = configuration.value
+      key   = configuration.key
+      value = configuration.value
    }
  }
}
```

### Part 3: Resource Sharing and Isolation
-- **Problem**: Ensuring data isolation and compliance across ML projects.
-- **Solution**: Implement resource sharing with project isolation.
+**Problem:** Need for strict isolation of resources across ML projects.
+
+**Solution:** Implement resource scoping with project isolation:

-**Project Isolation Example**:
```hcl
locals {
  project_paths = {
    fraud_detection = "projects/fraud_detection/${var.environment}"
    recommendation  = "projects/recommendation/${var.environment}"
  }
@@ -16031,44 +16037,58 @@ locals {
}

resource "zenml_stack_component" "project_artifact_stores" {
  for_each = local.project_paths

-  name = "${each.key}-artifact-store"
-  type = "artifact_store"
+  name          = "${each.key}-artifact-store"
+  type          = "artifact_store"
  configuration = { path = "gs://${var.shared_bucket}/${each.value}" }
+  connector_id  = zenml_service_connector.env_connector.id
+}
+
+resource "zenml_stack" "project_stacks" {
+  for_each = local.project_paths
+  name     = "${each.key}-stack"
+  components = {
+    artifact_store = zenml_stack_component.project_artifact_stores[each.key].id
+  }
}
```

### Part 4: Advanced Stack Management Practices
1. **Stack Component Versioning:**
   ```hcl
-   locals { stack_version = "1.2.0" }
-   resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}" }
+   locals {
+     stack_version = "1.2.0"
+   }
+   resource "zenml_stack" "versioned_stack" {
+     name = "stack-v${local.stack_version}"
+   }
   ```

2. **Service Connector Management:**
   ```hcl
   resource "zenml_service_connector" "env_connector" {
-    name = "${var.environment}-${var.purpose}-connector"
+    name        = "${var.environment}-${var.purpose}-connector"
    auth_method = var.environment == "prod" ? "workload-identity" : "service-account"
   }
   ```

3. **Component Configuration Management:**
   ```hcl
   locals {
     base_configs = { orchestrator = { location = var.region, project = var.project_id } }
-    env_configs = { dev = { orchestrator = { machine_type = "n1-standard-4" } }, prod = { orchestrator = { machine_type = "n1-standard-8" } } }
+    env_configs  = { dev = { orchestrator = { machine_type = "n1-standard-4" } }, prod = { orchestrator = { machine_type = "n1-standard-8" } } }
   }
+
+   resource "zenml_stack_component" "configured_component" {
+     name          = "${var.environment}-${var.component_type}"
+     configuration = merge(local.base_configs[var.component_type], try(local.env_configs[var.environment][var.component_type], {}))
+   }
   ```

4. **Stack Organization and Dependencies:**
   ```hcl
   module "ml_stack" {
     source = "./modules/ml_stack"
-    depends_on = [module.base_infrastructure, module.security]
+    depends_on = [module.base_infrastructure]
   }
   ```

5. **State Management:**
   ```hcl
   terraform {
     backend "gcs" { prefix = "terraform/state" }
@@ -16076,7 +16096,7 @@ resource "zenml_stack_component" "project_artifact_stores" {
   }
   ```

## Conclusion
-Utilizing ZenML and Terraform allows for the creation of a flexible, maintainable, and secure ML infrastructure. The ZenML provider streamlines the process while adhering to infrastructure-as-code best practices. Key recommendations include maintaining DRY configurations, consistent naming, and proper state management.
+Utilizing ZenML and Terraform for ML infrastructure enables the creation of a flexible, maintainable, and secure environment. Following these best practices ensures a clean infrastructure codebase and aligns ML operations with infrastructure management.

==================================================

=== File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/README.md ===

### Integrate with Infrastructure as Code

-**Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section covers how to integrate ZenML with popular IaC tools, specifically **Terraform**.
+**Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section details how to integrate ZenML with popular IaC tools like **Terraform**.

-For more information on IaC, visit [AWS: What is IaC](https://aws.amazon.com/what-is/iac).
+For more information on IaC, visit [AWS IaC Overview](https://aws.amazon.com/what-is/iac).

![ZenML stack on Terraform Registry](../../../.gitbook/assets/terraform_providers_screenshot.png)

==================================================

=== File: docs/book/how-to/infrastructure-deployment/auth-management/azure-service-connector.md ===

-### Summary of Azure Service Connector Documentation
+### Summary of Azure Service Connector Documentation for ZenML

-#### Overview
-The ZenML Azure Service Connector enables authentication and access to Azure resources like Blob storage, AKS clusters, and ACR registries. It supports automatic configuration and credential detection via the Azure CLI.
+The **Azure Service Connector** in ZenML enables authentication and access to various Azure resources, including Blob storage, AKS Kubernetes clusters, and ACR container registries. It supports automatic credential configuration via the Azure CLI and specialized authentication for different Azure services.

-#### Key Features
+#### Key Features:
- **Resource Types**:
  - **Generic Azure Resource**: Connects to any Azure service using generic credentials.
-  - **Azure Blob Storage**: Requires IAM permissions for reading/writing blobs and listing accounts/containers.
-    - URI formats: `{az|abfs}://{container-name}` or `{container-name}`.
-    - Only supports service principal authentication.
-  - **AKS Kubernetes Cluster**: Requires permissions to list AKS clusters.
-    - URI formats: `[{resource-group}/]{cluster-name}` or `{cluster-name}`.
-  - **ACR Container Registry**: Requires permissions to pull/push images and list registries.
-    - URI formats: `[https://]{registry-name}.azurecr.io` or `{registry-name}`.
+  - **Azure Blob Storage**: Requires specific IAM permissions (e.g., `Storage Blob Data Contributor`). Supports URIs in formats like `az://{container-name}`.
+  - **AKS Kubernetes Cluster**: Requires permissions to list and fetch AKS credentials. Identified by resource group and cluster name.
+  - **ACR Container Registry**: Requires permissions to pull/push images. Identified by registry URI or name.

-#### Authentication Methods
-1. **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Disabled by default due to security risks.
-2. **Service Principal**: Requires a client ID and secret for authentication. Recommended for production use.
+#### Authentication Methods:
+1. **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Requires enabling via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`.
+2. **Service Principal**: Involves client ID and secret for secure access. Recommended for production use.
3. **Access Token**: Temporary tokens that require regular updates. Not suitable for Azure Blob storage.
-#### Configuration Commands
+#### Configuration Commands:
- **List Connector Types**:
  ```shell
  zenml service-connector list-types --type azure
  ```
-- **Register Service Connector** (Service Principal):
+
+- **Register Service Connector**:
+  - Implicit:
+    ```shell
+    zenml service-connector register azure-implicit --type azure --auth-method implicit --auto-configure
+    ```
+  - Service Principal:
    ```shell
    zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>
    ```
+
- **Describe Service Connector**:
  ```shell
  zenml service-connector describe <CONNECTOR_NAME>
  ```

-#### Local Client Provisioning
-- Configure local Docker and Kubernetes CLIs using credentials from the Azure Service Connector.
-- Example for Kubernetes:
+#### Local Client Provisioning:
+- The local Azure CLI, Kubernetes CLI, and Docker CLI can be configured with credentials from the Azure Service Connector.
+- Example for Kubernetes CLI:
  ```shell
-  zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id=<CLUSTER_NAME>
+  zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id=<CLUSTER_NAME>
  ```

-#### Stack Components Usage
-- Connect Azure Blob Storage, AKS, and ACR to ZenML Stack Components without manual credential management.
-- Example of connecting an Azure Blob Storage Artifact Store:
+#### Stack Components Usage:
+- Connect Azure Blob Storage, AKS, and ACR to ZenML Stack Components using the Azure Service Connector.
+- Example of registering and connecting components:
  ```shell
  zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore
+  zenml orchestrator register aks-demo-cluster --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads
+  zenml container-registry register acr-demo-registry --flavor azure --uri=<REGISTRY_URI>
  ```

-#### End-to-End Example
-1. **Install Azure Integration**:
-   ```shell
-   zenml integration install -y azure
-   ```
-2. **Register Service Connector** with service principal.
-3. **Connect Azure Blob Storage** and **Kubernetes Orchestrator** to the registered connector.
-4. **Run a Simple Pipeline** to validate the setup.
+#### End-to-End Example:
+1. Set up an Azure service principal with necessary permissions.
+2. Register a multi-type Azure Service Connector.
+3. Connect Azure Blob Storage, AKS, and ACR to ZenML Stack Components.
+4. Run a simple pipeline to validate the setup.

-This documentation provides a comprehensive guide to configuring and using the Azure Service Connector with ZenML, detailing resource types, authentication methods, and practical commands for setup and usage.
+This documentation provides a comprehensive guide to configuring and using the Azure Service Connector with ZenML, ensuring secure and efficient access to Azure resources. For the latest updates, refer to the [ZenML documentation](https://docs.zenml.io).

==================================================

=== File: docs/book/how-to/infrastructure-deployment/auth-management/service-connectors-guide.md ===

-# Service Connectors Guide Summary
+# ZenML Service Connectors Guide Summary

-This documentation provides a comprehensive guide on managing Service Connectors to connect ZenML with external resources. Key sections include:
+This documentation provides a comprehensive guide to managing Service Connectors in ZenML, enabling connections to external resources. Key sections include terminology, types of Service Connectors, registration, and connecting Stack Components.
-## Overview
-- **Terminology**: Familiarize yourself with terms related to Service Connectors, including Service Connector Types, Resource Types, and Resource Names.
-- **Service Connector Types**: Different implementations for connecting to various resources (e.g., AWS, GCP, Azure). Use commands like `zenml service-connector list-types` to explore available types.
+## Key Sections

-### Example Commands
-```sh
-zenml service-connector list-types
-zenml service-connector describe-type aws
-```
+1. **Terminology**:
+   - **Service Connector Types**: Define specific implementations for connecting to resources, detailing capabilities and authentication methods.
+   - **Resource Types**: Classify resources based on access protocols or vendors (e.g., `kubernetes-cluster`, `docker-registry`).
+   - **Resource Names**: Unique identifiers for resource instances (e.g., S3 bucket names).

-## Resource Types
-- Organizes resources into classes based on access protocols or vendors (e.g., `kubernetes-cluster`, `docker-registry`).
-- Use `zenml service-connector list-types --resource-type kubernetes-cluster` to find applicable Service Connector Types.
+2. **Service Connector Types**:
+   - Various built-in types (e.g., AWS, GCP, Kubernetes) support multiple authentication methods and resource types.
+   - Commands to explore types:
+     ```sh
+     zenml service-connector list-types
+     zenml service-connector describe-type <TYPE>
+     ```

-## Service Connectors
-- Configure ZenML to authenticate and connect to external resources, storing necessary credentials.
-- Can be **multi-type** (access multiple resource types) or **single-instance** (access one resource).
+3. **Registering Service Connectors**:
+   - Connectors can be **multi-type** (access multiple resource types) or **single-instance** (access one resource).
+   - Example command to register a multi-type AWS Service Connector:
+     ```sh
+     zenml service-connector register aws-multi-type --type aws --auto-configure
+     ```

-### Example Command for Multi-Type Connector
-```sh
-zenml service-connector register aws-multi-type --type aws --auto-configure
-```
+4. **Connecting Stack Components**:
+   - Use Service Connectors to link Stack Components to external resources.
+   - Example command to connect an artifact store:
+     ```sh
+     zenml artifact-store connect <COMPONENT_NAME> --connector <CONNECTOR_ID>
+     ```

-## Security Practices
-- Best practices for authentication methods are outlined, emphasizing the importance of using temporary credentials for production environments.
+5. **Verification**:
+   - Verify the configuration and credentials of Service Connectors to ensure access to resources.
+   - Example command for verification:
+     ```sh
+     zenml service-connector verify <CONNECTOR_NAME>
+     ```

-## Connecting Stack Components
-- Connect Stack Components to external resources using registered Service Connectors.
-- Use interactive CLI mode for ease of connection.
+6. **Local Client Configuration**:
+   - Configure local CLI tools (e.g., `kubectl`, Docker) using credentials from Service Connectors.
+   - Example command to configure `kubectl`:
+     ```sh
+     zenml service-connector login <CONNECTOR_NAME> --resource-type kubernetes-cluster --resource-id <CLUSTER_NAME>
+     ```

-### Example Command
-```sh
-zenml artifact-store connect <COMPONENT_NAME> -i
-```
+7. **Resource Discovery**:
+   - Discover accessible resources via Service Connectors using:
+     ```sh
+     zenml service-connector list-resources
+     ```

-## Verification and Discovery
-- Verify Service Connector configurations and credentials using `zenml service-connector verify <CONNECTOR_NAME>`.
-- Discover available resources with `zenml service-connector list-resources`.
-
-## End-to-End Examples
-- Detailed examples are provided for AWS, GCP, and Azure Service Connectors to illustrate the complete process from registration to pipeline execution.
-
-### Example Command for Resource Discovery
-```sh
-zenml service-connector list-resources --resource-type s3-bucket
-```
+8. **End-to-End Examples**:
+   - Detailed examples for AWS, GCP, and Azure Service Connectors are available for practical guidance.
-### Example Command -```sh -zenml artifact-store connect -i -``` +## Important Commands -## Verification and Discovery -- Verify Service Connector configurations and credentials using `zenml service-connector verify `. -- Discover available resources with `zenml service-connector list-resources`. +- List Service Connector Types: + ```sh + zenml service-connector list-types + ``` -## End-to-End Examples -- Detailed examples are provided for AWS, GCP, and Azure Service Connectors to illustrate the complete process from registration to pipeline execution. +- Register a Service Connector: + ```sh + zenml service-connector register --type --auto-configure + ``` -### Example Command for Resource Discovery -```sh -zenml service-connector list-resources --resource-type s3-bucket -``` +- Verify a Service Connector: + ```sh + zenml service-connector verify + ``` -This summary encapsulates the essential information and commands needed to effectively manage Service Connectors in ZenML, ensuring users can connect to and utilize external resources efficiently. +- Connect Stack Components: + ```sh + zenml artifact-store connect --connector + ``` + +This guide is essential for efficiently managing connections between ZenML and external resources, ensuring secure and effective integration within machine learning workflows. For the latest documentation, refer to [ZenML Docs](https://docs.zenml.io). ================================================== @@ -16217,94 +16265,63 @@ This summary encapsulates the essential information and commands needed to effec ### Kubernetes Service Connector Overview -The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to generic clusters via pre-authenticated Kubernetes Python clients and local `kubectl` configuration. - -### Prerequisites +The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to generic clusters via pre-authenticated Kubernetes Python clients. It also facilitates local `kubectl` configuration. -- Install the connector: - - For only the Kubernetes Service Connector: +#### Prerequisites +- Install the Kubernetes Service Connector: + - For only the connector: ```shell pip install "zenml[connectors-kubernetes]" ``` - - For the entire Kubernetes ZenML integration: + - For the entire Kubernetes integration: ```shell zenml integration install kubernetes ``` -- Local `kubectl` configuration is not required for accessing Kubernetes clusters. +- Local `kubectl` configuration is not required for accessing clusters. -### Resource Types - -- Supports only `kubernetes-cluster` resource type, identified by a user-friendly name during registration. +#### Listing Service Connector Types +To list available service connector types: +```shell +zenml service-connector list-types --type kubernetes +``` -### Authentication Methods +#### Resource Types +- Supports generic Kubernetes clusters identified by the `kubernetes-cluster` resource type. +#### Authentication Methods 1. Username and password (not recommended for production). -2. Authentication token (with or without client certificates). For local K3D clusters, an empty token can be used. - -**Warning:** The Service Connector does not generate short-lived credentials; configured credentials are directly used for authentication. Use API tokens with client certificates when possible. +2. Authentication token (can be empty for local K3D clusters). 
-### Auto-configuration - -Fetch credentials from the local `kubectl` configuration during registration. Example command to register a service connector with auto-configuration: +**Warning**: The Service Connector does not generate short-lived credentials; use API tokens with client certificates when possible. +#### Auto-configuration +Fetch credentials from the local `kubectl` during registration: ```sh zenml service-connector register kube-auto --type kubernetes --auto-configure ``` -**Example Output:** -``` -Successfully registered service connector `kube-auto` with access to: -┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ -┃ RESOURCE TYPE │ RESOURCE NAMES ┃ -┠───────────────────────┼────────────────┨ -┃ 🌀 kubernetes-cluster │ 35.185.95.223 ┃ -┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ -``` - -### Describe Service Connector - -To view details of a service connector: - +#### Describing a Service Connector +To describe a registered service connector: ```sh zenml service-connector describe kube-auto ``` -**Example Output:** -``` -Service connector 'kube-auto' of type 'kubernetes' with ID '4315e8eb...' is owned by user 'default' and is 'private'. -┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ -┃ PROPERTY │ VALUE ┃ -┠──────────────────┼──────────────────────────────────────┨ -┃ ID │ 4315e8eb... ┃ -┃ NAME │ kube-auto ┃ -┃ AUTH METHOD │ token ┃ -┃ RESOURCE NAME │ 35.175.95.223 ┃ -┃ CREATED_AT │ 2023-05-16 21:45:33.224740 ┃ -┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ -``` - -### Local Client Provisioning - -To configure the local Kubernetes client with credentials: - +#### Local Client Provisioning +Configure the local Kubernetes client with: ```sh zenml service-connector login kube-auto ``` -**Example Output:** -``` -Updated local kubeconfig with the cluster details. Current context set to '35.185.95.223'. -``` - -### Stack Components Usage +#### Stack Components Usage +The Kubernetes Service Connector can be utilized in Orchestrator and Model Deployer stack components, simplifying the management of Kubernetes workloads without explicit `kubectl` configurations. -The Kubernetes Service Connector can be utilized in Orchestrator and Model Deployer stack components, managing Kubernetes workloads without needing explicit `kubectl` configurations in the environment. +**Note**: Credentials discovered through the Service Connector may have limited lifetimes, particularly with third-party authentication providers. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/hyperai-service-connector.md === -### HyperAI Service Connector Overview +### HyperAI Service Connector Documentation Summary The ZenML HyperAI Service Connector enables authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to linked Stack Components. 
@@ -16313,93 +16330,97 @@ The ZenML HyperAI Service Connector enables authentication with HyperAI instance $ zenml service-connector list-types --type hyperai ``` -#### Supported Resource Types -- **HyperAI instances** +#### Connector Overview +| Name | Type | Resource Types | Auth Methods | Local | Remote | +|--------------------------|-----------|----------------------|-------------------|-------|--------| +| HyperAI Service Connector | 🤖 hyperai | 🤖 hyperai-instance | rsa-key | ✅ | ✅ | +| | | | dsa-key | | | +| | | | ecdsa-key | | | +| | | | ed25519-key | | | + +#### Prerequisites +To use the HyperAI Service Connector, install the integration: +```shell +zenml integration install hyperai +``` + +#### Resource Types +The connector supports HyperAI instances. #### Authentication Methods -ZenML establishes an SSH connection to HyperAI instances, supporting the following authentication methods: +SSH connections to HyperAI instances are established using: 1. RSA key 2. DSA key 3. ECDSA key 4. ED25519 key -**Important Note:** SSH private keys are distributed to all clients running pipelines, granting unrestricted access to HyperAI instances. +**Note:** SSH private keys are long-lived credentials granting unrestricted access to HyperAI instances. They will be distributed to all clients running pipelines. #### Configuration Requirements -- At least one `hostname` and a `username` are required for configuration. -- An optional `ssh_passphrase` can be provided. +When configuring the Service Connector, provide: +- At least one `hostname` +- `username` for login +- Optionally, an `ssh_passphrase` **Usage Options:** -1. Create separate service connectors for each HyperAI instance with different SSH keys. -2. Use a single SSH key for multiple instances, selecting the instance during HyperAI orchestrator component creation. +1. Create one connector per HyperAI instance with different SSH keys. +2. Use a single SSH key for multiple instances, selecting the instance during orchestrator component creation. -#### Auto-Configuration -This Service Connector does not support auto-discovery of authentication credentials. Feedback for this feature can be provided via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). +#### Auto-configuration +The Service Connector does not support auto-discovery of authentication credentials from HyperAI instances. Feedback on this feature is welcome via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). -#### Stack Component Usage +#### Stack Components Usage The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/docker-service-connector.md === -### Docker Service Connector Overview +### Summary of Docker Service Connector Documentation -The ZenML Docker Service Connector enables authentication with Docker or OCI container registries and manages Docker clients. It provides pre-authenticated `python-docker` clients for linked Stack Components. +The ZenML Docker Service Connector enables authentication with Docker or OCI container registries and manages Docker clients for these registries. It provides pre-authenticated `python-docker` clients to linked Stack Components. 
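A minimal sketch of what such a `python-docker` client can do, assuming the `docker` package; the credentials, registry, and repository below are placeholders, and a linked component would receive the client already logged in:

```python
# Illustrative sketch only: stack components receive an equivalent,
# pre-authenticated client from the connector.
import docker

client = docker.from_env()
client.login(
    username="my-user",           # placeholder credentials
    password="dckr_pat_example",  # an access token, as recommended for registries
    registry="https://index.docker.io/v1/",
)
client.images.push("my-user/zenml-pipeline", tag="latest")  # placeholder repository
```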
-#### Command to List Connector Types -```shell -zenml service-connector list-types --type docker -``` +#### Key Commands +- **List Connector Types:** + ```shell + zenml service-connector list-types --type docker + ``` + Output indicates the availability of the Docker Service Connector with authentication via password. + +#### Prerequisites +- No additional Python packages are needed; all are included in the ZenML package. +- Docker must be installed in environments where container images are built and pushed. -#### Supported Resource Types -- **Resource Type**: `docker-registry` -- **Registry Formats**: +#### Resource Types +- Supports Docker/OCI container registries identified by the `docker-registry` resource type. +- Formats for resource names: - DockerHub: `docker.io` or `https://index.docker.io/v1/` - OCI registry: `https://host:port/` #### Authentication Methods -Authentication is performed using a username and password or access token. API tokens are recommended over passwords. - -#### Registering a Docker Service Connector -```sh -zenml service-connector register dockerhub --type docker -in -``` - -**Example Command Output**: -```text -Please enter a name for the service connector [dockerhub]: -... -Successfully registered service connector `dockerhub` with access to: -┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ -┃ RESOURCE TYPE │ RESOURCE NAMES ┃ -┠────────────────────┼────────────────┨ -┃ 🐳 docker-registry │ docker.io ┃ -┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ -``` - -**Note**: Credentials configured in the Service Connector are distributed directly to clients and not short-lived. +- Authentication is via username and password or access token; using API tokens is recommended. +- **Register DockerHub Connector:** + ```sh + zenml service-connector register dockerhub --type docker -in + ``` + Prompts for service connector name, description, type, and authentication details. -#### Auto-configuration -The Service Connector does not support auto-discovery of authentication credentials from local Docker clients. Feedback can be provided via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). +#### Important Notes +- Credentials configured in the Service Connector are distributed directly to clients; short-lived credentials are not supported. +- Auto-discovery of credentials from local Docker clients is not available. #### Local Client Provisioning -To configure the local Docker client with credentials: -```sh -zenml service-connector login dockerhub -``` - -**Example Command Output**: -```text -WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json. -... -The 'dockerhub' Docker Service Connector was used to configure the local client. -``` +- Configure local Docker client with: + ```sh + zenml service-connector login dockerhub + ``` + Warning about unencrypted password storage will be displayed. #### Stack Components Usage -The Docker Service Connector allows all Container Registry stack components to authenticate with remote Docker/OCI registries, enabling image building and publishing without explicit Docker credentials in the environment. +- The connector can be used by all Container Registry stack components to authenticate to remote registries, facilitating the building and publishing of container images without explicit Docker credentials in the environment. -**Warning**: ZenML does not currently support automatic Docker credential configuration in container runtimes like Kubernetes. 
This feature will be added in a future release. +#### Future Enhancements +- Automatic configuration of Docker credentials in container runtimes (e.g., Kubernetes) is planned for future releases. ================================================== @@ -16407,311 +16428,318 @@ The Docker Service Connector allows all Container Registry stack components to a ### Summary of GCP Service Connector Documentation -The **GCP Service Connector** in ZenML enables seamless authentication and access to various Google Cloud Platform (GCP) resources, including GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods, such as user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication, prioritizing security by issuing short-lived OAuth 2.0 tokens by default. +The **GCP Service Connector** in ZenML enables connection to various GCP resources like GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods, including GCP user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication, with a focus on issuing short-lived OAuth 2.0 tokens for enhanced security. #### Key Features: -- **Resource Types**: - - **Generic GCP Resource**: Connects to any GCP service using OAuth 2.0 tokens. - - **GCS Bucket**: Requires specific permissions (e.g., `storage.buckets.list`, `storage.objects.create`). - - **GKE Cluster**: Requires permissions like `container.clusters.list`. - - **GAR/GCR Registry**: Supports both Artifact Registry and legacy GCR with specific permissions. - - **Authentication Methods**: - - **Implicit Authentication**: Automatically discovers credentials but is disabled by default due to security risks. - - **GCP User Account**: Generates temporary tokens from user credentials. - - **GCP Service Account**: Similar to user accounts but uses service account keys. - - **Service Account Impersonation**: Generates temporary tokens by impersonating another service account. - - **External Account (Workload Identity)**: Uses credentials from AWS IAM or Azure AD. - - **OAuth 2.0 Token**: Requires manual token management. + - **Implicit Authentication**: Uses Application Default Credentials (ADC) without explicit configuration. Requires enabling via environment variables. + - **User Account**: Uses long-lived credentials, generating temporary OAuth 2.0 tokens by default. + - **Service Account**: Requires a service account key JSON, generating temporary tokens. + - **Service Account Impersonation**: Generates temporary STS credentials by impersonating another service account. + - **External Account**: Utilizes GCP workload identity federation for authentication using AWS IAM or Azure AD credentials. + - **OAuth 2.0 Token**: Requires manual token management, suitable for short-term access. + +#### Resource Types: +- **Generic GCP Resource**: For general GCP service access. +- **GCS Bucket**: Requires specific permissions for accessing GCS. +- **GKE Cluster**: Requires permissions to list and get cluster details. +- **GAR/GCR Registry**: Supports both Google Artifact Registry and legacy Google Container Registry. #### Prerequisites: -- Install the GCP integration via: +- Install the GCP Service Connector via: ```bash pip install "zenml[connectors-gcp]" ``` - or + or ```bash zenml integration install gcp ``` +- GCP CLI installation is recommended for auto-configuration. 
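To illustrate the short-lived token model, here is a minimal sketch, assuming the `google-cloud-storage` and `google-auth` packages; the token, project, and bucket are placeholders for values a connector-issued OAuth 2.0 token would supply:

```python
# Illustrative sketch only: ZenML normally injects such a token into
# stack components instead of it being handled manually.
from google.cloud import storage
from google.oauth2.credentials import Credentials

creds = Credentials(token="ya29.PLACEHOLDER-SHORT-LIVED-TOKEN")  # placeholder token
client = storage.Client(project="zenml-core", credentials=creds)

# List a few objects to confirm the token grants GCS access.
for blob in client.list_blobs("zenml-bucket-sl", max_results=5):
    print(blob.name)
```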
-#### Commands:
-- **List Service Connector Types**:
+#### Example Commands:
+- List available GCP Service Connector types:
  ```bash
  zenml service-connector list-types --type gcp
  ```
-
-- **Register Service Connector**:
+- Register a GCP Service Connector with auto-configuration:
  ```bash
-  zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method <AUTH_METHOD> --auto-configure
+  zenml service-connector register gcp-auto --type gcp --auto-configure
  ```
-
-- **Describe Service Connector**:
+- Verify a service connector:
  ```bash
-  zenml service-connector describe <CONNECTOR_NAME>
+  zenml service-connector verify gcp-user-account --resource-type kubernetes-cluster
  ```

-- **Verify Access to Resources**:
-  ```bash
-  zenml service-connector verify <CONNECTOR_NAME> --resource-type <RESOURCE_TYPE>
-  ```
+#### Local Client Provisioning:
+- The local `gcloud`, `kubectl`, and Docker CLI can be configured with credentials from the GCP Service Connector, allowing seamless access to GCP resources.

-- **Login for Local Client Configuration**:
-  ```bash
-  zenml service-connector login <CONNECTOR_NAME> --resource-type <RESOURCE_TYPE> --resource-id <RESOURCE_NAME>
-  ```
+#### Stack Components Use:
+- The GCP Service Connector can connect various ZenML stack components, including GCS Artifact Store, Kubernetes Orchestrator, and GCP Container Registry, facilitating a streamlined workflow without manual credential management.

-#### Example End-to-End Workflow:
-1. Configure local GCP CLI and install ZenML integration.
-2. Register a multi-type GCP Service Connector.
-3. Connect various Stack Components (e.g., GCS Artifact Store, GKE Orchestrator) to the registered connector.
-4. Run a simple pipeline to validate the setup.
+#### End-to-End Examples:
+- **GKE Kubernetes Orchestrator**: Connects to a GKE cluster, GCS Artifact Store, and GCR using a multi-type GCP Service Connector.
+- **VertexAI Orchestrator**: Uses individual service connectors for GCS, GCR, and Vertex AI resources.

-This documentation provides comprehensive guidance on integrating ZenML with GCP services, ensuring secure and efficient access to cloud resources.
+This documentation provides a comprehensive guide for configuring and utilizing the GCP Service Connector within ZenML, ensuring secure and efficient access to GCP resources.

==================================================

=== File: docs/book/how-to/infrastructure-deployment/auth-management/best-security-practices.md ===

-### Summary of Best Practices for Authentication Methods in Service Connectors
+# Summary of Best Practices for Authentication Methods in Service Connectors

-Service Connectors provide various authentication methods, particularly for cloud providers. While no unified authentication standard exists, identifiable patterns can guide the selection of authentication methods.
-
-#### Key Authentication Methods
+## Overview
+Service Connectors for cloud providers support various authentication methods. While no single standard exists, identifiable patterns can guide the selection of appropriate methods. Understanding these methods requires some knowledge of authentication and authorization.

-1. **Username and Password**
-  - Avoid using primary account passwords for authentication.
-  - Use alternative credentials like session tokens or API keys when possible.
-  - Passwords should not be shared or used for automated workloads.
+## Authentication Methods

-2. **Implicit Authentication**
-  - Provides immediate access to cloud resources without configuration.
-  - Carries security risks, as it may grant access to resources configured for the ZenML Server.
- - Disabled by default; can be enabled via environment variables or Helm chart settings. - - Utilizes locally stored credentials and environment variables. +### Username and Password +- **Avoid using primary account passwords** as credentials. Use alternatives like session tokens, API keys, or API tokens. +- Passwords are the least secure method and should not be shared or used for automated workloads. Cloud platforms typically require exchanging passwords for long-lived credentials. -3. **Long-lived Credentials (API Keys, Account Keys)** - - Preferred for production use, especially when sharing results. - - Cloud platforms do not use passwords directly; instead, they exchange them for long-lived credentials. - - Different cloud providers have their own terminology for these credentials (e.g., AWS Access Keys, GCP Service Account Credentials). +### Implicit Authentication +- Provides immediate access to cloud resources using locally stored credentials, configuration files, or environment variables. +- **Security Risk**: Can grant access to resources configured for the ZenML Server. Disabled by default; must be enabled via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. +- Works with cloud-specific metadata services (e.g., AWS EC2, GCP service accounts, Azure Managed Identity). -4. **Generating Temporary and Down-scoped Credentials** - - Temporary credentials are issued from long-lived credentials, enhancing security by limiting exposure. - - Down-scoped credentials restrict permissions to the minimum required for specific resources. +### Long-lived Credentials (API Keys, Account Keys) +- Preferred for production use, especially when sharing results. They are exchanged for temporary tokens or used with impersonation methods. +- Different cloud providers have various long-lived credential types (e.g., AWS Access Keys, GCP Service Account Credentials). +- **User Credentials**: Tied to human users; should not be shared. +- **Service Credentials**: Used for automated processes; better for sharing due to restricted permissions. -5. **Impersonating Accounts and Assuming Roles** - - Involves configuring Service Connectors with long-lived credentials tied to a primary account. - - Requires provisioning secondary accounts or roles with necessary permissions. - - The Service Connector exchanges long-lived credentials for short-lived tokens with restricted permissions. +### Generating Temporary and Down-scoped Credentials +- Temporary credentials are issued to clients, keeping long-lived credentials secure on the server. +- **Example**: AWS Service Connector can issue session tokens that expire after a set duration. -6. **Short-lived Credentials** - - Temporary credentials that expire, making them impractical for long-term use. - - Useful for granting temporary access without exposing long-lived credentials. +### Impersonating Accounts and Assuming Roles +- Requires setup of multiple accounts/roles for flexibility and control. +- Long-lived credentials are used to obtain short-lived tokens with specific permissions, enhancing security. -### Example Commands +### Short-lived Credentials +- Temporary credentials can be manually configured or generated during auto-configuration. +- Useful for granting temporary access without exposing long-lived credentials but can lead to Service Connector unusability upon expiration. 
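As a concrete illustration of temporary, down-scoped credentials, here is a minimal boto3 sketch of the pattern the AWS federation-token method relies on; the session name, duration, and policy are example values:

```python
# Illustrative sketch only: exchange the long-lived credentials configured on
# this machine for a one-hour session limited to listing a single bucket.
import json
import boto3

sts = boto3.client("sts")  # uses the long-lived credentials configured locally
response = sts.get_federation_token(
    Name="zenml-connector-client",  # example session name
    DurationSeconds=3600,           # example lifetime
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::zenfiles",  # example bucket from this guide
        }],
    }),
)
print(response["Credentials"]["Expiration"])  # temporary keys expire automatically
```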
-- **Registering a GCP Implicit Authentication Service Connector:** +## Example Commands +- **GCP Implicit Authentication**: ```sh zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core ``` -- **Registering an AWS Service Connector with Federation Token:** +- **AWS Long-lived Credentials**: ```sh zenml service-connector register aws-federation-multi --type aws --auth-method=federation-token --auto-configure ``` -- **Registering a GCP Service Connector with Account Impersonation:** +- **GCP Account Impersonation**: ```sh zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl ``` -### Conclusion -Choosing the right authentication method for Service Connectors is crucial for security and usability. Long-lived credentials, temporary tokens, and impersonation strategies provide flexibility while minimizing risks. Always consider the implications of each method on portability and security. +This summary encapsulates the essential best practices and technical details regarding authentication methods in Service Connectors, ensuring that critical information is preserved while maintaining conciseness. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/aws-service-connector.md === -### Summary of AWS Service Connector Documentation +### Summary of AWS Service Connector Documentation for ZenML -**Overview**: The AWS Service Connector in ZenML enables secure connections to AWS resources such as S3 buckets, EKS clusters, and ECR registries. It supports various authentication methods, including AWS secret keys, IAM roles, STS tokens, and implicit authentication. +**Overview**: The AWS Service Connector in ZenML allows seamless integration with AWS resources such as S3 buckets, EKS clusters, and ECR registries. It supports multiple authentication methods including AWS secret keys, IAM roles, STS tokens, and implicit authentication. #### Key Features: - **Authentication Methods**: - - **Implicit Authentication**: Uses environment variables or IAM roles; disabled by default for security. - - **AWS Secret Key**: Long-lived credentials; not recommended for production. - - **AWS STS Token**: Temporary tokens; requires manual renewal. - - **AWS IAM Role**: Assumes roles to generate temporary credentials. + - **Implicit Authentication**: Uses environment variables or IAM roles. Requires enabling via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. + - **AWS Secret Key**: Long-lived credentials for development; not recommended for production. + - **AWS STS Token**: Temporary tokens that require regular updates. + - **AWS IAM Role**: Generates temporary STS credentials by assuming a role. - **AWS Session Token**: Generates temporary tokens for IAM users. - **AWS Federation Token**: Generates tokens for federated users. - **Resource Types**: - - **Generic AWS Resource**: Connects to any AWS service using a boto3 session. - - **S3 Bucket**: Requires specific IAM permissions for S3 operations. - - **EKS Cluster**: Requires permissions to list and describe clusters. - - **ECR Registry**: Requires permissions for ECR operations. + - **Generic AWS Resource**: Connects to any AWS service using a pre-configured boto3 session. 
+  - **S3 Bucket**: Requires specific IAM permissions (e.g., `s3:ListBucket`, `s3:GetObject`).
+  - **EKS Cluster**: Requires permissions like `eks:ListClusters`.
+  - **ECR Registry**: Requires permissions such as `ecr:DescribeRepositories`.

-#### Configuration and Usage:
-- **Prerequisites**: Install ZenML with AWS integration using:
+#### Configuration Commands:
+- **List Connector Types**:
  ```shell
-  pip install "zenml[connectors-aws]"
+  zenml service-connector list-types --type aws
  ```
-  or
+
+- **Register a Service Connector**:
  ```shell
-  zenml integration install aws
+  zenml service-connector register -i --type aws
  ```

-- **Registering a Service Connector**:
-  - List available AWS service connector types:
-    ```shell
-    zenml service-connector list-types --type aws
-    ```
-  - Register a connector with auto-configuration:
-    ```shell
-    AWS_PROFILE=connectors zenml service-connector register aws-implicit --type aws --auth-method implicit --region=us-east-1
-    ```
-
-- **Local Client Configuration**:
-  - Configure local AWS CLI, Kubernetes, and Docker clients with credentials from the AWS Service Connector.
-
-#### Example Commands:
-- **Verify Access**:
+- **Verify Access to Resources**:
  ```shell
-  zenml service-connector verify aws-implicit --resource-type s3-bucket
+  zenml service-connector verify <CONNECTOR_NAME> --resource-type <RESOURCE_TYPE>
  ```
-- **Register Stack Components**:
+
+#### Auto-Configuration:
+- Automatically fetches credentials from the AWS CLI. Use the following command:
  ```shell
-  zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles
-  zenml orchestrator register eks-zenml-zenhacks --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads
-  zenml container-registry register ecr-us-east-1 --flavor aws --uri=715803424590.dkr.ecr.us-east-1.amazonaws.com
+  AWS_PROFILE=<AWS_PROFILE> zenml service-connector register <CONNECTOR_NAME> --type aws --auto-configure
  ```

-- **Run a Simple Pipeline**:
-  ```python
-  from zenml import pipeline, step

+#### Local Client Provisioning:
+- Local AWS CLI, Kubernetes `kubectl`, and Docker CLI can be configured with credentials from the AWS Service Connector. The local AWS CLI profile is named based on the Service Connector UUID.

-  @step
-  def step_1() -> str:
-      return "world"
+#### Example Workflow:
+1. **Register Service Connector**:
+   ```shell
+   AWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure
+   ```

-  @step(enable_cache=False)
-  def step_2(input_one: str, input_two: str) -> None:
-      print(f"{input_one} {input_two}")
+2. **Register and Connect Stack Components**:
+   - **S3 Artifact Store**:
+     ```shell
+     zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles
+     zenml artifact-store connect s3-zenfiles --connector aws-demo-multi
+     ```
+   - **Kubernetes Orchestrator**:
+     ```shell
+     zenml orchestrator register eks-zenml-zenhacks --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads
+     zenml orchestrator connect eks-zenml-zenhacks --connector aws-demo-multi
+     ```
+   - **ECR Container Registry**:
+     ```shell
+     zenml container-registry register ecr-us-east-1 --flavor aws --uri=715803424590.dkr.ecr.us-east-1.amazonaws.com
+     zenml container-registry connect ecr-us-east-1 --connector aws-demo-multi
+     ```

-  @pipeline
-  def my_pipeline():
-      output_step_one = step_1()
-      step_2(input_one="hello", input_two=output_step_one)
+3.
**Run a Simple Pipeline**: + ```python + from zenml import pipeline, step - if __name__ == "__main__": - my_pipeline() - ``` + @step + def step_1() -> str: + return "world" + + @step(enable_cache=False) + def step_2(input_one: str, input_two: str) -> None: + print(f"{input_one} {input_two}") -### Important Notes: -- **MFA Restrictions**: The connector does not work with AWS roles that have Multi-Factor Authentication (MFA) enabled. -- **Security Recommendations**: Use IAM roles or temporary tokens for production environments to minimize security risks. + @pipeline + def my_pipeline(): + output_step_one = step_1() + step_2(input_one="hello", input_two=output_step_one) + + if __name__ == "__main__": + my_pipeline() + ``` -This summary encapsulates the essential information and commands for configuring and utilizing the AWS Service Connector with ZenML, ensuring that critical details are retained for effective use. +This summary captures the essential details of configuring and using the AWS Service Connector with ZenML while maintaining the integrity of the technical information. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/README.md === -### ZenML Service Connectors Overview +### Summary of ZenML Service Connectors Documentation -**Purpose**: ZenML Service Connectors enable seamless integration of ZenML deployments with various cloud providers and infrastructure services, simplifying authentication and authorization processes. +**Overview:** +ZenML facilitates the connection of MLOps pipelines to various cloud providers and infrastructure services (AWS, GCP, Azure, Kubernetes, etc.) through **Service Connectors**. These connectors simplify authentication and authorization, enhancing security and usability. -#### Key Concepts +**Key Points:** -- **MLOps Complexity**: MLOps platforms often require connections to multiple external services (e.g., AWS S3, Kubernetes). Managing credentials and permissions can be complex and error-prone. +- **Service Connectors** abstract the complexity of managing credentials and security best practices, allowing seamless connections to external resources without embedding sensitive information directly in code. -- **Service Connectors**: ZenML abstracts the complexity of authentication and authorization through Service Connectors, which act as intermediaries for secure access to resources. - -#### Use Case Example - -**Connecting to AWS S3**: -1. **Registering a Service Connector**: - - Use the AWS Service Connector to link ZenML to an S3 bucket. - - Command: - ```sh - zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket - ``` +- **Use Case Example:** Connecting ZenML to an AWS S3 bucket using the AWS Service Connector: + - Registering an S3 Artifact Store: + ```sh + zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME + ``` -2. **Connecting an Artifact Store**: - - Create and connect an S3 Artifact Store to the registered Service Connector: - ```sh - zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles - zenml artifact-store connect s3-zenfiles --connector aws-s3 - ``` +- **Alternatives to Service Connectors:** + 1. Embedding credentials directly in Stack Components (not recommended). + 2. Using ZenML secrets to store credentials. + 3. Referencing secrets in configurations (limited support across Stack Components). 
-#### Authentication Methods +- **Drawbacks of Alternatives:** + - Security risks from long-lived credentials. + - Portability issues with Kubernetes and SDK dependencies. + - Lack of validation for credentials during runtime. -Service Connectors support various authentication methods, including: -- **AWS**: Implicit, secret-key, STS-token, IAM-role, session-token, federation-token. -- **Resource Types**: Includes S3 buckets, Kubernetes clusters, Docker registries, etc. +- **Service Connector Benefits:** + - Credentials are validated and managed on the ZenML server. + - Generates short-lived credentials for client access, reducing security risks. + - Multiple Stack Components can share the same Service Connector. -**Example Command to List Service Connector Types**: -```sh -zenml service-connector list-types -``` +- **Finding Resource Types:** + To list available Service Connector types: + ```sh + zenml service-connector list-types + ``` -**Example Command to Describe AWS Service Connector**: -```sh -zenml service-connector describe-type aws -``` +- **Describing a Service Connector Type:** + Example for AWS: + ```sh + zenml service-connector describe-type aws + ``` -#### Security Considerations +- **Registering a Service Connector:** + To register an AWS Service Connector with auto-configuration: + ```sh + zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket + ``` -- **Best Practices**: Avoid embedding credentials directly in Stack Components. Use Service Connectors to manage credentials securely. -- **Temporary Credentials**: Service Connectors can generate short-lived credentials, reducing security risks associated with long-lived credentials. +- **Connecting Stack Components:** + To connect an S3 Artifact Store to a registered Service Connector: + ```sh + zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles + zenml artifact-store connect s3-zenfiles --connector aws-s3 + ``` -#### Example Pipeline +- **Example Pipeline:** + A simple ZenML pipeline can be defined as follows: + ```python + from zenml import step, pipeline -A simple pipeline can be defined and executed as follows: -```python -from zenml import step, pipeline + @step + def simple_step_one() -> str: + return "Hello World!" -@step -def simple_step_one() -> str: - return "Hello World!" + @step + def simple_step_two(msg: str) -> None: + print(msg) -@step -def simple_step_two(msg: str) -> None: - print(msg) + @pipeline + def simple_pipeline() -> None: + message = simple_step_one() + simple_step_two(msg=message) -@pipeline -def simple_pipeline() -> None: - message = simple_step_one() - simple_step_two(msg=message) + if __name__ == "__main__": + simple_pipeline() + ``` -if __name__ == "__main__": - simple_pipeline() -``` -Run the pipeline: -```sh -python run.py -``` +- **Security Best Practices:** ZenML emphasizes using temporary credentials and managing permissions effectively to enhance security. -### Conclusion +**Additional Resources:** +- [Service Connector Guide](./service-connectors-guide.md) +- [Security Best Practices](./best-security-practices.md) +- [AWS Service Connector](./aws-service-connector.md) +- [GCP Service Connector](./gcp-service-connector.md) +- [Azure Service Connector](./azure-service-connector.md) -ZenML Service Connectors streamline the process of connecting to various cloud resources while implementing security best practices. They abstract the complexities of authentication, making it easier to manage MLOps workflows. 
For further details, refer to the complete guide on Service Connectors and security best practices.
+This summary encapsulates the essential details of ZenML's Service Connectors, focusing on their purpose, usage, and benefits while maintaining critical technical information.

==================================================

=== File: docs/book/how-to/infrastructure-deployment/stack-deployment/reference-secrets-in-stack-configuration.md ===

-### Summary: Referencing Secrets in Stack Configuration
+### Summary: Reference Secrets in Stack Configuration

-In ZenML, sensitive information such as passwords or tokens can be securely referenced in stack components using secret references. Instead of hard-coding values, you specify the secret name and key with the syntax: `{{<SECRET_NAME>.<SECRET_KEY>}}`.
+**Overview**: This documentation explains how to securely reference secrets in ZenML stack components, which is essential for handling sensitive information like passwords and tokens.

-#### Example: Registering and Using Secrets
-
-To register a secret and use it in an experiment tracker:
+**Secret Reference Syntax**: Use the format `{{<SECRET_NAME>.<SECRET_KEY>}}` to reference secrets in stack component attributes.

+**Example (CLI)**:
```shell
-# Create a secret named `mlflow_secret` with username and password
+# Create a secret for MLflow authentication
zenml secret create mlflow_secret --username=admin --password=abc123

-# Reference the secret in the experiment tracker
+# Register the experiment tracker with secret references
zenml experiment-tracker register mlflow \
    --flavor=mlflow \
    --tracking_username={{mlflow_secret.username}} \
@@ -16719,18 +16747,14 @@ zenml experiment-tracker register mlflow \
    ...
```

-#### Secret Validation
-
-ZenML validates the existence of referenced secrets and keys before running a pipeline to prevent runtime failures. The validation level can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable:
-
-- **NONE**: Disables validation.
-- **SECRET_EXISTS**: Validates only the existence of secrets.
-- **SECRET_AND_KEY_EXISTS**: (Default) Validates both the secret and key-value pair existence.
-
-#### Fetching Secret Values in Steps
+**Validation**: ZenML validates the existence of referenced secrets before running a pipeline to prevent runtime failures. The validation level can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable:
+- `NONE`: Disables validation.
+- `SECRET_EXISTS`: Validates only the existence of secrets.
+- `SECRET_AND_KEY_EXISTS`: (default) Validates both the existence of secrets and their key-value pairs.

-Secrets can be accessed within steps using the ZenML `Client` API:
+**Fetching Secret Values in Steps**: For centralized secrets management, secrets can be accessed in steps using the ZenML `Client` API.

+**Example (Python)**:
```python
from zenml import step
from zenml.client import Client
@@ -16745,45 +16769,44 @@ def secret_loader() -> None:
    )
```

-### Additional Resources
-
-- For more on managing secrets, refer to the [Interact with secrets](../../../how-to/project-setup-and-management/interact-with-secrets.md) documentation.
+**Additional Resources**: For more details on managing secrets, refer to the [Interact with secrets](../../../how-to/project-setup-and-management/interact-with-secrets.md) documentation.
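Putting the pieces together, here is a minimal sketch, assuming a reachable ZenML server, the `mlflow_secret` created above, and that the validation variable is set in the environment that launches the pipeline:

```python
# A minimal sketch, not a definitive recipe: relax validation to existence
# checks only, then fetch the secret's values via the ZenML Client.
import os
from zenml.client import Client

os.environ["ZENML_SECRET_VALIDATION_LEVEL"] = "SECRET_EXISTS"  # assumption: set before the run

secret = Client().get_secret("mlflow_secret")
print(sorted(secret.secret_values))  # ['password', 'username']
```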
==================================================

=== File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-requirements.md ===

-### Summary of Export Stack Requirements Documentation
+### Export Stack Requirements

-To obtain the `pip` requirements for a specific stack, use the following CLI command:
+To obtain the `pip` requirements for a specific ZenML stack, use the following CLI command:

```bash
-zenml stack export-requirements --output-file stack_requirements.txt
+zenml stack export-requirements <STACK_NAME>
```

-After exporting, install the requirements with:
+For installation, it's recommended to output the requirements to a file and then install them:

```bash
+zenml stack export-requirements <STACK_NAME> --output-file stack_requirements.txt
pip install -r stack_requirements.txt
```

-This process ensures that all necessary dependencies for the specified stack are captured and installed efficiently.
+For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io).

==================================================

=== File: docs/book/how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md ===

-# Custom Stack Component Flavor in ZenML
+# Custom Stack Component Flavor Implementation in ZenML

## Overview
-ZenML allows for the creation of custom stack component flavors to tailor MLOps solutions. This guide explains the concept of flavors and how to implement custom ones.
+ZenML allows for the creation of custom stack component flavors to tailor MLOps solutions. This guide explains component flavors, core abstractions, and the steps to implement a custom flavor.

## Component Flavors
- **Component Type**: A broad category defining functionality (e.g., `artifact_store`).
-- **Flavor**: A specific implementation of a component type (e.g., `local`, `s3`).
+- **Flavor**: Specific implementations of a component type (e.g., `local`, `s3`).

## Core Abstractions
-1. **StackComponent**: Defines core functionality. Example:
+1. **StackComponent**: Defines core functionality.
   ```python
   from zenml.stack import StackComponent

@@ -16797,7 +16820,7 @@ ZenML allows for the creation of custom stack component flavors to tailor MLOps
           pass
   ```

-2. **StackComponentConfig**: Configures a stack component instance, separating configuration from implementation.
+2. **StackComponentConfig**: Configures a stack component instance.
   ```python
   from zenml.stack import StackComponentConfig

@@ -16806,7 +16829,7 @@ ZenML allows for the creation of custom stack component flavors to tailor MLOps
       SUPPORTED_SCHEMES: ClassVar[Set[str]]
   ```

-3. **Flavor**: Combines `StackComponent` and `StackComponentConfig`, defining the flavor's name and type.
+3. **Flavor**: Combines the implementation and configuration classes.
   ```python
   from zenml.stack import Flavor

@@ -16816,8 +16839,12 @@ ZenML allows for the creation of custom stack component flavors to tailor MLOps
           return "local"

       @property
-      def type(self) -> StackComponentType:
-          return StackComponentType.ARTIFACT_STORE
+      def config_class(self) -> Type[LocalArtifactStoreConfig]:
+          return LocalArtifactStoreConfig
+
+      @property
+      def implementation_class(self) -> Type[LocalArtifactStore]:
+          return LocalArtifactStore
   ```

## Implementing a Custom Flavor
@@ -16830,20 +16857,26 @@ from zenml.utils.secret_utils import SecretField

class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig):
    SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"}
    key: Optional[str] = SecretField(default=None)
-    # Additional fields...
+    secret: Optional[str] = SecretField(default=None)
```

-### Step 2: Implement the Artifact Store
-Implement the abstract methods.
+### Step 2: Implement the Class
+Implement the abstract methods using S3.
```python
import s3fs
from zenml.artifact_stores import BaseArtifactStore

class MyS3ArtifactStore(BaseArtifactStore):
+    _filesystem: Optional[s3fs.S3FileSystem] = None
+
    @property
    def filesystem(self) -> s3fs.S3FileSystem:
-        # Logic to initialize S3 filesystem
-        pass
+        if not self._filesystem:
+            self._filesystem = s3fs.S3FileSystem(
+                key=self.config.key,
+                secret=self.config.secret,
+            )
+        return self._filesystem

    def open(self, path, mode="r"):
        return self.filesystem.open(path=path, mode=mode)
@@ -16872,60 +16905,56 @@ class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor):
```

## Registering the Flavor
-Use the ZenML CLI to register your flavor:
+Use the ZenML CLI to register the new flavor.
```shell
-zenml artifact-store flavor register flavors.my_flavor.MyS3ArtifactStoreFlavor
+zenml artifact-store flavor register <path.to.MyS3ArtifactStoreFlavor>
```

## Usage
-Register the artifact store and stack:
+After registration, use the flavor in your stacks:
```shell
-zenml artifact-store register <ARTIFACT_STORE_NAME> --flavor=my_s3_artifact_store --path='some-path'
-zenml stack register <STACK_NAME> --artifact-store <ARTIFACT_STORE_NAME>
+zenml artifact-store register <ARTIFACT_STORE_NAME> \
+    --flavor=my_s3_artifact_store \
+    --path='some-path'
+
+zenml stack register <STACK_NAME> \
+    --artifact-store <ARTIFACT_STORE_NAME>
```

## Best Practices
- Execute `zenml init` consistently.
-- Use the CLI to check required configuration values.
- Test flavors thoroughly before production use.
- Keep code clean and well-documented.
+- Reference existing flavors for development.

## Further Learning
-For specific stack component types, refer to the following:
-- [Orchestrator](../../../component-guide/orchestrators/custom.md)
-- [Artifact Store](../../../component-guide/artifact-stores/custom.md)
-- [Container Registry](../../../component-guide/container-registries/custom.md)
-- [Step Operator](../../../component-guide/step-operators/custom.md)
-- [Model Deployer](../../../component-guide/model-deployers/custom.md)
-- [Feature Store](../../../component-guide/feature-stores/custom.md)
-- [Experiment Tracker](../../../component-guide/experiment-trackers/custom.md)
-- [Alerter](../../../component-guide/alerters/custom.md)
-- [Annotator](../../../component-guide/annotators/custom.md)
-- [Data Validator](../../../component-guide/data-validators/custom.md)
+For specific stack components, refer to the ZenML documentation for detailed guides on implementing custom flavors for various component types.

==================================================

=== File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md ===

-### Summary: Deploy a Cloud Stack with Terraform
+# Deploy a Cloud Stack with Terraform

-ZenML provides a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to simplify the provisioning of cloud resources and integrate them with ZenML Stacks, enhancing machine learning infrastructure deployment.
+ZenML provides [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to facilitate the provisioning of cloud resources and their integration with ZenML Stacks, enhancing machine learning infrastructure deployment.

-#### Pre-requisites
-- A deployed ZenML server instance accessible from the cloud provider.
-- Create a service account and API key for the ZenML server using:
+## Pre-requisites
+- A reachable ZenML server instance (not local).
+- Create a service account and API key for Terraform access:
```shell
zenml service-account create <SERVICE_ACCOUNT_NAME>
```
-- Install Terraform (version 1.9 or higher) and authenticate with your cloud provider's CLI.
+- Required on the machine running Terraform:
+  - [Terraform](https://www.terraform.io/downloads.html) (version 1.9+).
+  - Authenticated with your cloud provider via its CLI/SDK.

-#### Using Terraform Modules
-1. Set up the ZenML Terraform provider with environment variables:
+## Using Terraform Modules
+1. Set up the ZenML provider with your server URL and API key using environment variables:
   ```shell
   export ZENML_SERVER_URL="https://your-zenml-server.com"
   export ZENML_API_KEY="<API_KEY>"
   ```
-2. Create a Terraform configuration file (e.g., `main.tf`):
+2. Create a `main.tf` configuration file:
   ```hcl
   terraform {
     required_providers {
@@ -16949,250 +16978,296 @@ ZenML provides a collection of [Terraform modules](https://registry.terraform.io
     value = module.zenml_stack.zenml_stack_name
   }
   ```
-3. Run the following commands:
+3. Run Terraform commands:
   ```shell
   terraform init
   terraform apply
   ```
4. Confirm changes by typing `yes` when prompted.
-5. After provisioning, use:
+5. Use the created ZenML stack:
   ```shell
   zenml integration install <INTEGRATIONS>
   zenml stack set <STACK_NAME>
   ```

-#### Cloud Provider Specifics
-- **AWS**: Requires AWS CLI configured with `aws configure`. Example configuration includes S3, ECR, and various orchestrators.
-- **GCP**: Requires `gcloud` CLI configured with `gcloud init`. Configuration includes GCS, Google Artifact Registry, and orchestrators.
-- **Azure**: Requires Azure CLI configured with `az login`. Configuration includes Azure Storage and Azure Container Registry.
+## Cloud Provider Specifics
+
+### AWS
+- **Authentication**: Install [AWS CLI](https://aws.amazon.com/cli/) and run `aws configure`.
+- **Example Configuration**:
+  ```hcl
+  provider "aws" {
+    region = "eu-central-1"
+  }
+  ```
+- **Components**: S3 Artifact Store, ECR, and various orchestrators (local, SageMaker, SkyPilot).
+
+### GCP
+- **Authentication**: Install [gcloud CLI](https://cloud.google.com/sdk/gcloud) and run `gcloud init`.
+- **Example Configuration**:
+  ```hcl
+  provider "google" {
+    region  = "europe-west3"
+    project = "my-project"
+  }
+  ```
+- **Components**: GCS Artifact Store, Google Artifact Registry, and orchestrators (local, Vertex AI, SkyPilot, Airflow).
+
+### Azure
+- **Authentication**: Install [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/) and run `az login`.
+- **Example Configuration**:
+  ```hcl
+  provider "azurerm" {
+    features {
+      resource_group {
+        prevent_deletion_if_contains_resources = false
+      }
+    }
+  }
+  ```
+- **Components**: Azure Storage Account, ACR, and orchestrators (local, SkyPilot, AzureML).

-#### Cleanup
-To remove resources provisioned by Terraform, run:
+## Clean Up
+To remove all provisioned resources and the ZenML stack:
```shell
terraform destroy
```

-This command will delete all resources and unregister the ZenML stack from the ZenML server.
+This documentation provides a comprehensive overview of deploying a cloud stack using Terraform with ZenML, including prerequisites, configuration, and cloud provider specifics.
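For completeness, the final stack activation can also be done from Python; this is a minimal sketch, assuming the ZenML `Client` exposes `activate_stack` as the CLI's `zenml stack set` suggests, and that the stack name matches the Terraform output:

```python
# A minimal sketch, under the assumption stated above.
from zenml.client import Client

client = Client()
client.activate_stack("<STACK_NAME>")  # the name printed by `terraform output`
```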
================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md === -# Deploy a Cloud Stack with a Single Click +# Deploy a Cloud Stack with ZenML -In ZenML, a **stack** represents your infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure components and defining them in ZenML, which can be complex and time-consuming. ZenML now offers a **1-click deployment feature** to simplify this process, allowing you to deploy infrastructure on your chosen cloud provider effortlessly. +ZenML allows you to deploy a cloud stack with a single click, simplifying the process of configuring your infrastructure. This feature is particularly useful for remote settings, where deploying infrastructure can be complex and time-consuming. -## Prerequisites -- A deployed instance of ZenML (not local via `zenml login --local`). For setup instructions, refer to the [ZenML deployment guide](../../../getting-started/deploying-zenml/README.md). +## Getting Started + +To use the 1-click deployment tool, you need a deployed instance of ZenML (not a local server). You can set up ZenML by following the [deployment guide](../../../getting-started/deploying-zenml/README.md). -## Using the 1-Click Deployment Tool +### Deployment Options + +You can deploy your stack via the ZenML dashboard or the CLI. + +#### Dashboard Deployment -### Dashboard Method 1. Navigate to the stacks page and click "+ New Stack". 2. Select "New Infrastructure". -3. Choose your cloud provider (AWS, GCP, or Azure) and configure the stack (region, name, etc.). -4. Follow the provider-specific instructions for deployment. +3. Choose your cloud provider (AWS, GCP, or Azure). -#### AWS -- Configure the stack and click "Deploy in AWS". -- Log in to AWS, review the CloudFormation configuration, and create the stack. +**AWS Deployment:** +- Select a region and stack name. +- Click "Deploy in AWS" to redirect to AWS CloudFormation. +- Log in, review, and create the stack. -#### GCP -- Configure the stack and click "Deploy in GCP". -- Log in to GCP Cloud Shell, review the scripts, authenticate, and run the deployment script. +**GCP Deployment:** +- Select a region and stack name. +- Click "Deploy in GCP" to start a Cloud Shell session. +- Review the ZenML GitHub repository and trust it. +- Authenticate with GCP, configure deployment, and run the provided script. -#### Azure -- Configure the stack and click "Deploy in Azure". -- Use Azure Cloud Shell to paste the `main.tf` file, then run `terraform init --upgrade` and `terraform apply`. +**Azure Deployment:** +- Select a location and stack name. +- Click "Deploy in Azure" to start a Cloud Shell session. +- Paste the `main.tf` content and run `terraform init --upgrade` and `terraform apply`. + +#### CLI Deployment + +Use the following command to deploy via CLI: -### CLI Method -To create a remote stack via CLI, use: ```shell zenml stack deploy -p {aws|gcp|azure} ``` -## Deployment Overview -### AWS Resources +### Infrastructure Overview + +**AWS Resources:** - S3 bucket (Artifact Store) -- ECR container registry (Container Registry) +- ECR (Container Registry) - CloudBuild project (Image Builder) -- IAM user/role with necessary permissions. +- IAM roles for SageMaker access -### GCP Resources +**GCP Resources:** - GCS bucket (Artifact Store) - GCP Artifact Registry (Container Registry) - Vertex AI (Orchestrator and Step Operator) -- GCP Service Account with necessary permissions. 
-### Azure Resources +**Azure Resources:** - Azure Resource Group - Azure Storage Account (Artifact Store) - Azure Container Registry (Container Registry) - AzureML Workspace (Orchestrator and Step Operator) -- Azure Service Principal with necessary permissions. -## Conclusion -With the 1-click deployment feature, you can quickly set up a cloud stack and begin running your pipelines in a remote environment. +### Permissions + +**AWS Permissions:** +- S3, ECR, CloudBuild, and SageMaker permissions for the IAM user and role. + +**GCP Permissions:** +- GCS, Artifact Registry, Vertex AI, and Cloud Build permissions for the GCP service account. + +**Azure Permissions:** +- Storage, Container Registry, and AzureML Workspace permissions for the Azure service principal. + +With this streamlined process, you can deploy a cloud stack and start running your pipelines in a remote environment with ease. For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/README.md === -# Managing Stacks & Components in ZenML +### Managing Stacks & Components -## What is a Stack? -A **stack** in ZenML represents the configuration of infrastructure and tooling for executing pipelines. It consists of various components, each handling specific tasks, such as: -- **Container Registry**: For managing container images. -- **Kubernetes Cluster**: As an orchestrator for deploying models. +#### What is a Stack? +A **stack** in the ZenML framework represents the infrastructure and tooling configuration for executing pipelines. It consists of various components, each responsible for specific tasks, such as: +- **Container Registry**: For managing images. +- **Kubernetes Cluster**: Serves as an orchestrator. - **Artifact Store**: For storing artifacts. -- **Experiment Tracker**: E.g., MLflow for tracking experiments. +- **Experiment Tracker**: Like MLflow for tracking experiments. -## Organizing Execution Environments +#### Organizing Execution Environments ZenML allows running pipelines across multiple stacks, facilitating testing in different environments: - **Local Development**: Data scientists can experiment locally. -- **Staging**: Test advanced features in a cloud environment. -- **Production**: Deploy fully tested pipelines in a production-grade stack. +- **Staging**: Testing advanced features in a cloud environment. +- **Production**: Final deployment on a production-grade stack. -### Benefits of Separate Stacks -- Prevents accidental deployment of staging pipelines to production. +**Benefits**: +- Prevents accidental staging deployments to production. - Reduces costs by using less powerful resources in staging. -- Controls access by assigning permissions to specific stacks. - -## Managing Credentials -Most stack components require credentials for infrastructure interaction. ZenML uses **Service Connectors** to manage these credentials securely, abstracting sensitive information from users. - -### Recommended Roles -- Limit Service Connector creation to personnel with direct cloud resource access to minimize credential leaks and facilitate auditing. -- Create a connector for development/staging and another for production to ensure safe resource usage. - -### Recommended Workflow -1. Restrict Service Connector creation to a few trusted individuals. -2. Use a single connector for development/staging. -3. Create a separate connector for production when ready. 
- -## Deploying and Managing Stacks -Deploying MLOps stacks can be complex due to: -- Specific requirements for tools (e.g., Kubeflow needs a Kubernetes cluster). -- Difficulty in setting reasonable infrastructure parameters. -- Potential issues with standard tool installations (e.g., custom service accounts for Vertex AI). -- Need for proper permissions between components. -- Challenges in cleaning up resources post-experimentation. - -### Documentation for Stack Management -ZenML provides guidance on provisioning, configuring, and extending stacks: +- Controls access by restricting permissions to specific stacks. + +#### Managing Credentials +Most stack components require credentials to interact with infrastructure. The recommended method is using **Service Connectors**, which abstract sensitive information and enhance security. + +**Recommended Roles**: +- Limit Service Connector creation to individuals with direct cloud resource access to reduce credential leakage risk and simplify auditing. + +**Recommended Workflow**: +1. A small group creates Service Connectors. +2. Use one connector for development/staging. +3. Create a separate connector for production to avoid accidental resource usage. + +#### Deploying and Managing Stacks +Deploying MLOps stacks involves several challenges: +- Each tool has specific requirements (e.g., a Kubernetes cluster for Kubeflow). +- Setting default infrastructure parameters can be complex. +- Tools may require additional configurations for secure setups. +- Components must have appropriate permissions to communicate. +- Resource cleanup post-experimentation is crucial to avoid unnecessary costs. + +#### Key Documentation Links - [Deploy a Cloud Stack](./deploy-a-cloud-stack.md) - [Register a Cloud Stack](./register-a-cloud-stack.md) -- [Deploy with Terraform](./deploy-a-cloud-stack-with-terraform.md) -- [Export Stack Requirements](./export-stack-requirements.md) -- [Reference Secrets in Configuration](./reference-secrets-in-stack-configuration.md) -- [Implement Custom Stack Components](./implement-a-custom-stack-component.md) +- [Deploy a Cloud Stack with Terraform](./deploy-a-cloud-stack-with-terraform.md) +- [Export and Install Stack Requirements](./export-stack-requirements.md) +- [Reference Secrets in Stack Configuration](./reference-secrets-in-stack-configuration.md) +- [Implement a Custom Stack Component](./implement-a-custom-stack-component.md) -This documentation aims to simplify the process of managing complex stacks, making it easier to run ML pipelines effectively. +This documentation provides essential guidance for provisioning, configuring, and extending stacks and components in ZenML. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md === -### Summary of ZenML Stack Wizard Documentation +### Summary of ZenML Stack Registration Documentation -**Overview:** -ZenML's stack represents the configuration of your infrastructure, typically requiring deployment and definition of stack components. The Stack Wizard simplifies the registration of a ZenML cloud stack using existing infrastructure, alleviating the challenges of remote setups. +**Overview**: ZenML's stack represents infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure and defining components in ZenML. The stack wizard simplifies this by allowing users to register a ZenML cloud stack using existing infrastructure. 
-**Deployment Options:**
-- **1-Click Deployment Tool:** For users without existing infrastructure.
-- **Terraform Modules:** For users wanting more control over resource provisioning.
+**Deployment Options**:
+- **1-Click Deployment Tool**: For users without existing infrastructure.
+- **Terraform Modules**: For users who want to manage infrastructure as code.

-**Using the Stack Wizard:**
-Available via CLI and Dashboard.
+### Using the Stack Wizard

-1. **Dashboard:**
-  - Access the Stack Wizard through the stacks page.
-  - Click "+ New Stack" and select "Use existing Cloud."
-  - Choose your cloud provider and authentication method.
-  - Complete required fields for authentication.
+**Access**:
+- **Dashboard**: Available through the stacks page. Click "+ New Stack" and select "Use existing Cloud".
+- **CLI**: Use the command:
+  ```shell
+  zenml stack register <STACK_NAME> -p {aws|gcp|azure}
+  ```

-2. **CLI:**
-  - Command to register a remote stack:
-    ```shell
-    zenml stack register <STACK_NAME> -p {aws|gcp|azure}
-    ```
-  - Specify a service connector using `-sc <SERVICE_CONNECTOR_NAME>`.
-
-**Authentication Methods:**
-- **AWS:**
-  - Options include AWS Secret Key, STS Token, IAM Role, Session Token, Federation Token.
-- **GCP:**
-  - Options include User Account, Service Account, External Account, OAuth 2.0 Token, Service Account Impersonation.
-- **Azure:**
-  - Options include Service Principal and Access Token.
-
-**Defining Stack Components:**
-You will define three main components:
-- **Artifact Store**
-- **Orchestrator**
-- **Container Registry**
-
-For each component, you can:
-- Reuse existing components.
-- Create new components from available resources.
+**Service Connector**: Required to register a cloud stack. You can use an existing connector or create a new one.
+
+**Auto-Configuration**: The wizard checks for existing credentials in the local environment and offers to use them or configure manually.
+
+### Authentication Methods by Cloud Provider
+
+#### AWS
+- **Options**:
+  - AWS Secret Key
+  - AWS STS Token
+  - AWS IAM Role
+  - AWS Session Token
+  - AWS Federation Token
+- **Required Fields**: Varies by method, typically includes `aws_access_key_id`, `aws_secret_access_key`, and `region`.

+#### GCP
+- **Options**:
+  - GCP User Account
+  - GCP Service Account
+  - GCP External Account
+  - GCP OAuth 2.0 Token
+  - GCP Service Account Impersonation
+- **Required Fields**: Includes `user_account_json` or `service_account_json`, and `project_id`.

-**Final Steps:**
-After defining the components, ZenML will create and register the stack, enabling you to run pipelines in a remote setting.
+#### Azure
+- **Options**:
+  - Azure Service Principal
+  - Azure Access Token
+- **Required Fields**: Includes `client_secret`, `tenant_id`, and `client_id`.

+### Defining Stack Components
+You will define three major components:
+1. **Artifact Store**
+2. **Orchestrator**
+3. **Container Registry**
+
+For each component, you can choose to:
+- Reuse existing components connected via the service connector.
+- Create new components from available resources.

-This documentation provides a streamlined approach to registering a cloud stack in ZenML, ensuring users can efficiently leverage existing infrastructure.
+### Conclusion
+Using the stack wizard, users can efficiently register a cloud stack and begin running pipelines in a remote setting.
================================================== === File: docs/book/how-to/control-logging/enable-or-disable-logs-storing.md === -### ZenML Logging Overview +# ZenML Logging Configuration -ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will log and store. +ZenML captures logs during step execution using a logging handler. Users can utilize the standard Python logging module or print statements, which ZenML will log and store. -#### Example Code +## Example Code ```python import logging from zenml import step @step def my_step() -> None: - logging.warning("`Hello`") # Using logging module - print("World.") # Using print statements + logging.warning("`Hello`") + print("World.") ``` -Logs are stored in the artifact store of your stack and can be viewed on the dashboard. However, logs will not be visible if not connected to a cloud artifact store with a service connector. For more details, refer to the [dashboard logging documentation](./view-logs-on-the-dasbhoard.md). - -### Disabling Log Storage +Logs are stored in the artifact store of your stack and can be viewed on the dashboard. Note: Logs are not visible if not connected to a cloud artifact store with a service connector. For more details, refer to the [log viewing documentation](./view-logs-on-the-dasbhoard.md). -To disable log storage, you can: - -1. Use the `enable_step_logs` parameter in the `@step` or `@pipeline` decorator: +## Disabling Log Storage +1. **Using Decorators**: + - Disable logging for a step: ```python - from zenml import pipeline, step - @step(enable_step_logs=False) def my_step() -> None: ... - + ``` + - Disable logging for an entire pipeline: + ```python @pipeline(enable_step_logs=False) def my_pipeline(): ... ``` -2. Set the environmental variable `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true`, which takes precedence over the above parameters. This variable must be set in the execution environment: +2. **Using Environment Variable**: + Set `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true` in the execution environment. This variable takes precedence over decorator parameters. + + Example: ```python docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() + ``` - my_pipeline = my_pipeline.with_options( - settings={"docker": docker_settings} - ) - ``` - -This summary provides key information on logging in ZenML, including how to enable, disable, and view logs effectively. +This configuration allows users to manage log storage effectively based on their needs. ================================================== @@ -17200,42 +17275,42 @@ This summary provides key information on logging in ZenML, including how to enab # Viewing Logs on the Dashboard -ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will log and store. +ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will store. +## Example Code: ```python import logging from zenml import step @step def my_step() -> None: - logging.warning("`Hello`") # Using the logging module. - print("World.") # Using print statements. + logging.warning("`Hello`") # Use the logging module. + print("World.") # Use print statements as well. 
``` -Logs are stored in the artifact store of your stack, accessible on the dashboard only if the ZenML server has direct access to it. This is true in two scenarios: - +Logs are stored in the artifact store of your stack and can be viewed on the dashboard only if the ZenML server has access to the artifact store. This is true in two scenarios: 1. **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. -2. **Deployed ZenML Server**: Logs from runs on a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. +2. **Deployed ZenML Server**: Logs from a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. -For configuration details, refer to the production guide on [remote artifact stores](../../user-guide/production-guide/remote-storage.md) and [service connectors](../../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md). +For configuration details, refer to the production guide on [remote artifact stores](../../user-guide/production-guide/remote-storage.md). -If logs are stored correctly, they will appear on the dashboard. +If logs are configured correctly, they will display on the dashboard. -**Note**: To disable log storage for performance or storage reasons, follow [these instructions](./enable-or-disable-logs-storing.md). +**Note**: To disable log storage due to performance or storage constraints, follow the provided instructions [here](./enable-or-disable-logs-storing.md). ================================================== === File: docs/book/how-to/control-logging/disable-rich-traceback.md === -### Disabling Rich Traceback Output in ZenML +### How to Disable Rich Traceback Output in ZenML -ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for enhanced traceback output, which aids in debugging. To disable this feature, set the following environment variable: +By default, ZenML utilizes the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for enhanced traceback output, aiding in debugging. To disable this feature, set the following environment variable: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` -This change affects only local pipeline runs. To disable rich tracebacks for remote pipeline runs, set the environment variable in the pipeline's environment: +This change will result in plain text traceback output. Note that this setting only affects local pipeline runs. To disable rich tracebacks for remote pipeline runs, set the environment variable in the pipeline run environment: ```python docker_settings = DockerSettings(environment={"ZENML_ENABLE_RICH_TRACEBACK": "false"}) @@ -17250,50 +17325,51 @@ my_pipeline = my_pipeline.with_options( ) ``` -This ensures that both local and remote pipeline executions will display plain text traceback output. +For further details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/how-to/control-logging/disable-colorful-logging.md === -### How to Disable Colorful Logging in ZenML +### Disable Colorful Logging in ZenML -ZenML uses colorful logging by default for better readability. To disable this feature, set the following environment variable: +By default, ZenML enables colorful logging for enhanced readability. 
To disable this feature, set the following environment variable: ```bash ZENML_LOGGING_COLORS_DISABLED=true ``` -Setting this variable in the client environment (e.g., local machine) will disable colorful logging for remote pipeline runs as well. To disable it only locally while keeping it enabled for remote runs, set the variable in the pipeline run's environment: +Setting this variable in the client environment (e.g., local machine) will disable colorful logging for both local and remote pipeline runs. To disable it only locally while enabling it for remote runs, set the variable in the pipeline run environment: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) +# Add to the decorator @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() -# Alternatively, configure pipeline options +# Or configure pipeline options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) -``` +``` -This allows for flexible logging configurations based on the environment. +For more information, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/how-to/control-logging/set-logging-verbosity.md === -### Setting Logging Verbosity in ZenML +### Summary: Setting Logging Verbosity in ZenML -ZenML defaults to `INFO` logging verbosity. To change it, set the environment variable: +By default, ZenML logging verbosity is set to `INFO`. To change this, set the environment variable: ```bash export ZENML_LOGGING_VERBOSITY=INFO ``` -Available levels: `INFO`, `WARN`, `ERROR`, `CRITICAL`, `DEBUG`. Note that changing this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. To set logging verbosity for remote runs, configure it in the pipeline environment: +Available options include `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. Note that setting this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. To control logging verbosity for remote runs, set the variable in the pipeline's environment: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) @@ -17302,12 +17378,14 @@ docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG" def my_pipeline() -> None: my_step() -# Or configure options +# Alternatively, configure options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` +For further details, refer to the [latest ZenML documentation](https://docs.zenml.io). + ================================================== === File: docs/book/how-to/control-logging/README.md === @@ -17316,11 +17394,11 @@ my_pipeline = my_pipeline.with_options( ZenML generates different types of logs across various environments: -1. **ZenML Server Logs**: Produced by the ZenML server, similar to any FastAPI server. -2. **Client or Runner Logs**: Generated during pipeline execution, capturing events before, during, and after the pipeline run. -3. **Execution Environment Logs**: Created at the orchestrator level during the execution of each pipeline step, typically using Python's `logging` module. +1. **ZenML Server Logs**: Produced by the FastAPI server. +2. **Client or Runner Logs**: Generated during pipeline execution, capturing events before, during, and after a pipeline run. +3. 
**Execution Environment Logs**: Created at the orchestrator level while executing pipeline steps, typically using Python's `logging` module.

-This section outlines how users can manage logging behavior across these environments.
+This section outlines how users can manage logging behavior in these environments.

==================================================

=== File: docs/book/how-to/data-artifact-management/README.md ===

@@ -17331,15 +17409,25 @@ This section outlines how users can manage logging behavior in these environmen

This section outlines the management of data and artifacts within ZenML, focusing on key functionalities and processes.

## Key Concepts
-- **Data Management**: Involves handling datasets throughout the machine learning lifecycle, ensuring data integrity and accessibility.
-- **Artifact Management**: Refers to the storage and retrieval of artifacts generated during model training and evaluation, such as models, metrics, and visualizations.

-## Core Features
-- **Versioning**: ZenML supports version control for datasets and artifacts, allowing users to track changes and revert to previous states.
-- **Storage Backends**: ZenML integrates with various storage solutions (e.g., S3, GCS) for efficient data and artifact storage.
-- **Pipeline Integration**: Data and artifacts are seamlessly integrated into ZenML pipelines, enabling automated workflows.
+- **Data Management**: Involves handling datasets used in machine learning workflows, ensuring they are accessible, versioned, and reproducible.
+
+- **Artifact Management**: Refers to the storage and retrieval of outputs generated during the ML pipeline, such as models, metrics, and visualizations.
+
+## Important Features
+
+1. **Versioning**: ZenML supports version control for datasets and artifacts, allowing users to track changes and revert to previous states.
+
+2. **Storage Backends**: ZenML integrates with various storage solutions (e.g., S3, GCS) for efficient data and artifact storage.
+
+3. **Data Validation**: Ensures the integrity and quality of datasets before processing, using built-in validation checks.
+
+4. **Artifact Tracking**: Automatically logs artifacts produced during pipeline execution, facilitating easy access and reproducibility.
+
+## Code Example
+
+Here’s a simplified example of how to manage data and artifacts in ZenML:

-## Example Code Snippet
```python
from zenml import pipeline

@@ -17348,23 +17436,23 @@ def my_pipeline():
    data = load_data()
    processed_data = preprocess(data)
    model = train_model(processed_data)
-    save_artifact(model, 'model.pkl')
+    save_artifact(model, name="model")
+
+# Execute the pipeline
+my_pipeline()
```

-## Best Practices
-- Regularly version datasets and artifacts to maintain a clear history.
-- Choose appropriate storage backends based on project requirements.
-- Use ZenML's built-in functions for loading and saving data and artifacts to ensure consistency.
+## Conclusion

-This summary provides an overview of data and artifact management in ZenML, emphasizing essential features and practices for effective usage.
+Effective data and artifact management in ZenML enhances reproducibility and collaboration in machine learning projects, ensuring that all components are systematically organized and easily retrievable.
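Artifacts saved this way can be retrieved later, outside the producing pipeline. A brief sketch, reusing the `model` artifact name from the example above:

```python
from zenml.client import Client

client = Client()

# Omitting the version returns the latest artifact version
artifact = client.get_artifact_version("model")
print(f"Loaded artifact version: {artifact.version}")
model = artifact.load()
```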
================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/disabling-visualizations.md === -### Disabling Visualizations +### Disabling Visualizations in ZenML -To disable artifact visualization, set `enable_artifact_visualization` to `False` at the pipeline or step level: +To disable artifact visualization in ZenML, set `enable_artifact_visualization` to `False` at the pipeline or step level: ```python @step(enable_artifact_visualization=False) @@ -17376,7 +17464,7 @@ def my_pipeline(): ... ``` -This configuration prevents visualizations from being generated for the specified step or pipeline. +For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== @@ -17384,8 +17472,8 @@ This configuration prevents visualizations from being generated for the specifie # Creating Custom Visualizations in ZenML -ZenML allows you to associate custom visualizations with artifacts using supported types: - +## Supported Visualization Types +ZenML supports the following visualization types: - **HTML:** Embedded HTML visualizations (e.g., data validation reports). - **Image:** Visualizations of image data (e.g., Pillow images). - **CSV:** Tables (e.g., pandas DataFrame output). @@ -17393,22 +17481,18 @@ ZenML allows you to associate custom visualizations with artifacts using support - **JSON:** JSON strings or objects. ## Methods to Add Custom Visualizations - -1. **Special Return Types:** Cast HTML, Markdown, CSV, or JSON data to specific classes within your step. -2. **Custom Materializers:** Define visualization logic for specific data types by creating a custom materializer. -3. **Custom Return Type Class:** Create a custom return type with a corresponding materializer for other visualizations. +1. **Special Return Types:** Cast HTML, Markdown, CSV, or JSON data to specific classes in your step. +2. **Custom Materializers:** Define visualization logic for data types by building a custom materializer. +3. **Custom Return Type Class:** Create a custom return type with a corresponding materializer. ### Visualization via Special Return Types - -You can return visualizations by casting strings to specific types: - +To visualize data, return the appropriate type from your step: - `zenml.types.HTMLString` - `zenml.types.MarkdownString` - `zenml.types.CSVString` - `zenml.types.JSONString` -#### Example: - +**Example: CSV Visualization** ```python from zenml.types import CSVString @@ -17417,10 +17501,7 @@ def my_step() -> CSVString: return CSVString("a,b,c\n1,2,3") ``` -### Visualizing Matplotlib Plots - -To visualize a matplotlib plot: - +**Example: Matplotlib Visualization** ```python import matplotlib.pyplot as plt import base64 @@ -17452,13 +17533,10 @@ if __name__ == "__main__": ``` ## Visualization via Materializers - -To visualize artifacts of a specific type, override the `save_visualizations()` method in a custom materializer. +To visualize all artifacts of a certain type, override the `save_visualizations()` method in a custom materializer. ### Example: Matplotlib Figure Visualization - 1. **Custom Class:** - ```python from pydantic import BaseModel @@ -17467,7 +17545,6 @@ class MatplotlibVisualization(BaseModel): ``` 2. **Materializer:** - ```python class MatplotlibMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MatplotlibVisualization,) @@ -17480,7 +17557,6 @@ class MatplotlibMaterializer(BaseMaterializer): ``` 3. 
**Step:** - ```python @step def create_matplotlib_visualization() -> MatplotlibVisualization: @@ -17491,13 +17567,12 @@ def create_matplotlib_visualization() -> MatplotlibVisualization: ``` ### Workflow - 1. The step creates and returns a `MatplotlibVisualization`. -2. ZenML invokes the `MatplotlibMaterializer` to save visualizations. +2. ZenML identifies the `MatplotlibMaterializer` and calls `save_visualizations()`. 3. The figure is saved as a PNG in the artifact store. 4. The dashboard displays the PNG when viewing the artifact. -For further examples, refer to the Hugging Face datasets materializer in the ZenML repository. +For further examples, refer to the Hugging Face datasets materializer in the ZenML GitHub repository. ================================================== @@ -17505,44 +17580,51 @@ For further examples, refer to the Hugging Face datasets materializer in the Zen ### Types of Visualizations in ZenML -ZenML automatically saves and displays visualizations for various data types in the ZenML dashboard. These visualizations can also be viewed in Jupyter notebooks using the `artifact.visualize()` method. +ZenML automatically saves visualizations for various data types, accessible via the ZenML dashboard or Jupyter notebooks using the `artifact.visualize()` method. -**Examples of Default Visualizations:** -- **Pandas DataFrame**: Statistical representation saved as a PNG image. -- **Drift Detection Reports**: Generated by tools like Evidently, Great Expectations, and whylogs. -- **Hugging Face Datasets Viewer**: Embedded as an HTML iframe. +**Default Visualizations Include:** +- Statistical representation of a Pandas DataFrame as a PNG image. +- Drift detection reports from: + - Evidently + - Great Expectations + - Whylogs +- A Hugging Face datasets viewer embedded as an HTML iframe. -Visualizations enhance data analysis and monitoring within ZenML workflows. +For more details, refer to the [latest ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/visualizations-in-dashboard.md === -### Displaying Visualizations in the Dashboard +### Summary: Displaying Visualizations in the ZenML Dashboard To display visualizations on the ZenML dashboard, the following steps are necessary: -#### Configuring a Service Connector -Visualizations are typically stored with artifacts in the artifact store. To view these visualizations, the ZenML server must have access to the artifact store. For detailed guidance, refer to the [service connector documentation](../../infrastructure-deployment/auth-management/README.md). For a specific example, see the [AWS S3 artifact store documentation](../../../component-guide/artifact-stores/s3.md). +1. **Service Connector Configuration**: + - Visualizations are stored in the artifact store. The ZenML server must have access to this store to display visualizations. + - Refer to the [service connector documentation](../../infrastructure-deployment/auth-management/README.md) for configuration details. For AWS S3, see the [S3 artifact store documentation](../../../component-guide/artifact-stores/s3.md). + +2. **Local Artifact Store Limitation**: + - When using the default/local artifact store with a deployed ZenML, the server cannot access local files, resulting in visualizations not being displayed. Use a service connector with a remote artifact store for visualization access. 
-**Important Note:** When using the default/local artifact store with a deployed ZenML, the server cannot access local files, resulting in visualizations not being displayed. To view visualizations, use a service connector with a remote artifact store. +3. **Artifact Store Configuration**: + - If visualizations from a pipeline run are missing, check if the ZenML server has the necessary dependencies and permissions for the artifact store. More details can be found in the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores). -#### Configuring Artifact Stores -If visualizations from a pipeline run do not appear on the dashboard, the ZenML server may lack the necessary dependencies or permissions for the artifact store. Refer to the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores) for further details. +For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/README.md === ---- icon: chart-scatter description: Configuring ZenML for data visualizations in the dashboard. --- +### ZenML Data Visualization Configuration -# Visualize Artifacts +**Overview**: This documentation covers how to configure ZenML for displaying data visualizations in its dashboard. -ZenML allows easy association of visualizations with data and artifacts. +**Visualizing Artifacts**: ZenML allows for easy association of visualizations with data artifacts. -![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) +![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) -For more information, refer to the ZenML documentation. +For more details, refer to the ZenML dashboard documentation. ================================================== @@ -17550,13 +17632,13 @@ For more information, refer to the ZenML documentation. ### Summary of ZenML Artifact Registration Documentation -This documentation outlines how to register external data as ZenML artifacts for future use, focusing on both folders and files, as well as managing model checkpoints during training with PyTorch Lightning. +This documentation explains how to register external data as ZenML artifacts for future use, focusing on registering folders, files, and model checkpoints from PyTorch Lightning training runs. #### Registering Existing Data 1. **Register Existing Folder as a ZenML Artifact**: - - You can register an entire folder containing data without needing to read or materialize the data. - - Example code: + - You can register an entire folder containing data as a ZenML artifact. 
+
   ```python
   import os
   from uuid import uuid4
   from pathlib import Path
   from zenml.client import Client
   from zenml import register_artifact

   prefix = Client().active_stack.artifact_store.path
-    folder_path = os.path.join(prefix, f"my_test_folder_{uuid4()}")
-    os.mkdir(folder_path)
-    with open(os.path.join(folder_path, "test_file.txt"), "w") as f:
+    preexisting_folder = os.path.join(prefix, f"my_test_folder_{uuid4()}")
+    os.mkdir(preexisting_folder)
+    with open(os.path.join(preexisting_folder, "test_file.txt"), "w") as f:
        f.write("test")

-    register_artifact(folder_path, name="my_folder_artifact")
-    loaded_path = Client().get_artifact_version("my_folder_artifact").load()
-    assert os.path.isdir(loaded_path)
+    register_artifact(folder_or_file_uri=preexisting_folder, name="my_folder_artifact")
+    temp_artifact_folder_path = Client().get_artifact_version(name_id_or_prefix="my_folder_artifact").load()
   ```

2. **Register Existing File as a ZenML Artifact**:
-   - Similar to folders, individual files can also be registered.
-   - Example code:
-   ```python
-   import os
-   from uuid import uuid4
-   from pathlib import Path
-   from zenml.client import Client
-   from zenml import register_artifact
-
-   prefix = Client().active_stack.artifact_store.path
-   file_path = os.path.join(prefix, f"my_test_file_{uuid4()}.txt")
-   with open(file_path, "w") as f:
-       f.write("test")
+   - You can also register a single file.

-    register_artifact(file_path, name="my_file_artifact")
-    loaded_file_path = Client().get_artifact_version("my_file_artifact").load()
+   ```python
+   preexisting_file = os.path.join(preexisting_folder, "test_file.txt")
+   register_artifact(folder_or_file_uri=preexisting_file, name="my_file_artifact")
+   temp_artifact_file_path = Client().get_artifact_version(name_id_or_prefix="my_file_artifact").load()
   ```

-#### Registering Checkpoints in PyTorch Lightning
+#### Registering Checkpoints from PyTorch Lightning

1. **Register All Checkpoints**:
-   - During a training run, you can register all checkpoints created by PyTorch Lightning.
-   - Example code:
+   - Use the `ModelCheckpoint` callback to register all checkpoints during a training run.
+
   ```python
-    from zenml.client import Client
-    from zenml import register_artifact
    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint
-    from uuid import uuid4

-    prefix = Client().active_stack.artifact_store.path
-    default_root_dir = os.path.join(prefix, uuid4().hex)
-    trainer = Trainer(default_root_dir=default_root_dir, callbacks=[ModelCheckpoint()])
+    trainer = Trainer(callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1)])
    trainer.fit(model)
-    register_artifact(default_root_dir, name="all_my_model_checkpoints")
+    register_artifact(trainer.default_root_dir, name="all_my_model_checkpoints")
   ```

2. **Register Checkpoints as Separate Artifact Versions**:
-   - Extend the `ModelCheckpoint` callback to register each checkpoint as a separate artifact version.
-   - Example code:
-   ```python
-   from zenml import register_artifact
-   from pytorch_lightning.callbacks import ModelCheckpoint
+   - Extend the `ModelCheckpoint` to register each checkpoint as a separate artifact version.

+   ```python
   class ZenMLModelCheckpoint(ModelCheckpoint):
-       def __init__(self, artifact_name, *args, **kwargs):
-           # Initialization code...
- super().__init__(*args, **kwargs) - def on_train_epoch_end(self, trainer, pl_module): super().on_train_epoch_end(trainer, pl_module) register_artifact(os.path.join(self.dirpath, self.filename_format.format(epoch=trainer.current_epoch)), self.artifact_name) ``` -#### Example Pipeline with PyTorch Lightning +#### Full Example: PyTorch Lightning Training with Checkpoint Linkage + +The documentation provides a complete example of a pipeline that trains a PyTorch Lightning model and registers checkpoints as artifacts. -- A complete example of a pipeline that trains a model and registers checkpoints: ```python +@step +def train_model(model: LightningModule, train_loader: DataLoader, epochs: int = 1, artifact_name: str = "my_model_ckpts"): + chkpt_cb = ZenMLModelCheckpoint(artifact_name=artifact_name) + trainer = Trainer(default_root_dir=chkpt_cb.default_root_dir, callbacks=[chkpt_cb]) + trainer.fit(model, train_loader) + @pipeline(model=Model(name="LightningDemo")) def train_pipeline(artifact_name: str = "my_model_ckpts"): train_loader = get_data() @@ -17642,7 +17708,7 @@ def train_pipeline(artifact_name: str = "my_model_ckpts"): predict(get_pipeline_context().model.get_artifact(artifact_name), after=["train_model"]) ``` -This documentation provides essential details on registering artifacts in ZenML, particularly focusing on external data and model checkpoints, ensuring that users can effectively manage their machine learning artifacts. +This example demonstrates how to integrate data loading, model training, and artifact registration into a cohesive pipeline using ZenML and PyTorch Lightning. ================================================== @@ -17650,20 +17716,20 @@ This documentation provides essential details on registering artifacts in ZenML, ### Structuring an MLOps Project -An MLOps project typically consists of multiple pipelines, including: - +#### Overview +An MLOps project typically consists of multiple pipelines, such as: - **Feature Engineering Pipeline**: Prepares raw data for training. - **Training Pipeline**: Trains models using data from the feature engineering pipeline. -- **Inference Pipeline**: Runs batch predictions on trained models. -- **Deployment Pipeline**: Deploys trained models to production. +- **Inference Pipeline**: Runs predictions on trained models. +- **Deployment Pipeline**: Deploys models to production. -The structure of these pipelines can vary based on project requirements, and sharing artifacts (models, metadata) between them is essential. +The structure of these pipelines can vary based on project requirements, and sharing artifacts (models, datasets, metadata) between them is essential. #### Artifact Exchange Patterns -**Pattern 1: Artifact Exchange via `Client`** - -In this pattern, the ZenML Client facilitates the exchange of artifacts between pipelines. For example, a feature engineering pipeline produces datasets that are used in a training pipeline. +**Pattern 1: Artifact Exchange through `Client`** +- Use the ZenML Client to transfer artifacts between pipelines. +- Example: A feature engineering pipeline produces datasets that are fetched in the training pipeline. 
```python from zenml import pipeline @@ -17671,7 +17737,6 @@ from zenml.client import Client @pipeline def feature_engineering_pipeline(): - dataset = load_data() train_data, test_data = prepare_data() @pipeline @@ -17683,11 +17748,11 @@ def training_pipeline(): model_evaluator(model, sklearn_classifier) ``` -*Note*: Artifacts are referenced, not materialized in memory during the pipeline function. +*Note*: The artifacts are references, not materialized in memory during the pipeline function. -**Pattern 2: Artifact Exchange via `Model`** - -This pattern uses the ZenML Model as a reference point. For instance, a training pipeline (`train_and_promote`) produces models that are promoted based on accuracy. An inference pipeline (`do_predictions`) retrieves the latest promoted model without needing to know specific artifact IDs. +**Pattern 2: Artifact Exchange through a `Model`** +- Use ZenML Models as references for artifacts. +- Example: A training pipeline (`train_and_promote`) produces models, which are then used in an inference pipeline (`do_predictions`). ```python from zenml import step, get_step_context @@ -17699,7 +17764,9 @@ def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return predictions ``` -If caching is enabled, it may lead to unexpected results. To avoid this, either disable caching or resolve artifacts at the pipeline level. +*Note*: Disabling caching is crucial to avoid unexpected behavior. + +Alternatively, resolve the artifact at the pipeline level: ```python from zenml import get_pipeline_context, pipeline, Model @@ -17721,31 +17788,30 @@ if __name__ == "__main__": do_predictions() ``` -Both artifact exchange patterns are valid; the choice depends on user preference and specific use cases. +Both approaches are valid; the choice depends on user preference. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/datasets.md === -# Custom Dataset Classes and Complex Data Flows in ZenML +# Summary of Custom Dataset Classes and Complex Data Flows in ZenML ## Overview -ZenML allows the encapsulation of data loading, processing, and saving logic through custom Dataset classes, facilitating the management of various data sources and complex data structures. +ZenML allows for the creation of custom Dataset classes to manage complex data flows and various data sources in machine learning projects. This documentation covers the implementation of these classes and their associated Materializers. ## Custom Dataset Classes -Custom Dataset classes are useful for: +Custom Dataset classes encapsulate data loading, processing, and saving logic. They are useful for: 1. Handling multiple data sources (e.g., CSV, databases). 2. Managing complex data structures. 3. Implementing custom data processing. 
### Example Implementation
-A base `Dataset` class can be implemented for different data sources, such as CSV and BigQuery:
+A base `Dataset` class is defined, with specific implementations for CSV and BigQuery datasets:

```python
from abc import ABC, abstractmethod
import pandas as pd
from google.cloud import bigquery
from typing import Optional

class Dataset(ABC):
    @abstractmethod
@@ -17753,253 +17819,216 @@ class Dataset(ABC):
        pass

class CSVDataset(Dataset):
-    def __init__(self, data_path: str, df: Optional[pd.DataFrame] = None):
+    def __init__(self, data_path: str):
        self.data_path = data_path
-        self.df = df or pd.read_csv(self.data_path)
+        self.df = None
+
+    def read_data(self) -> pd.DataFrame:
+        if self.df is None:
+            self.df = pd.read_csv(self.data_path)
+        return self.df

class BigQueryDataset(Dataset):
    def __init__(self, table_id: str, project: Optional[str] = None):
        self.table_id = table_id
-        self.client = bigquery.Client(project=project)
+        self.project = project
+        self.client = bigquery.Client(project=self.project)

    def read_data(self) -> pd.DataFrame:
-        return self.client.query(f"SELECT * FROM `{self.table_id}`").to_dataframe()
-
+        query = f"SELECT * FROM `{self.table_id}`"
+        return self.client.query(query).to_dataframe()
+
    def write_data(self) -> None:
        job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
        self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config).result()
```

## Custom Materializers
-Materializers handle serialization/deserialization of artifacts. Custom Materializers are crucial for custom Dataset classes.
+Materializers handle the serialization and deserialization of artifacts. Custom Materializers are necessary for custom Dataset classes:

-### Example Materializers
+### CSVDataset Materializer
```python
-from zenml.materializers import BaseMaterializer
-from zenml.io import fileio
-import json
-import tempfile
-
class CSVDatasetMaterializer(BaseMaterializer):
+    ASSOCIATED_TYPES = (CSVDataset,)
+
    def load(self, data_type: Type[CSVDataset]) -> CSVDataset:
-        with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file:
-            with fileio.open(os.path.join(self.uri, "data.csv"), "rb") as source_file:
-                temp_file.write(source_file.read())
-            return CSVDataset(temp_file.name)
+        # Load CSV data
+        dataset = CSVDataset(temp_path)
+        dataset.read_data()
+        return dataset

    def save(self, dataset: CSVDataset) -> None:
-        df = dataset.read_data()
-        with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file:
-            df.to_csv(temp_file.name, index=False)
-            with open(temp_file.name, "rb") as source_file:
-                with fileio.open(os.path.join(self.uri, "data.csv"), "wb") as target_file:
-                    target_file.write(source_file.read())
+        # Save DataFrame to CSV
+        dataset.df.to_csv(temp_path, index=False)
+```

+### BigQueryDataset Materializer
+```python
class BigQueryDatasetMaterializer(BaseMaterializer):
+    ASSOCIATED_TYPES = (BigQueryDataset,)
+
    def load(self, data_type: Type[BigQueryDataset]) -> BigQueryDataset:
-        with fileio.open(os.path.join(self.uri, "metadata.json"), "r") as f:
-            metadata = json.load(f)
-        return BigQueryDataset(table_id=metadata["table_id"], project=metadata["project"])
+        # Load metadata and create dataset
+        return BigQueryDataset(metadata["table_id"], metadata["project"])

    def save(self, bq_dataset: BigQueryDataset) -> None:
-        metadata = {"table_id": bq_dataset.table_id, "project": bq_dataset.project}
-        with fileio.open(os.path.join(self.uri, "metadata.json"), "w") as f:
-            json.dump(metadata, f)
+        # Save metadata
+ json.dump(metadata, f) bq_dataset.write_data() ``` ## Pipeline Management -Design flexible pipelines to handle multiple data sources: +Designing flexible pipelines is essential when working with multiple data sources. Below is an example of an ETL pipeline: -### Example Pipeline ```python -from zenml import step, pipeline - -@step(output_materializer=CSVDatasetMaterializer) -def extract_data_local(data_path: str) -> CSVDataset: - return CSVDataset(data_path) - -@step(output_materializer=BigQueryDatasetMaterializer) -def extract_data_remote(table_id: str) -> BigQueryDataset: - return BigQueryDataset(table_id) - -@step -def transform(dataset: Dataset) -> pd.DataFrame: - return dataset.read_data().copy() # Apply transformations - @pipeline -def etl_pipeline(mode: str): +def etl_pipeline(mode: str = "develop"): raw_data = extract_data_local() if mode == "develop" else extract_data_remote(table_id="project.dataset.raw_table") - return transform(raw_data) + transformed_data = transform(raw_data) ``` ## Best Practices -1. **Common Base Class**: Use the `Dataset` base class for consistent handling of data sources. -2. **Specialized Steps**: Create separate steps for loading different datasets. -3. **Flexible Pipelines**: Use configuration parameters or conditional logic to adapt to data sources. -4. **Modular Design**: Create steps for specific tasks to promote code reuse and maintenance. +1. **Common Base Class**: Use a base `Dataset` class to standardize handling of data sources. +2. **Specialized Steps**: Implement distinct steps for loading different datasets while keeping processing steps uniform. +3. **Flexible Pipelines**: Use configuration parameters to adapt pipelines to various data sources. +4. **Modular Design**: Create steps that focus on specific tasks to promote code reuse and maintainability. -By following these practices, ZenML pipelines can efficiently manage complex data flows and multiple data sources, ensuring flexibility as project requirements evolve. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). +By following these practices, ZenML pipelines can efficiently manage complex data flows and adapt to evolving project requirements. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md === -# Scaling Strategies for Big Data in ZenML +### Summary: Managing Big Data with ZenML -## Overview -This documentation outlines strategies for managing large datasets in ZenML, focusing on scaling pipelines as data sizes increase. It categorizes datasets into three sizes: small, medium, and large, and provides specific strategies for each category. +This documentation outlines strategies for scaling ZenML pipelines to handle large datasets in machine learning projects. It categorizes datasets by size and provides specific techniques for each category. -## Dataset Size Thresholds +#### Dataset Size Thresholds 1. **Small datasets (up to a few GB)**: Handled in-memory with pandas. 2. **Medium datasets (up to tens of GB)**: Require chunking or out-of-core processing. 3. **Large datasets (hundreds of GB or more)**: Necessitate distributed processing frameworks. -## Strategies for Small Datasets -1. **Efficient Data Formats**: Use formats like Parquet instead of CSV. 
-   ```python
-   import pyarrow.parquet as pq
-
-   class ParquetDataset(Dataset):
-       def read_data(self) -> pd.DataFrame:
-           return pq.read_table(self.data_path).to_pandas()
-   ```
-
-2. **Data Sampling**: Implement sampling methods.
-   ```python
-   class SampleableDataset(Dataset):
-       def sample_data(self, fraction: float = 0.1) -> pd.DataFrame:
-           return self.read_data().sample(frac=fraction)
-   ```
+#### Strategies for Small Datasets
+- **Efficient Data Formats**: Use formats like Parquet instead of CSV.
+  ```python
+  import pyarrow.parquet as pq

+  class ParquetDataset(Dataset):
+      def read_data(self) -> pd.DataFrame:
+          return pq.read_table(self.data_path).to_pandas()
+  ```

-3. **Optimize Pandas Operations**: Use efficient operations to minimize memory usage.
-   ```python
-   @step
-   def optimize_processing(df: pd.DataFrame) -> pd.DataFrame:
-       df['new_column'] = df['column1'] + df['column2']
-       return df
-   ```
+- **Data Sampling**: Implement sampling methods.
+  ```python
+  class SampleableDataset(Dataset):
+      def sample_data(self, fraction: float = 0.1) -> pd.DataFrame:
+          return self.read_data().sample(frac=fraction)
+  ```

-## Strategies for Medium Datasets
-### Chunking for CSV Datasets
-Implement chunking to process large files.
-```python
-class ChunkedCSVDataset(Dataset):
-    def read_data(self):
-        for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size):
-            yield chunk
+- **Optimize Pandas Operations**: Use efficient operations to minimize memory usage.
+  ```python
+  @step
+  def optimize_processing(df: pd.DataFrame) -> pd.DataFrame:
+      df['new_column'] = df['column1'] + df['column2']
+      return df
+  ```

-@step
-def process_chunked_csv(dataset: ChunkedCSVDataset) -> pd.DataFrame:
-    return pd.concat([process_chunk(chunk) for chunk in dataset.read_data()])
-```
+#### Strategies for Medium Datasets
+- **Chunking for CSV Datasets**: Process large files in chunks.
+  ```python
+  class ChunkedCSVDataset(Dataset):
+      def read_data(self):
+          for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size):
+              yield chunk
+  ```

-### Leveraging Data Warehouses
-Utilize data warehouses like Google BigQuery for distributed processing.
-```python
-@step
-def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset:
-    client = bigquery.Client()
-    query = f"SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1"
-    query_job = client.query(query)
-    query_job.result()
-    return BigQueryDataset(table_id=result_table_id)
-```
+- **Data Warehouses**: Use services like Google BigQuery for distributed processing.
+  ```python
+  @step
+  def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset:
+      client = bigquery.Client()
+      query = f"SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1"
+      query_job = client.query(query)
+      return BigQueryDataset(table_id=result_table_id)
+  ```

-## Strategies for Large Datasets
-### Using Distributed Computing Frameworks
-1. **Apache Spark**:
-   ```python
-   from pyspark.sql import SparkSession
+#### Strategies for Very Large Datasets
+- **Distributed Computing Frameworks**: Use Apache Spark, Ray, or Dask for large datasets.
+
+  **Apache Spark Example**:
+  ```python
+  from pyspark.sql import SparkSession

-   @step
-   def process_with_spark(input_data: str) -> None:
-       spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate()
-       df = spark.read.csv(input_data, header=True)
-       df.groupBy("column1").agg({"column2": "mean"}).write.csv("output_path")
-       spark.stop()
-   ```
+  @step
+  def process_with_spark(input_data: str) -> None:
+      spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate()
+      df = spark.read.format("csv").option("header", "true").load(input_data)
+      df.groupBy("column1").agg({"column2": "mean"}).write.csv("output_path")
+      spark.stop()
+  ```

-2.
**Ray**: - ```python - import ray + @step + def process_with_spark(input_data: str) -> None: + spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate() + df = spark.read.format("csv").option("header", "true").load(input_data) + df.groupBy("column1").agg({"column2": "mean"}).write.csv("output_path") + spark.stop() + ``` - @step - def process_with_ray(input_data: str) -> None: - ray.init() - results = ray.get([process_partition.remote(part) for part in split_data(load_data(input_data))]) - save_results(combine_results(results), "output_path") - ray.shutdown() - ``` + **Ray Example**: + ```python + import ray -3. **Dask**: - ```python - import dask.dataframe as dd + @step + def process_with_ray(input_data: str) -> None: + ray.init() + results = ray.get([process_partition.remote(part) for part in partitions]) + ray.shutdown() + ``` - @step - def create_dask_dataframe(): - return dd.from_pandas(pd.DataFrame({'A': range(1000), 'B': range(1000, 2000)}), npartitions=4) + **Dask Example**: + ```python + import dask.dataframe as dd - @step - def process_dask_dataframe(df: dd.DataFrame) -> dd.DataFrame: - return df.map_partitions(lambda x: x ** 2) - ``` + @step + def create_dask_dataframe(): + return dd.from_pandas(pd.DataFrame({'A': range(1000)}), npartitions=4) + ``` -4. **Numba**: - ```python - from numba import jit + **Numba Example**: + ```python + from numba import jit - @jit(nopython=True) - def numba_function(x): - return x * x + 2 * x - 1 + @jit(nopython=True) + def numba_function(x): + return x * x + 2 * x - 1 + ``` - @step - def apply_numba_function(data: np.ndarray) -> np.ndarray: - return numba_function(data) - ``` +#### Important Considerations +- Ensure the execution environment has necessary frameworks installed. +- Manage resources effectively, especially with distributed frameworks. +- Implement error handling and cleanup for Spark and Ray. +- Consider data I/O methods for large datasets. -## Important Considerations -- **Environment Setup**: Ensure necessary frameworks are installed. -- **Resource Management**: Coordinate resource allocation between ZenML and the frameworks. -- **Error Handling**: Implement error handling for resource cleanup. -- **Data I/O**: Use intermediate storage for large datasets. -- **Scaling**: Ensure infrastructure supports the scale of computation. +#### Choosing the Right Scaling Strategy +- Start with simpler strategies for smaller datasets and scale up. +- Match processing complexity with the appropriate tools. +- Assess infrastructure and team expertise when selecting technologies. -## Choosing the Right Strategy -Consider dataset size, processing complexity, infrastructure, update frequency, and team expertise when selecting a scaling strategy. Start with simpler solutions and scale as needed. ZenML's architecture allows for evolving data processing strategies as projects grow. +By following these strategies, ZenML pipelines can efficiently manage datasets of any size, ensuring scalable machine learning workflows. For more details on custom Dataset classes, refer to the [custom dataset classes](datasets.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md === -### Summary of Unmaterialized Artifacts in ZenML - -**Overview**: In ZenML, a pipeline is structured around steps that read and write artifacts to an artifact store. Materializers manage the serialization and deserialization of these artifacts. 
However, there are scenarios where you may want to skip materialization and use a reference to the artifact instead.
-
-**Warning**: Skipping materialization can lead to unintended consequences for downstream tasks that depend on materialized artifacts. Use this approach only when necessary.
-
-### Skipping Materialization
-
-To use an unmaterialized artifact, import `UnmaterializedArtifact` and specify it as the type in your step:
-
-```python
-from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
-from zenml import step
+### Summary of ZenML Documentation on Unmaterialized Artifacts

-@step
-def my_step(my_artifact: UnmaterializedArtifact):
-    pass
-```
+**Overview:**
+ZenML pipelines are data-centric, where steps are connected through their input and output artifacts. Materializers manage how artifacts are serialized and deserialized during this process. However, there are cases when you may want to skip materialization and use a reference to an artifact instead.

-### Example Pipeline
+**Warning:** Skipping materialization can lead to unintended consequences for downstream tasks. Use this feature only when necessary.

-The following example demonstrates how to implement unmaterialized artifacts in a pipeline:
+**Unmaterialized Artifacts:**
+An unmaterialized artifact is represented by `zenml.materializers.UnmaterializedArtifact`, which includes a `uri` property pointing to the artifact's storage path. To use an unmaterialized artifact, specify `UnmaterializedArtifact` as the type in the step definition.

+**Example Code:**
```python
+from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
+from zenml import step, pipeline
from typing_extensions import Annotated
from typing import Dict, List, Tuple
-from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
-from zenml import pipeline, step

@step
def step_1() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]:
@@ -18027,35 +18056,36 @@ def example_pipeline():
example_pipeline()
```

-### Key Points
-- **UnmaterializedArtifact**: Allows access to the artifact's unique storage path via the `uri` property.
-- **Pipeline Structure**: Steps can produce artifacts that are either materialized or unmaterialized, enabling flexibility in how artifacts are consumed.
+**Pipeline Structure:**
+- `s1` and `s2` produce identical artifacts.
+- `s3` consumes materialized artifacts.
+- `s4` consumes unmaterialized artifacts, accessing their paths directly via `dict_.uri` and `list_.uri`.

-For further details, refer to the ZenML documentation on [data artifact management](../../../how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md).
+For further details on using `UnmaterializedArtifact`, refer to the ZenML documentation.

==================================================

=== File: docs/book/how-to/data-artifact-management/complex-usecases/README.md ===

-It appears that the documentation text you intended to provide is missing. Please share the text you would like summarized, and I will be happy to assist you!
+*(No content is available for this README; see the individual pages in this section for complex artifact-management use cases.)*
================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md === -### Summary of ZenML Artifact Management +# Summary of ZenML Artifact Loading Documentation -ZenML allows pipeline steps to consume artifacts produced by other steps or external sources. For external artifacts, use `ExternalArtifact`. When exchanging data between ZenML pipelines, late materialization is essential, as pipelines are compiled before execution, fixing input parameters. This allows for passing artifacts that may not yet exist. +ZenML pipelines typically consume artifacts produced by one another, but external data may also need to be integrated. For artifacts from non-ZenML sources, use `ExternalArtifact`. However, for exchanging data between ZenML pipelines, late materialization is essential. This allows passing artifacts that do not yet exist at the time of pipeline compilation. -#### Key Use Cases: +### Key Use Cases for Artifact Exchange: 1. Grouping data products using ZenML Models. -2. Using the ZenML Client to manage artifacts. +2. Utilizing the ZenML Client to manage artifacts. -**Recommendation:** Use models for grouping and accessing artifacts across pipelines. +**Recommendation:** Use models for grouping and accessing artifacts across pipelines. For loading artifacts from a ZenML Model, refer to the relevant documentation. -### Exchanging Artifacts with Client Methods +## Using Client Methods for Artifact Exchange -If not using the Model Control Plane, you can still exchange data with late materialization. Below is an example of a modified `do_predictions` pipeline: +If not using the Model Control Plane, late materialization can still facilitate data exchange. Below is a simplified version of the `do_predictions` pipeline code: ```python from typing import Annotated @@ -18071,7 +18101,6 @@ def predict(model1: ClassifierMixin, model2: ClassifierMixin, model1_metric: flo @step def load_data() -> pd.DataFrame: - # load inference data ... @pipeline @@ -18080,6 +18109,7 @@ def do_predictions(): metric_42 = model_42.run_metadata["MSE"].value model_latest = Client().get_artifact_version("trained_model") metric_latest = model_latest.run_metadata["MSE"].value + inference_data = load_data() predict(model1=model_42, model2=model_latest, model1_metric=metric_42, model2_metric=metric_latest, data=inference_data) @@ -18087,25 +18117,24 @@ if __name__ == "__main__": do_predictions() ``` -### Explanation of Code: -- The `predict` step compares two models based on their MSE metrics and returns predictions from the better-performing model. +### Key Points: +- The `predict` step compares model performance using MSE metrics. - The `load_data` step is responsible for loading inference data. -- The `do_predictions` pipeline retrieves specific and latest artifact versions, ensuring that the latest data is used at execution time, not compilation time. - -This approach ensures that the most relevant model is used for predictions, enhancing the pipeline's effectiveness. +- Artifact retrieval via `Client().get_artifact_version()` is executed at runtime, ensuring the latest versions are used during execution rather than at compilation. 
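The `ExternalArtifact` mentioned above deserves a concrete illustration: it wraps in-memory values from outside ZenML so they can be passed into a step like any tracked artifact. A minimal sketch, assuming a pandas DataFrame created on the client:

```python
import pandas as pd
from zenml import ExternalArtifact, pipeline, step

@step
def count_rows(df: pd.DataFrame) -> int:
    return len(df)

@pipeline
def external_data_pipeline():
    # The DataFrame is uploaded to the artifact store and versioned
    data = ExternalArtifact(value=pd.DataFrame({"a": [1, 2, 3]}))
    count_rows(data)

if __name__ == "__main__":
    external_data_pipeline()
```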
================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md === -### Summary +### Summary of ZenML Documentation on Fetching Artifacts -Artifacts in ZenML can be accessed not only from direct upstream steps but also from other pipelines. This is facilitated by the ZenML client, which allows fetching of metadata and artifacts. +This documentation explains how to retrieve arbitrary artifacts in a ZenML step, emphasizing that not all artifacts must originate from direct upstream steps. #### Key Points: -- Artifacts can be retrieved using the ZenML client, enabling access to artifacts from various sources. -- The following code snippet demonstrates how to fetch an artifact within a step: +- Artifacts can be fetched from other upstream steps or different pipelines using the ZenML client. +- The metadata guide provides additional context on how to log and track metadata. +#### Example Code: ```python from zenml.client import Client from zenml import step @@ -18113,99 +18142,100 @@ from zenml import step @step def my_step(): client = Client() + # Fetch an artifact by name and version output = client.get_artifact_version("my_dataset", "my_version") accuracy = output.run_metadata["accuracy"].value ``` -- This method is beneficial for utilizing pre-existing artifacts stored in the artifact store, regardless of their origin. +This method allows access to pre-existing artifacts stored in the artifact store, facilitating the use of artifacts from various sources. #### Additional Resources: -- For more on managing artifacts, refer to the [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md) documentation, which includes information on the `ExternalArtifact` type and inter-step artifact passing. +- [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md): Information on the `ExternalArtifact` type and artifact transfer between steps. + +For the latest documentation, visit the [ZenML documentation site](https://docs.zenml.io). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md === -### Summary of Using Materializers in ZenML +### Summary: Using Materializers to Pass Custom Data Types in ZenML -**Overview**: ZenML pipelines are data-centric, where steps read and write artifacts to an artifact store. **Materializers** manage how artifacts are serialized, deserialized, and stored. +#### Overview +ZenML pipelines are structured around data-centric principles, where steps are connected through their inputs and outputs. **Materializers** are key components that manage how artifacts are serialized and deserialized when stored in the artifact store. 
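As a concrete illustration of materialization at work, the sketch below passes a pandas DataFrame between steps; ZenML serializes and deserializes it automatically using its built-in pandas support (listed next), with no custom code:

```python
import pandas as pd
from zenml import pipeline, step

@step
def make_frame() -> pd.DataFrame:
    # Stored in the artifact store by the built-in pandas materializer
    return pd.DataFrame({"x": [1, 2], "y": [3, 4]})

@step
def describe_frame(df: pd.DataFrame) -> None:
    print(df.describe())

@pipeline
def materializer_demo():
    describe_frame(make_frame())

if __name__ == "__main__":
    materializer_demo()
```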
#### Built-In Materializers -ZenML provides several built-in materializers for common data types, automatically enabled: -- **BuiltInMaterializer**: Handles `bool`, `float`, `int`, `str`, `None` - Storage: `.json` -- **BytesMaterializer**: Handles `bytes` - Storage: `.txt` -- **BuiltInContainerMaterializer**: Handles `dict`, `list`, `set`, `tuple` - Storage: Directory -- **NumpyMaterializer**: Handles `np.ndarray` - Storage: `.npy` -- **PandasMaterializer**: Handles `pd.DataFrame`, `pd.Series` - Storage: `.csv` or `.gzip` -- **PydanticMaterializer**: Handles `pydantic.BaseModel` - Storage: `.json` -- **ServiceMaterializer**: Handles `zenml.services.service.BaseService` - Storage: `.json` -- **StructuredStringMaterializer**: Handles `zenml.types.CSVString`, `HTMLString`, `MarkdownString` - Storage: `.csv`, `.html`, `.md` - -**Warning**: The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions. +ZenML includes several built-in materializers for common data types, which automatically handle serialization without user intervention. Here are some examples: + +| Materializer | Handled Data Types | Storage Format | +|--------------|---------------------|----------------| +| `BuiltInMaterializer` | `bool`, `float`, `int`, `str`, `None` | `.json` | +| `BytesMaterializer` | `bytes` | `.txt` | +| `NumpyMaterializer` | `np.ndarray` | `.npy` | +| `PandasMaterializer` | `pd.DataFrame`, `pd.Series` | `.csv` or `.gzip` | +| `PydanticMaterializer` | `pydantic.BaseModel` | `.json` | #### Integration Materializers -ZenML also offers integration-specific materializers activated by installing respective integrations. Examples include: -- **BentoMaterializer** for `bentoml.Bento` - Storage: `.bento` -- **DeepchecksResultMaterializer** for `deepchecks.CheckResult` - Storage: `.json` -- **LightGBMBoosterMaterializer** for `lgbm.Booster` - Storage: `.txt` +ZenML also provides integration-specific materializers that can be activated by installing the respective integration. Examples include: -**Note**: For Docker-based orchestrators, specify required integrations in `DockerSettings`. +| Integration | Materializer | Handled Data Types | Storage Format | +|-------------|--------------|---------------------|----------------| +| `bentoml` | `BentoMaterializer` | `bentoml.Bento` | `.bento` | +| `huggingface` | `HFDatasetMaterializer` | `datasets.Dataset` | Directory | #### Custom Materializers -To create a custom materializer: -1. **Define the Materializer**: - - Subclass `BaseMaterializer`. - - Set `ASSOCIATED_TYPES` and `ASSOCIATED_ARTIFACT_TYPE`. - - ```python - class MyMaterializer(BaseMaterializer): - ASSOCIATED_TYPES = (MyObj,) - ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA - - def load(self, data_type: Type[MyObj]) -> MyObj: - # Load logic - ... - - def save(self, my_obj: MyObj) -> None: - # Save logic - ... - ``` - -2. **Configure Steps**: - - Use `@step(output_materializers=MyMaterializer)` or `.configure()` method. - - ```python - @step(output_materializers=MyMaterializer) - def my_first_step() -> MyObj: - return MyObj("my_object") - ``` - -3. **Global Materializer**: To apply a custom materializer globally, register it in the materializer registry. 
-
-   ```python
-   materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer)
-   ```
-
-#### Example Implementation
-Here’s a simple example of using a custom materializer:
+To use custom data types, you can define a custom materializer by subclassing `BaseMaterializer`. You need to specify `ASSOCIATED_TYPES` and implement `load()` and `save()` methods.
+
+**Example:**
 ```python
+import os
+from typing import Type
+
+from zenml.materializers.base_materializer import BaseMaterializer
+from zenml.enums import ArtifactType
+
 class MyObj:
     def __init__(self, name: str):
         self.name = name
 
 class MyMaterializer(BaseMaterializer):
     ASSOCIATED_TYPES = (MyObj,)
     ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA
 
     def load(self, data_type: Type[MyObj]) -> MyObj:
-        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f:
-            return MyObj(f.read())
+        # Read the file from within this artifact version's URI.
+        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f:
+            name = f.read()
+        return MyObj(name=name)
 
     def save(self, my_obj: MyObj) -> None:
+        # Write the file into this artifact version's URI.
         with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f:
             f.write(my_obj.name)
+```
+
+#### Configuring Steps and Pipelines
+You can configure which materializer to use at the step level:
+```python
+@step(output_materializers=MyMaterializer)
+def my_first_step() -> MyObj:
+    return MyObj("my_object")
+```
+
+For multiple outputs, use a dictionary that maps each named output to its materializer:
+```python
+from typing import Annotated, Tuple
+
+@step(output_materializers={"1": MyMaterializer1, "2": MyMaterializer2})
+def my_first_step() -> Tuple[Annotated[MyObj1, "1"], Annotated[MyObj2, "2"]]:
+    return MyObj1(), MyObj2()
+```
+
+You can also define materializers globally for all pipelines:
+```python
+materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer)
+```
+
+#### Implementing Materializer Methods
+- **`load(data_type)`**: Reads data from the artifact store.
+- **`save(data)`**: Writes data to the artifact store.
+- **`save_visualizations(data)`**: Optionally saves visualizations.
+- **`extract_metadata(data)`**: Optionally extracts metadata.
+
+#### Example Pipeline
+Here’s how to implement a simple pipeline using a custom materializer:
+```python
 @step
 def my_first_step() -> MyObj:
     return MyObj("my_object")
@@ -18222,36 +18252,28 @@ def first_pipeline():
     first_pipeline()
 ```
-This example demonstrates the creation of a custom materializer for a class `MyObj`, allowing it to be passed between pipeline steps without warnings.
-
-#### Additional Features
-- **Visualizations**: Override `save_visualizations()` to save visual representations of artifacts.
-- **Metadata Extraction**: Override `extract_metadata()` to track custom metadata alongside artifacts.
-
-#### Important Considerations
-- Ensure compatibility with custom artifact stores.
-- Use `get_temporary_directory()` for temporary directories in materializers.
-- Disable artifact visualization or metadata extraction if not needed.
-
-This summary provides a concise overview of using materializers in ZenML, covering built-in options, custom implementations, and integration specifics.
+#### Conclusion
+Custom materializers in ZenML allow for robust handling of custom data types, ensuring that artifacts are serialized and deserialized correctly across steps in a pipeline. Proper implementation of these materializers enhances the reliability and flexibility of data workflows in ZenML.
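+
+As a supplement to the optional methods listed above, a hypothetical `extract_metadata()` override for the `MyMaterializer` example might look like the following sketch (the `name_length` metadata key is purely illustrative, not part of the ZenML API):
+
+```python
+from typing import Dict
+
+from zenml.metadata.metadata_types import MetadataType
+
+class MyMaterializer(BaseMaterializer):
+    # ASSOCIATED_TYPES, load() and save() as defined above ...
+
+    def extract_metadata(self, my_obj: MyObj) -> Dict[str, MetadataType]:
+        # Returned entries are tracked alongside the stored artifact version.
+        return {"name_length": len(my_obj.name)}
+```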
==================================================

=== File: docs/book/how-to/data-artifact-management/handle-data-artifacts/delete-an-artifact.md ===

-### Deleting Artifacts in ZenML
+### Summary: Deleting Artifacts in ZenML

-Artifacts cannot be deleted directly to avoid breaking the ZenML database with dangling references. However, you can delete artifacts that are no longer referenced by any pipeline runs using the following command:
+Direct deletion of artifacts is not supported, because it could leave dangling references in the ZenML database. However, you can remove artifacts that are no longer referenced by any pipeline runs using the following command:

 ```shell
 zenml artifact prune
 ```

-By default, this command removes artifacts from the underlying artifact store and deletes their database entries. You can modify this behavior with the following flags:
+By default, this command deletes both the artifacts in the underlying artifact store and their database entries. You can modify this behavior with the following flags:

 - `--only-artifact`: Deletes only the artifact.
-- `--only-metadata`: Deletes only the database entry.
+- `--only-metadata`: Deletes only the database entry (metadata).

-If you encounter errors while pruning (often due to locally stored artifacts that no longer exist), you can use the `--ignore-errors` flag to continue the process, though warning messages will still be displayed.
+If you encounter errors while pruning artifacts (often because locally stored artifacts no longer exist on disk), you can bypass them by adding the `--ignore-errors` flag. Note that warning messages will still be displayed during the process.
+
+For the latest documentation, refer to the [ZenML documentation](https://docs.zenml.io).

==================================================

@@ -18259,38 +18281,39 @@ If you encounter errors while pruning artifacts (often due to locally stored artifacts tha

### ZenML Data Storage Overview

-ZenML integrates data versioning and lineage tracking into its core functionality, automatically managing artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them via a dashboard, enhancing insights, reproducibility, and reliability in machine learning workflows.
+ZenML integrates data versioning and lineage tracking into its core functionality, automatically managing artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them through a dashboard, enhancing insights and reproducibility in machine learning workflows.

#### Artifact Creation and Caching
+- On each pipeline run, ZenML checks every step for changes in its inputs, outputs, parameters, or configuration.
+- If a step is new or modified, ZenML creates a new, uniquely identified directory structure for its outputs in the artifact store. If it is unchanged, ZenML may cache the step, saving time and computational resources and allowing focus on experimentation.
+- ZenML enables tracing artifacts back to their origins, ensuring reproducibility and helping identify potential bottlenecks in pipelines.

-- Each pipeline run checks for changes in inputs, outputs, parameters, or configurations.
-- New or modified steps create a unique directory in the [Artifact Store](../../../component-guide/artifact-stores/artifact-stores.md) with a unique ID.
-- Unchanged steps may be cached, saving time and computational resources, allowing focus on new configurations.
-- ZenML provides traceability of artifacts back to their origins, crucial for identifying issues and ensuring reproducibility, especially in team environments.
- -For detailed artifact management, see [artifact versioning and configuration](../../../user-guide/starter-guide/manage-artifacts.md). +For artifact management details, refer to the [artifact versioning and configuration documentation](../../../user-guide/starter-guide/manage-artifacts.md). #### Materializers +Materializers handle the serialization and deserialization of artifacts, ensuring consistent storage and retrieval. They store data in unique directories within the artifact store and can be customized for specific data types or storage systems. -Materializers are essential for artifact management, handling serialization and deserialization of artifacts. They store data in unique directories within the artifact store. +- ZenML provides built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. +- Users can create custom materializers by extending the `BaseMaterializer` class. -- ZenML includes built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. -- Custom materializers can be created by extending the `BaseMaterializer` class. +**Important Note:** The built-in `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions and potential security risks from executing arbitrary code. For robust applications, consider developing a custom materializer. -**Warning:** The built-in [CloudpickleMaterializer](https://sdkdocs.zenml.io/latest/core_code_docs/core-materializers/#zenml.materializers.cloudpickle_materializer.CloudpickleMaterializer) is not production-ready due to potential compatibility issues across Python versions and security risks from malicious file uploads. Custom materializers are recommended for robust use cases. +#### Example +When a pipeline runs, ZenML uses materializers to save and load artifacts via the ZenML `fileio` system, facilitating artifact caching and lineage tracking. An example of a default materializer (the `numpy` materializer) can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). -During pipeline execution, ZenML employs materializers to manage artifact saving and loading through the `fileio` system, facilitating artifact caching and lineage tracking. An example of a default materializer, the `numpy` materializer, can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). +For further details on materializers, refer to the [materializers documentation](handle-custom-data-types.md). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md === -### Summary of Documentation on Using `Annotated` for Multiple Outputs - -The `Annotated` type in ZenML allows you to return multiple outputs from a step with specific names, enhancing artifact retrieval and dashboard readability. +### Summary of ZenML Documentation on Returning Multiple Outputs -#### Code Example +**Purpose:** The `Annotated` type in ZenML allows users to return multiple outputs from a step, each with a specific name for easy retrieval and improved dashboard readability. +**Key Points:** +- **Functionality:** Using `Annotated`, you can name outputs of a step, aiding in artifact retrieval and enhancing pipeline dashboard clarity. 
+- **Example Code:**
 ```python
 from typing import Annotated, Tuple
 import pandas as pd
@@ -18308,91 +18331,68 @@ def clean_data(data: pd.DataFrame) -> Tuple[
     y = data["target"]
     return train_test_split(x, y, test_size=0.2, random_state=42)
 ```
+- **Functionality Breakdown:**
+  - The `clean_data` function takes a `DataFrame` and returns a tuple containing training and testing sets for features and target variables, split with scikit-learn's `train_test_split`.
+  - Outputs are annotated with names, making it easier to identify them in the pipeline.

-#### Key Points
-- The `clean_data` function takes a pandas DataFrame and returns a tuple containing training and testing datasets for features (`x_train`, `x_test`) and target (`y_train`, `y_test`).
-- Each output is annotated with a name using `Annotated`, facilitating easy identification and retrieval in the pipeline.
-- The function uses `train_test_split` from scikit-learn to perform the data split.
-
-This approach improves the clarity of the pipeline's dashboard by displaying named outputs.
+This concise usage of `Annotated` enhances the organization and usability of outputs in ZenML pipelines.

==================================================

=== File: docs/book/how-to/data-artifact-management/handle-data-artifacts/tagging.md ===

-### Organizing Data with Tags in ZenML
+### ZenML Tagging Documentation Summary

-ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow efficiency and asset discoverability. This guide covers how to assign tags to artifacts and models.
+**Overview**: ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow and discoverability.

#### Assigning Tags to Artifacts
+- **Python SDK**: Use the `tags` property of `ArtifactConfig` to assign tags to artifacts.
+  ```python
+  from typing import Annotated
+
+  import pandas as pd
+  from zenml import step, ArtifactConfig

-To tag artifact versions in a step or pipeline, use the `tags` property of `ArtifactConfig`:
-
-**Python SDK Example:**
-```python
-from zenml import step, ArtifactConfig
-
-@step
-def training_data_loader() -> (
-    Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])]
-):
-    ...
-```
-
-**CLI Example:**
-```shell
-# Tag the artifact
-zenml artifacts update iris_dataset -t sklearn
-
-# Tag the artifact version
-zenml artifacts versions update iris_dataset raw_2023 -t sklearn
-```

+  @step
+  def training_data_loader() -> (
+      Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])]
+  ):
+      ...
+  ```
-Tags like `sklearn` and `pre-training` will be assigned to all artifacts created by the step. ZenML Pro users can tag artifacts directly in the cloud dashboard.
+- **CLI**: Use the `zenml artifacts` command to tag artifacts.
+  ```shell
+  zenml artifacts update iris_dataset -t sklearn
+  zenml artifacts versions update iris_dataset raw_2023 -t sklearn
+  ```

#### Assigning Tags to Models
+- Tags (plain strings) can be attached to a model version when it is created.
+  ```python
+  from zenml import pipeline, Model

-Models can also be tagged for semantic organization. Tags can be added as key-value pairs when creating a model version:
-
-**Python SDK Example:**
-```python
-from zenml.models import Model
-
-# Define tags
-tags = ["experiment", "v1", "classification-task"]
-
-# Create a model version with tags
-model = Model(name="iris_classifier", version="1.0.0", tags=tags)
-
-@pipeline(model=model)
-def my_pipeline(...):
-    ...
-``` - -You can also create or register models and their versions with tags: + tags = ["experiment", "v1", "classification-task"] + model = Model(name="iris_classifier", version="1.0.0", tags=tags) -```python -from zenml.client import Client + @pipeline(model=model) + def my_pipeline(...): + ... + ``` -# Create a new model with tags -Client().create_model(name="iris_logistic_regression", tags=["classification", "iris-dataset"]) +- **Creating/Updating Models**: Use the Client to create or register models with tags. + ```python + from zenml.client import Client -# Create a new model version with tags -Client().create_model_version(model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"]) -``` + Client().create_model(name="iris_logistic_regression", tags=["classification", "iris-dataset"]) + Client().create_model_version(model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"]) + ``` -**CLI Example for Existing Models:** -```shell -# Tag an existing model -zenml model update iris_logistic_regression --tag "classification" +- **CLI for Existing Models**: Use the following commands to tag existing models and versions. + ```shell + zenml model update iris_logistic_regression --tag "classification" + zenml model version update iris_logistic_regression 2 --tag "experiment3" + ``` -# Tag a specific model version -zenml model version update iris_logistic_regression 2 --tag "experiment3" -``` +**Note**: During pipeline runs, models may be created implicitly without tags. Users can manage tags via the SDK or ZenML Pro UI. -#### Important Notes -- During a pipeline run, a model may be implicitly created without tags from the `Model` class. -- Tags can be manipulated using the SDK or the ZenML Pro UI. +For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== @@ -18400,19 +18400,19 @@ zenml model version update iris_logistic_regression 2 --tag "experiment3" ### ZenML Artifact Naming Overview -In ZenML pipelines, managing artifact names is crucial for tracking outputs, especially when steps are reused with different inputs. ZenML allows both static and dynamic naming strategies for output artifacts, utilizing type annotations to determine names. Artifacts with the same name receive incremented version numbers. +ZenML allows for flexible naming of artifacts in pipelines, which is crucial when reusing steps with different inputs. The naming convention can be static or dynamic, and ZenML uses type annotations to determine artifact names. Artifacts with the same name receive incremented version numbers. #### Naming Strategies -1. **Static Naming**: Defined as string literals. +1. **Static Naming**: Defined directly as string literals. ```python @step def static_single() -> Annotated[str, "static_output_name"]: return "null" ``` -2. **Dynamic Naming**: Generated at runtime using string templates. - - **Standard Placeholders**: +2. **Dynamic Naming**: + - **Using Standard Placeholders**: - `{date}`: Current date (e.g., `2024_11_18`) - `{time}`: Current time (e.g., `11_07_09_326492`) ```python @@ -18421,7 +18421,7 @@ In ZenML pipelines, managing artifact names is crucial for tracking outputs, esp return "null" ``` - - **Custom Placeholders**: Defined via the `substitutions` parameter. + - **Using Custom Placeholders**: Defined via the `substitutions` parameter. 
     ```python
     @step(substitutions={"custom_placeholder": "some_substitute"})
     def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]:
         return "null"
@@ -18441,9 +18441,9 @@ In ZenML pipelines, managing artifact names is crucial for tracking outputs, esp
     ```
 
    **Substitution Scope**:
-    - Set in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`.
+    - Can be set in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`.
 
-3. **Multiple Output Handling**: Combine naming strategies for multiple artifacts.
+3. **Multiple Output Handling**: Combine naming options for multiple artifacts.
   ```python
   @step
   def mixed_tuple() -> Tuple[
@@ -18454,8 +18454,9 @@ In ZenML pipelines, managing artifact names is crucial for tracking outputs, esp
   ```
 
 #### Caching Behavior
+When caching is enabled, artifact names remain consistent across runs.
 
-When caching is enabled, artifact names remain consistent across runs, even for dynamic names. Example:
+Example:
 ```python
+from zenml.models import PipelineRunResponse
+
 @step(substitutions={"custom_placeholder": "resolution"})
 def demo() -> Tuple[
@@ -18469,15 +18470,14 @@ def my_pipeline():
     demo()
 
 if __name__ == "__main__":
-    run_without_cache = my_pipeline.with_options(enable_cache=False)()
-    run_with_cache = my_pipeline.with_options(enable_cache=True)()
+    run_without_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=False)()
+    run_with_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=True)()
 ```
-Both runs will produce consistent output artifact names, demonstrating the caching mechanism.
+Both runs will yield consistent output artifact names, demonstrating the caching functionality.
 
 ### Summary
-
-ZenML provides flexible artifact naming strategies through static and dynamic methods, leveraging placeholders and substitutions. This allows for clear tracking of artifacts across multiple pipeline runs, especially when caching is utilized.
+ZenML provides a robust framework for naming artifacts in pipelines, allowing for both static and dynamic strategies, including the use of placeholders and custom substitutions. Properly managing artifact names is essential for tracking outputs effectively, especially in complex workflows.
- -This structure illustrates how ZenML manages data through pipelines, ensuring efficient processing and tracking. +#### Key Points: +- **Steps**: `load_data` retrieves training data and labels; `train_model` processes this data to train a model. +- **Pipeline**: `simple_ml_pipeline` connects the two steps, passing output from `load_data` to `train_model`, illustrating data flow in ZenML. ================================================== @@ -18521,16 +18519,16 @@ This structure illustrates how ZenML manages data through pipelines, ensuring ef # ZenML Core Concepts Summary -**ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines. It facilitates collaboration among data scientists, ML engineers, and MLOps developers. The core concepts of ZenML can be categorized into three main threads: +**ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines, enabling collaboration among data scientists, ML engineers, and MLOps developers. The framework is structured around three main threads: -1. **Development**: Focuses on designing machine learning workflows. -2. **Execution**: Involves utilizing MLOps tooling and infrastructure during workflow execution. -3. **Management**: Pertains to establishing and maintaining production-grade solutions. +1. **Development**: Focuses on designing ML workflows. +2. **Execution**: Utilizes MLOps tooling and infrastructure during workflow execution. +3. **Management**: Establishes and maintains efficient, production-grade solutions. ## 1. Development ### Steps -- Functions decorated with `@step`. +- Functions marked with the `@step` decorator. - Example: ```python @step @@ -18539,7 +18537,7 @@ This structure illustrates how ZenML manages data through pipelines, ensuring ef ``` ### Pipelines -- A pipeline consists of a series of steps, defined using decorators or classes. +- Composed of steps, defined using decorators or classes. - Example: ```python @pipeline @@ -18549,61 +18547,58 @@ This structure illustrates how ZenML manages data through pipelines, ensuring ef ``` ### Artifacts -- Represent data passing through steps, automatically tracked and stored by ZenML. +- Represent data inputs and outputs, tracked by ZenML in an artifact store. +- Produced by steps and stored after execution. ### Models -- Represent outputs of training processes and associated metadata. +- Represent outputs of training processes, including weights and metadata. ### Materializers -- Define serialization/deserialization of artifacts. Custom materializers can be created if needed. +- Define serialization/deserialization of artifacts using the `BaseMaterializer` class. ### Parameters & Settings -- Steps can take parameters, which are stored by ZenML for reproducibility. +- Steps accept parameters, which are stored by ZenML for reproducibility. ### Model Versions -- A model can have multiple versions, linking all entities to a centralized view. +- A model can have multiple versions, linking all entities for centralized management. ## 2. Execution ### Stacks & Components -- A **Stack** is a collection of components (e.g., orchestrators, artifact stores) for executing pipelines. - -### Orchestrator -- Coordinates the execution of steps in a pipeline. ZenML provides a local orchestrator for experimentation. - -### Artifact Store -- Houses all data passing through the pipeline, enabling features like data caching. 
+- **Stacks**: Collections of components (orchestrators, artifact stores) for pipeline execution. +- **Orchestrator**: Coordinates step execution in a pipeline. +- **Artifact Store**: Houses and tracks data passing through the pipeline. ### Flavor -- Base abstractions for stack components that can be customized for specific use cases. +- Base abstractions for stack components, allowing users to create custom solutions. ### Stack Switching -- Easily switch between local and remote stacks with a single CLI command. +- Easily switch between local and cloud stacks using a CLI command. ## 3. Management ### ZenML Server -- Required for using remote stack components and managing ZenML entities (pipelines, models, etc.). +- Required for remote stack components and managing ZenML entities (pipelines, models). ### Server Deployment - Options include ZenML Pro SaaS or self-hosted deployment. ### Metadata Tracking -- The ZenML Server tracks metadata around pipeline runs, aiding in troubleshooting. +- ZenML Server tracks metadata for pipeline runs, aiding in troubleshooting. ### Secrets Management -- Centralized secrets store for sensitive data, configurable with various backends (e.g., AWS Secrets Manager). +- Centralized store for sensitive data, configurable with various backends (AWS, GCP, Azure). ### Collaboration -- Facilitates teamwork among diverse roles in MLOps, allowing sharing of pipelines and resources. +- Facilitates teamwork among diverse roles in MLOps through shared resources. ### Dashboard -- Visual interface to manage pipelines, stacks, and components, enhancing collaboration. +- Visual interface for managing pipelines, stacks, and components, enhancing collaboration. ### VS Code Extension -- Allows interaction with ZenML stacks and runs directly from the VS Code editor. +- Allows interaction with ZenML stacks and resources directly from the VS Code editor. -This summary encapsulates the essential technical information and key points of ZenML's core concepts, enabling effective understanding and interaction with the framework. +This summary encapsulates the essential concepts and functionalities of ZenML, enabling users to understand its structure and capabilities in MLOps. ================================================== @@ -18611,38 +18606,37 @@ This summary encapsulates the essential technical information and key points of # ZenML System Architecture Overview -## Deployment Options -ZenML can be deployed in various configurations: self-hosted OSS, SaaS, or self-hosted ZenML Pro. +This guide outlines the deployment options for ZenML, including ZenML OSS (self-hosted), ZenML Pro (SaaS or self-hosted), and their respective components. -### ZenML OSS (Self-hosted) +## ZenML OSS (Self-hosted) - **ZenML OSS Server**: A FastAPI application managing metadata for pipelines, artifacts, and stacks. -- **OSS Metadata Store**: Stores all tenant metadata, including ML tracking and versioning. +- **OSS Metadata Store**: Stores all tenant metadata, including ML tracking and versioning information. - **OSS Dashboard**: A ReactJS app displaying pipelines and runs. -- **Secrets Store**: Secure storage for credentials needed to access infrastructure services. +- **Secrets Store**: Secure storage for credentials needed to access infrastructure services. In ZenML Pro, this is enhanced with additional functionality. -ZenML OSS is free under the Apache 2.0 license. For deployment details, refer to the [deployment guide](./deploying-zenml/README.md). +ZenML OSS is available under the Apache 2.0 license. 
For deployment instructions, refer to the [deployment guide](./deploying-zenml/README.md). -### ZenML Pro (SaaS or Self-hosted) -- **ZenML Pro Control Plane**: Central management for all tenants. -- **Pro Dashboard**: Enhanced dashboard functionality over OSS. -- **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management. -- **Pro Add-ons**: Python modules for added features. -- **Identity Provider**: Supports flexible authentication, including integration with Auth0 for cloud deployments and custom OIDC for self-hosted setups. +## ZenML Pro (SaaS or Self-hosted) +ZenML Pro enhances OSS with additional components: +- **ZenML Pro Control Plane**: Central entity managing all tenants. +- **Pro Dashboard**: Enhanced dashboard with additional features. +- **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management data. +- **Pro Add-ons**: Python modules for extended functionality. +- **Identity Provider**: Supports flexible authentication via Auth0 for SaaS or custom OIDC for self-hosted setups. -ZenML Pro offers various hosting options, from SaaS to fully air-gapped deployments. +ZenML Pro can be deployed on various infrastructures, from SaaS to air-gapped environments. -#### ZenML Pro SaaS Architecture -- All ZenML services are hosted by ZenML, with customer secrets managed by the Pro Control Plane. -- ML metadata is stored on ZenML infrastructure, while actual ML data artifacts are stored in the customer's cloud. -- A hybrid option allows customers to store secrets on their side while connecting to the managed ZenML server. +### ZenML Pro SaaS Architecture +- All ZenML services are hosted by ZenML, with customer secrets managed by the ZenML Pro Control Plane. +- ML metadata is stored on ZenML infrastructure, while actual ML data artifacts reside on customer cloud storage. +- A hybrid option allows customers to store secrets on their side while connecting to the ZenML server. -#### ZenML Pro Self-Hosted Architecture -- All services, data, and secrets are deployed on the customer's cloud for maximum security. -- For setup inquiries, contact ZenML support. +### ZenML Pro Self-Hosted Architecture +- All services, data, and secrets are deployed on the customer's cloud, suitable for air-gapped deployments. -For further details on ZenML Pro concepts, refer to the [core concepts guide](../getting-started/zenml-pro/core-concepts.md). +For more information on core concepts for ZenML Pro, refer to the [core concepts guide](../getting-started/zenml-pro/core-concepts.md). -For a free trial of ZenML Pro, sign up [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). +Interested in ZenML Pro? [Sign up](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) for a free 14-day trial. ================================================== @@ -18651,7 +18645,7 @@ For a free trial of ZenML Pro, sign up [here](https://cloud.zenml.io/?utm_source # ZenML Installation and Getting Started ## Installation -**ZenML** is a Python package installable via `pip`: +**ZenML** is a Python package that can be installed via `pip`: ```shell pip install zenml @@ -18666,107 +18660,114 @@ To use the ZenML web dashboard locally, install the optional server dependencies pip install "zenml[server]" ``` -**Recommendation:** Use a virtual environment (e.g., `virtualenvwrapper`, `pyenv-virtualenv`). 
+**Recommendation:** Use a virtual environment, such as [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) or [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv). -## MacOS Installation (Apple Silicon) -Set the following environment variable for local server connections: +## MacOS with Apple Silicon (M1, M2) +Set the following environment variable to maintain server connections: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` -This is not needed if using ZenML as a client. +This is only necessary for local server usage. ## Nightly Builds -For the latest unstable features, install the nightly build: +Install nightly builds using: ```shell pip install zenml-nightly ``` +These builds are from the latest `develop` branch and may not be stable. + ## Verifying Installation -Check the installation via Bash or Python: +Check installation success via Bash: -Bash: ```bash zenml version ``` -Python: +Or in Python: + ```python import zenml print(zenml.__version__) ``` -## Docker Usage -ZenML is available as a Docker image: +For more details, visit the [PyPi package page](https://pypi.org/project/zenml). + +## Running with Docker +ZenML is available as a Docker image. Start a bash environment with: -Start a bash environment: ```shell docker run -it zenmldocker/zenml /bin/bash ``` -Run the ZenML server: +To run the ZenML server: + ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ``` ## Deploying the Server -To run ZenML locally with the dashboard: +You can run ZenML locally with: ```shell pip install "zenml[server]" zenml login --local # opens the dashboard locally ``` -For advanced features, consider deploying a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or signing up for a free [ZenML Pro](https://cloud.zenml.io/signup) account. +For advanced features, deploy a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or signing up for a free [ZenML Pro](https://cloud.zenml.io/signup?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) account. ================================================== === File: docs/book/getting-started/zenml-pro/teams.md === -### ZenML Pro Teams Overview +### Summary of ZenML Pro Teams Documentation -**Description**: Learn how to manage user groups in ZenML Pro through the concept of Teams, which helps streamline user management and access control in MLOps workflows. +**Overview:** +ZenML Pro introduces "Teams" to manage user groups within organizations and tenants, enhancing user management and access control in MLOps workflows. -#### Key Benefits of Teams +#### Key Benefits of Teams: 1. **Group Management**: Manage permissions for multiple users simultaneously. -2. **Organizational Structure**: Reflect your company's structure or project teams. +2. **Organizational Structure**: Align teams with your company's structure. 3. **Simplified Access Control**: Assign roles to teams instead of individual users. -#### Creating and Managing Teams +#### Creating and Managing Teams: - **Creation Steps**: 1. Navigate to Organization settings. 2. Click on the "Teams" tab. - 3. Use the "Add team" button. - + 3. Use "Add team" to create a new team. - **Required Information**: - Team name - Description (optional) - Initial team members -#### Adding Users to Teams +#### Adding Users to Teams: 1. Go to the "Teams" tab in Organization settings. 2. Select the desired team. -3. Click "Add Members". +3. 
Click "Add Members." 4. Choose users to add. -#### Assigning Teams to Tenants +#### Assigning Teams to Tenants: 1. Go to the tenant settings page. -2. Click on the "Members" tab, then "Teams". -3. Select "Add Team". +2. Click on the "Members" tab, then the "Teams" tab. +3. Select "Add Team." 4. Choose the team and assign a role. -#### Team Roles and Permissions -- Roles assigned to teams within a tenant are inherited by all team members. Roles can be predefined (Admin, Editor, Viewer) or custom. +#### Team Roles and Permissions: +- Roles assigned to a team within a tenant grant all members the associated permissions (e.g., Admin, Editor, Viewer). +- Example: Assigning "Editor" role to a team grants all members Editor permissions in that tenant. -#### Best Practices -1. **Reflect Your Organization**: Create teams that mirror your structure. -2. **Combine with Custom Roles**: Use custom roles for detailed access control. -3. **Regular Audits**: Review team memberships and roles periodically. +#### Best Practices: +1. **Reflect Your Organization**: Create teams that mirror your company's structure. +2. **Combine with Custom Roles**: Utilize custom roles for detailed access control. +3. **Regular Audits**: Periodically review team memberships and roles. 4. **Document Team Purposes**: Keep clear documentation on each team's purpose and projects. -By utilizing Teams in ZenML Pro, you can enhance user management, simplify access control, and improve organization in MLOps workflows. +By utilizing Teams in ZenML Pro, organizations can enhance user management, streamline access control, and better organize MLOps workflows. + +For the latest documentation, please refer to [ZenML Documentation](https://docs.zenml.io). ================================================== @@ -18774,77 +18775,108 @@ By utilizing Teams in ZenML Pro, you can enhance user management, simplify acces # Organizations in ZenML Pro -In ZenML Pro, an **Organization** is the highest-level structure within the ZenML Cloud environment, encompassing a group of users and one or more [tenants](./tenants.md). +In ZenML Pro, an **Organization** is the primary structure within the ZenML Cloud environment, encompassing a group of users and one or more [tenants](./tenants.md). ## Inviting Team Members -To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Users can use their login across all accessible tenants. +To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Once part of an organization, users can access all tenants they are authorized for. ## Managing Organization Settings -Organization settings, including billing information and member roles, can be managed by clicking your profile picture in the top right corner and selecting "Settings". +Organization-level settings include billing information and member roles. Access these settings by clicking your profile picture in the top right corner and selecting "Settings". ## API Operations -Various operations related to Organizations can be performed via the API. More information is available at [ZenML Cloud API](https://cloudapi.zenml.io/). +Additional operations related to Organizations can be performed via the API. For more details, visit [ZenML Cloud API](https://cloudapi.zenml.io/). 
================================================== === File: docs/book/getting-started/zenml-pro/self-hosted.md === -### ZenML Pro Self-Hosted Deployment Guide +# ZenML Pro Self-Hosted Deployment Guide -This guide outlines the steps to install ZenML Pro, including the Control Plane and Tenant servers, in a Kubernetes cluster. +This document outlines the installation of ZenML Pro, including the Control Plane and Tenant servers, in a Kubernetes cluster. -#### Overview -ZenML Pro requires access to private container images and a self-provided infrastructure, including a Kubernetes cluster, a database server, and prerequisites for exposing services via HTTPS (load balancer, Ingress controller, SSL certificates, and DNS rules). Notably, Single Sign-On (SSO) and Run Templates are not available in the on-prem version. +## Overview +ZenML Pro requires access to private container images and infrastructure components: a Kubernetes cluster, a database server, load balancer, Ingress controller, HTTPS certificates, and DNS rules. Note that Single Sign-On (SSO) and Run Templates features are not available in the on-prem version. -#### Prerequisites -1. **Software Artifacts**: Access to ZenML Pro container images and Helm charts is necessary. Contact ZenML for access. - - **Control Plane Artifacts**: - - API: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api` - - Dashboard: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` - - Helm Chart: `oci://public.ecr.aws/zenml/zenml-pro` - - **Tenant Server Artifacts**: - - Server: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` - - OSS Helm Chart: `oci://public.ecr.aws/zenml/zenml` - - **Client Artifacts**: Public client image at `zenmldocker/zenml` on Docker Hub. - -2. **Accessing Container Images**: Currently, ZenML Pro images are available only in AWS ECR. For access: - - Create an AWS IAM user/role with `AmazonEC2ContainerRegistryReadOnly` permissions. - - Authenticate Docker with ECR using the AWS CLI. - -3. **Air-Gapped Installation**: For environments without internet access, download artifacts using a machine with internet, save them, and transfer to the air-gapped environment. - -#### Infrastructure Requirements -1. **Kubernetes Cluster**: A functional cluster is required. -2. **Database Server**: Connect to an external MySQL or Postgres database (Postgres for Control Plane, MySQL for Tenant servers). -3. **Ingress Controller**: Install and configure an Ingress provider (e.g., NGINX). -4. **Domain Name**: Obtain an FQDN for the Control Plane and tenants. -5. **SSL Certificate**: Generate and configure SSL certificates for secure connections. - -#### Installation Steps -1. **Configure Helm Chart**: Customize the `values.yaml` file for your deployment. - - Key configurations include database credentials, server URL, and Ingress settings. - -2. **Install Control Plane**: - ```bash - helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version --values my-values.yaml - ``` +## Preparation and Prerequisites -3. **Onboard Additional Users**: Use the provided Python script to create user accounts and manage access. +### Software Artifacts +- **Control Plane Artifacts**: + - API: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api` + - Dashboard: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` + - Helm Chart: `oci://public.ecr.aws/zenml/zenml-pro` -4. 
**Enroll and Deploy Tenants**:
-   - Use the `enroll-tenant.py` script to create a tenant entry and generate a Helm `values.yaml` file.
-   - Deploy the tenant server using Helm:
-   ```bash
-   helm --namespace zenml-pro-<tenant-id> upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version <version> --values <tenant-values-file>.yaml
-   ```
+- **Tenant Server Artifacts**:
+  - Server: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server`
+  - OSS Helm Chart: `oci://public.ecr.aws/zenml/zenml`
+
+- **Client Artifacts**:
+  - Client Image: `zenmldocker/zenml` (Docker Hub)
+
+### Accessing ZenML Pro Container Images
+Currently, images are available only in AWS ECR. Temporary credentials can be requested from ZenML support.

-#### Accessing the Deployment
-After installation, access the ZenML Pro dashboard using the provided credentials. Ensure to follow the onboarding steps for new users and manage tenant access accordingly.
+#### AWS Access Steps:
+1. **Create AWS Account**: Follow the AWS Free Tier page instructions.
+2. **Create IAM User/Role**: Grant `AmazonEC2ContainerRegistryReadOnly` permissions.
+3. **Authenticate Docker Client**: Use the AWS CLI to log in to ECR.

-This guide provides a comprehensive overview of the installation and configuration process for ZenML Pro in a self-hosted environment, ensuring all critical details are retained for successful deployment and management.
+### Air-Gapped Installation
+For environments without internet access:
+1. **Prepare an Internet-Connected Machine**: Download the required artifacts.
+2. **Transfer Artifacts**: Use a USB drive or another secure transfer method.
+3. **Load Artifacts**: Use Docker to load the images and push them to an internal registry.
+4. **Update Configuration**: Modify the Helm values to point to the internal registry.
+
+### Infrastructure Requirements
+1. **Kubernetes Cluster**: A functional cluster is necessary.
+2. **Database Server**: MySQL for Tenant servers; either MySQL or Postgres for the Control Plane.
+3. **Ingress Controller**: Required for HTTP(S) traffic routing.
+4. **Domain Name**: An FQDN for the Control Plane and tenants.
+5. **SSL Certificate**: Configure SSL termination for the Ingress.
+
+## Stage 1: Install ZenML Pro Control Plane
+
+### Configure Helm Chart
+Customize the Helm chart using `values.yaml` for settings like database credentials and the server URL.
+
+### Install the Helm Chart
+Run the following command to install ZenML Pro, substituting `<version>` with the Helm chart version you were given access to:
+```bash
+helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version <version> --values my-values.yaml
+```
+
+### Verify Installation
+Check the status of the deployment:
+```bash
+kubectl -n zenml-pro get all
+```
+
+### Onboard Additional Users
+1. Retrieve the admin password:
+```bash
+kubectl get secret --namespace zenml-pro zenml-pro -o jsonpath="{.data.ZENML_CLOUD_ADMIN_PASSWORD}" | base64 --decode; echo
+```
+2. Create a `users.yml` file with user details.
+3. Use the `create_users.py` script to onboard users.
+
+## Stage 2: Enroll and Deploy ZenML Pro Tenants
+
+### Enroll a Tenant
+Run the `enroll-tenant.py` script to create a tenant entry and generate a Helm `values.yaml` file.
+
+### Deploy the ZenML Pro Tenant Server
+Use the generated YAML file to deploy the tenant, substituting the `<tenant-id>`, `<version>`, and values-file placeholders:
+```bash
+helm --namespace zenml-pro-<tenant-id> upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version <version> --values <tenant-values-file>.yaml
+```
+
+### Accessing the Tenant
+Log in as an organization member and follow the checklist to unlock the full dashboard.
+ +This guide provides a comprehensive overview of deploying ZenML Pro in a self-hosted environment, ensuring all necessary steps and configurations are covered for a successful installation. ================================================== @@ -18852,161 +18884,170 @@ This guide provides a comprehensive overview of the installation and configurati # ZenML Pro Core Concepts -ZenML Pro introduces a distinct entity hierarchy compared to the open-source version. Key components include: +ZenML Pro features a distinct entity hierarchy compared to the open-source version. Below are the key components: - **Organization**: A collection of users, teams, and tenants. - **Tenant**: An isolated ZenML server deployment containing all project resources. - **Teams**: Groups of users within an organization for resource management. - **Users**: Individual accounts on a ZenML Pro instance. -- **Roles**: Control user actions within a tenant or organization. +- **Roles**: Define user permissions within a tenant or organization. - **Templates**: Configurable pipeline runs that can be re-executed. -For more details, refer to the following resources: +For detailed information, refer to the linked documents: -| **Concept** | **Description** | **Link** | -|-------------------|------------------------------------------------|------------------------| -| Organizations | Managing organizations in ZenML Pro | [organization.md](./organization.md) | -| Tenants | Working with tenants in ZenML Pro | [tenants.md](./tenants.md) | -| Teams | Team management in ZenML Pro | [teams.md](./teams.md) | -| Roles & Permissions| Role-based access control in ZenML Pro | [roles.md](./roles.md) | +| Concept | Description | Link | +|------------------|-----------------------------------------------|---------------------| +| Organizations | Managing organizations in ZenML Pro | [organization.md](./organization.md) | +| Tenants | Working with tenants in ZenML Pro | [tenants.md](./tenants.md) | +| Teams | Team management in ZenML Pro | [teams.md](./teams.md) | +| Roles & Permissions | Role-based access control in ZenML Pro | [roles.md](./roles.md) | ================================================== === File: docs/book/getting-started/zenml-pro/pro-api.md === -### ZenML Pro API Overview +# ZenML Pro API Overview -The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, enabling interaction with ZenML resources for both SaaS and self-hosted instances. The SaaS version is accessible at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). +ZenML Pro provides a RESTful API for managing resources, applicable to both SaaS and self-hosted instances. The API adheres to OpenAPI 3.1.0 specifications and is accessible at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). -#### Key Features -- **Tenant Management**: Create, list, get details, and update tenants. -- **Organization Management**: Manage organizations similarly. -- **User Management**: List users, get current user info, and update user details. +## Key Features +- **Tenant Management**: Create, list, get, and update tenants. +- **Organization Management**: Manage organizations with similar operations. +- **User Management**: List users, get current user details, and update user information. - **Role-Based Access Control (RBAC)**: Create roles, assign roles, and check permissions. -- **Authentication**: Requires user login for request authentication. Programmatic access is currently unavailable. +- **Authentication**: Requires user login via the ZenML Pro interface. 
Programmatic access is currently unavailable. -#### Important API Endpoints -- **Tenant Management**: - - `GET /tenants`: List tenants - - `POST /tenants`: Create a tenant - - `GET /tenants/{tenant_id}`: Get tenant details - - `PATCH /tenants/{tenant_id}`: Update a tenant +## Important Endpoints +### Tenant Management +- `GET /tenants`: List tenants +- `POST /tenants`: Create a tenant +- `GET /tenants/{tenant_id}`: Get tenant details +- `PATCH /tenants/{tenant_id}`: Update a tenant -- **Organization Management**: - - `GET /organizations`: List organizations - - `POST /organizations`: Create an organization - - `GET /organizations/{organization_id}`: Get organization details - - `PATCH /organizations/{organization_id}`: Update an organization +### Organization Management +- `GET /organizations`: List organizations +- `POST /organizations`: Create an organization +- `GET /organizations/{organization_id}`: Get organization details +- `PATCH /organizations/{organization_id}`: Update an organization -- **User Management**: - - `GET /users`: List users - - `GET /users/me`: Get current user - - `PATCH /users/{user_id}`: Update user +### User Management +- `GET /users`: List users +- `GET /users/me`: Get current user +- `PATCH /users/{user_id}`: Update user -- **Role-Based Access Control**: - - `POST /roles`: Create a role - - `POST /roles/{role_id}/assignments`: Assign a role - - `GET /permissions`: Check permissions +### Role-Based Access Control +- `POST /roles`: Create a role +- `POST /roles/{role_id}/assignments`: Assign a role +- `GET /permissions`: Check permissions -#### Error Handling -Standard HTTP status codes indicate request success or failure. Error responses include a message and additional details. +## Error Handling +Standard HTTP status codes are used to indicate request outcomes. Error responses include messages and additional details. -#### Rate Limiting -The API may enforce rate limits. Exceeding these limits results in a 429 status code. Implement backoff and retry logic accordingly. +## Rate Limiting +The API may enforce rate limits, returning a 429 status code for excessive requests. Implement backoff and retry logic in applications. -For comprehensive details on endpoints, request/response schemas, and features, refer to the full documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). +For comprehensive details, refer to the full API documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ================================================== === File: docs/book/getting-started/zenml-pro/roles.md === -### ZenML Pro: Roles and Permissions Overview +# ZenML Pro: Roles and Permissions Summary -ZenML Pro utilizes a role-based access control (RBAC) system to manage permissions for users and teams. This guide outlines the available roles, assignment procedures, and custom role creation. +ZenML Pro employs a role-based access control (RBAC) system to manage permissions for users and teams. This guide outlines the available roles, assignment procedures, and custom role creation. -#### Organization-Level Roles -1. **Org Admin**: Full control; can manage members, tenants, billing, and roles. -2. **Org Editor**: Manages tenants and teams; no access to subscription info or deletion rights. +## Organization-Level Roles +Three predefined roles exist at the organization level: + +1. **Org Admin**: Full control; can manage members, tenants, billing, and assign roles. +2. 
**Org Editor**: Can manage tenants and teams but lacks access to subscription info and cannot delete the organization. 3. **Org Viewer**: Read-only access to tenants. -**Assignment Steps**: -- Go to Organization settings > Members tab. -- Update roles or add new members. +### Assigning Organization Roles +- Navigate to Organization settings > Members tab. +- Update roles or use "Add members" to include new users. **Notes**: -- Admins can assign themselves any tenant role. -- Editors and viewers cannot access tenants they are not part of. -- Custom organization roles can be created via the ZenML Pro API. +- Admins can add themselves to any tenant role. +- Editors and viewers cannot add themselves to tenants they don’t belong to. +- Custom organization roles can be created via the [ZenML Pro API](https://cloudapi.zenml.io/). + +## Tenant-Level Roles +Tenant roles dictate user permissions within a specific tenant. Predefined roles include: -#### Tenant-Level Roles -Roles dictate permissions within a specific tenant. Predefined roles include: 1. **Admin**: Full control over tenant resources. -2. **Editor**: Can create and share resources; cannot modify or delete. +2. **Editor**: Can create and share resources but cannot modify or delete them. 3. **Viewer**: Read-only access. -**Custom Roles Creation**: -1. Access tenant settings > Roles. -2. Click "Add Custom Role". -3. Define name, description, and base role. -4. Edit permissions for various resources (e.g., Artifacts, Models). +### Custom Roles +To create a custom role: +1. Access tenant settings > Roles > Add Custom Role. +2. Name the role, choose a base role, and adjust permissions. -**Managing Role Permissions**: -- Go to tenant settings > Roles. -- Select a role and click "Edit Permissions" to adjust. +**Resources for Permissions**: +- Artifacts, Models, Pipelines, etc. +- Permissions: Create, Read, Update, Delete, Share. -#### Sharing Resources -Users can share individual resources directly through the dashboard. +### Managing Role Permissions +1. Go to Roles in tenant settings. +2. Select the role and click "Edit Permissions" to adjust. -#### Best Practices +## Sharing Individual Resources +Users can share specific resources through the dashboard. + +## Best Practices 1. **Least Privilege**: Assign minimal necessary permissions. 2. **Regular Audits**: Review role assignments periodically. -3. **Custom Roles**: Tailor roles for specific team needs. -4. **Documentation**: Keep records of custom roles and their purposes. +3. **Use Custom Roles**: Tailor roles for specific team needs. +4. **Document Roles**: Keep records of custom roles and their purposes. -By implementing ZenML Pro's RBAC, teams can ensure appropriate access levels, enhancing security and collaboration in MLOps projects. +By utilizing ZenML Pro's RBAC, teams can maintain security while facilitating collaboration in MLOps projects. ================================================== === File: docs/book/getting-started/zenml-pro/tenants.md === -### ZenML Pro Tenants Overview +# ZenML Pro Tenants Documentation Summary -**Definition**: Tenants in ZenML Pro are isolated deployments of the ZenML server, each with its own users, roles, and resources. All operations, such as pipelines, stacks, and runs, are scoped to a tenant. +## Overview +Tenants in ZenML Pro are isolated deployments of the ZenML server, each with its own users, roles, and resources. All operations in ZenML Pro, including pipelines, stacks, and runs, are scoped to a tenant. 
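+
+For example (a minimal sketch, assuming the local environment is already connected to a tenant, e.g. via `zenml login <tenant-url>` as described under "Accessing Tenant Documentation" below, and that some pipelines exist), every client call then operates on that tenant's resources:
+
+```python
+from zenml.client import Client
+
+# All reads and writes below are scoped to the connected tenant.
+client = Client()
+for pipeline in client.list_pipelines().items:
+    print(pipeline.name)
+```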
-**Creating a Tenant**: +## Creating a Tenant +To create a tenant: 1. Navigate to your organization page. -2. Click "+ New Tenant". -3. Name your tenant and click "Create Tenant". +2. Click "+ New Tenant." +3. Provide a tenant name and click "Create Tenant." -Alternatively, create a tenant via the Cloud API using the `POST /organizations` endpoint at `https://cloudapi.zenml.io/`. +Alternatively, tenants can be created via the Cloud API using the `POST /organizations` endpoint at `https://cloudapi.zenml.io/`. -**Organizing Tenants**: -- **By Development Stage**: - - **Staging Tenants**: For development and testing. - - **Production Tenants**: For live services with stricter access controls and monitoring. +## Organizing Tenants +### By Development Stage +- **Staging Tenants**: For development, testing, and experimentation. +- **Production Tenants**: For live services, requiring stricter access controls and monitoring. -- **By Business Logic**: - - **Project-based**: Separate tenants for different ML projects (e.g., Recommendation System, NLP). - - **Team-based**: Align tenants with organizational teams (e.g., Data Science, ML Engineering). - - **Data Sensitivity**: Classify tenants based on data sensitivity (e.g., Public, Internal, Confidential). +### By Business Logic +- **Project-based**: Separate tenants for different ML projects (e.g., Recommendation System). +- **Team-based**: Align tenants with organizational teams (e.g., Data Science Team). +- **Data Sensitivity**: Classify tenants based on data sensitivity (e.g., Public Data Tenant). -**Best Practices**: +### Best Practices 1. Use clear naming conventions. 2. Implement role-based access control. 3. Maintain documentation for each tenant. 4. Conduct regular reviews of tenant structure. -5. Ensure scalability for future growth. +5. Design for scalability. -**Using Your Tenant**: -Tenants enable running pipelines, experiments, and accessing Pro features like: +## Using Your Tenant +Tenants provide access to Pro features such as: - Model Control Plane - Artifact Control Plane -- Pipeline execution from the Dashboard +- Running pipelines from the Dashboard +- Creating templates from pipeline runs -**Accessing Tenant Documentation**: -Each tenant has a connection URL for the `zenml` client and to access the OpenAPI specification. Visit `/docs` for available methods, including running pipelines via the REST API. +### Accessing Tenant Documentation +Each tenant has a connection URL for the `zenml` client and to access the OpenAPI specification. Visit `/docs` for available methods, including pipeline execution via the REST API. -For further details, refer to the [API reference](../../reference/api-reference.md). +For further details, refer to the ZenML documentation at [zenml.io](https://zenml.io/pro). ================================================== @@ -19014,20 +19055,20 @@ For further details, refer to the [API reference](../../reference/api-reference. # ZenML Pro Overview -ZenML Pro enhances the open-source ZenML product with several advanced features: +ZenML Pro enhances the Open Source ZenML product with several key features: -- **Managed Deployment**: Deploy multiple ZenML servers (tenants). +- **Managed Deployment**: Deploy multiple ZenML servers (tenants) for production-grade operations. - **User Management**: Create organizations and teams for scalable user management. - **Role-Based Access Control**: Implement customizable roles for secure resource management. 
-- **Model and Artifact Control Plane**: Utilize the Model Control Plane and Artifact Control Plane for better tracking of ML assets. -- **Triggers and Run Templates**: Create and run templates via the dashboard or API for efficient pipeline management. -- **Early-Access Features**: Access pro-specific features like triggers, filters, and usage reports. +- **Model and Artifact Control**: Utilize the Model Control Plane and Artifact Control Plane for effective tracking and management of ML assets. +- **Triggers and Run Templates**: Create and run templates via the dashboard or API for quick iterations with updated configurations. +- **Early Access Features**: Access pro-specific features like triggers, filters, sorting, and usage reports. -For more information, visit the [ZenML website](https://zenml.io/pro). +For more details, visit the [ZenML website](https://zenml.io/pro). ## Deployment Scenarios -ZenML Pro can be deployed as a SaaS solution or fully self-hosted. The SaaS option simplifies server management, allowing focus on MLOps workflows. For self-hosted deployment, refer to the [self-hosted deployment guide](./self-hosted.md). +ZenML Pro can be deployed as a SaaS solution or fully self-hosted. The SaaS version simplifies deployment and management, allowing focus on MLOps workflows. For self-hosted options, refer to the [self-hosted deployment guide](./self-hosted.md) or [book a demo](https://www.zenml.io/book-your-demo). ### Key Resources - [Tenants](./tenants.md) @@ -19036,15 +19077,13 @@ ZenML Pro can be deployed as a SaaS solution or fully self-hosted. The SaaS opti - [Roles](./roles.md) - [Self-Hosted Deployments](./self-hosted.md) -For a demo or to create a free account, visit [ZenML Cloud](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). - ================================================== === File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md === -### Custom Secret Stores +### Custom Secret Stores in ZenML -The secrets store is essential for managing secrets in ZenML, handling the storage, updating, and deletion of secret values, while ZenML secret metadata is stored in an SQL database. The interface for all secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` module. Below is a summary of the key methods in the `SecretsStoreInterface`: +The secrets store in ZenML is responsible for managing secret values required by pipeline or stack components, while metadata is stored in an SQL database. The interface for all secret store back-ends is defined in `zenml.zen_stores.secrets_stores.secrets_store_interface`, which includes the following key methods: ```python class SecretsStoreInterface(ABC): @@ -19058,367 +19097,313 @@ class SecretsStoreInterface(ABC): @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: - """Get secret values for an existing secret.""" - + """Retrieve secret values for an existing secret.""" + @abstractmethod def update_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Update secret values for an existing secret.""" - + @abstractmethod def delete_secret_values(self, secret_id: UUID) -> None: """Delete secret values for an existing secret.""" ``` -### Building a Custom Secrets Store +### Steps to Build a Custom Secrets Store -To create a custom secrets store: +1. 
**Create a Class**: Inherit from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the abstract methods from `SecretsStoreInterface`. Set `SecretsStoreType.CUSTOM` as the `TYPE`. -1. **Inherit from Base Class**: Create a class that inherits from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the abstract methods from the interface. Set `SecretsStoreType.CUSTOM` as the `TYPE`. - 2. **Configuration Class**: If configuration is needed, inherit from `SecretsStoreConfiguration` and define your parameters. Use this as the `CONFIG_TYPE`. -3. **Server Configuration**: Ensure your code is included in the ZenML server's container image. Configure the ZenML server to use your custom secrets store via environment variables or helm chart values, as detailed in the deployment guide. +3. **Configure ZenML Server**: Ensure your code is included in the ZenML server's container image. Use environment variables or helm chart values to configure the server to utilize your custom secrets store, as detailed in the deployment guide. -For complete documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface). +For further details and the complete interface definition, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface). ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-docker.md === -### Summary: Deploying ZenML in a Docker Container +### Summary of ZenML Docker Deployment Documentation -**Overview**: The ZenML server can be deployed using the Docker container image `zenmldocker/zenml-server`. This guide outlines configuration options and deployment methods, including local testing and advanced configurations. +**Overview**: This documentation provides guidance on deploying the ZenML server in a Docker container, including configuration options, local testing, and advanced deployment scenarios. -#### Local Deployment -For a quick local deployment, ensure Docker is running and execute: -```bash -zenml login --local --docker -``` -This command sets up a ZenML server using a local SQLite database. +#### Quick Start +- **Local Deployment**: Use the ZenML CLI to quickly deploy the server locally with Docker: + ```bash + zenml login --local --docker + ``` #### Configuration Options -When deploying a custom ZenML server, configure environment variables for settings like database connections and user details. Key environment variables include: - -- **ZENML_STORE_URL**: Database URL (SQLite or MySQL). - - SQLite: `sqlite:////path/to/zenml.db` - - MySQL: `mysql://username:password@host:port/database` - -- **ZENML_STORE_SSL_CA, ZENML_STORE_SSL_CERT, ZENML_STORE_SSL_KEY**: SSL configurations for MySQL connections. - -- **ZENML_LOGGING_VERBOSITY**: Controls log verbosity (e.g., `INFO`, `DEBUG`). - -- **ZENML_SERVER_RATE_LIMIT_ENABLED**: Enables rate limiting for the API. - -If no `ZENML_STORE_*` variables are set, an SQLite database is created at `/zenml/.zenconfig/local_stores/default_zen_store/zenml.db`. - -#### Secret Store Configuration -The default secret store is the SQL database. 
For external secret management (AWS, GCP, Azure, HashiCorp), set the following: - -- **ZENML_SECRETS_STORE_TYPE**: Type of secret store (e.g., `sql`, `aws`, `gcp`, `azure`, `hashicorp`, `custom`). -- **ZENML_SECRETS_STORE_ENCRYPTION_KEY**: Key for encrypting secrets (recommended length: 32 characters). +- **Environment Variables**: Customize the ZenML server using environment variables: + - **Database Connection**: + - `ZENML_STORE_URL`: Points to SQLite or MySQL database. + - SQLite: `sqlite:////path/to/zenml.db` + - MySQL: `mysql://username:password@host:port/database` + - **SSL Options** (for MySQL): + - `ZENML_STORE_SSL_CA`, `ZENML_STORE_SSL_CERT`, `ZENML_STORE_SSL_KEY`, `ZENML_STORE_SSL_VERIFY_SERVER_CERT` + - **Logging**: Control verbosity with `ZENML_LOGGING_VERBOSITY`. + - **Backup Strategy**: Configure with `ZENML_STORE_BACKUP_STRATEGY` (default: `in-memory`). + - **Rate Limiting**: Enable with `ZENML_SERVER_RATE_LIMIT_ENABLED` and configure limits with `ZENML_SERVER_LOGIN_RATE_LIMIT_MINUTE` and `ZENML_SERVER_LOGIN_RATE_LIMIT_DAY`. + +#### Secrets Management +- **Default Secrets Store**: Uses SQL database by default. Configure encryption with: + - `ZENML_SECRETS_STORE_TYPE`: Set to `sql`. + - `ZENML_SECRETS_STORE_ENCRYPTION_KEY`: A secure key for encrypting secrets. +- **External Secrets Stores**: Configure for AWS, GCP, Azure, HashiCorp Vault, or custom implementations using respective environment variables. #### Running the ZenML Server -To run the ZenML server with Docker: -```bash -docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server -``` -Access the dashboard at `http://localhost:8080`. - -#### Persisting Data -To persist the SQLite database: -```bash -mkdir zenml-server -docker run -it -d -p 8080:8080 --name zenml \ - --mount type=bind,source=$PWD/zenml-server,target=/zenml/.zenconfig/local_stores/default_zen_store \ - zenmldocker/zenml-server -``` - -For MySQL, run: -```bash -docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 -``` -Connect ZenML to MySQL: -```bash -docker run -it -d -p 8080:8080 --name zenml \ - --env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \ - zenmldocker/zenml-server -``` +- **Basic Command**: + ```bash + docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server + ``` +- **Persistent Database**: Use a mounted volume to persist the SQLite database: + ```bash + docker run -it -d -p 8080:8080 --name zenml \ + --mount type=bind,source=$PWD/zenml-server,target=/zenml/.zenconfig/local_stores/default_zen_store \ + zenmldocker/zenml-server + ``` +- **MySQL Database**: Run a MySQL container and connect ZenML: + ```bash + docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 + docker run -it -d -p 8080:8080 --name zenml \ + --env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \ + zenmldocker/zenml-server + ``` -#### Using Docker Compose -Create a `docker-compose.yml`: -```yaml -version: "3.9" -services: - mysql: - image: mysql:8.0 - environment: - - MYSQL_ROOT_PASSWORD=password - zenml: - image: zenmldocker/zenml-server - environment: - - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml -``` -Run: -```bash -docker compose up -d -``` +#### Docker Compose +- **Example `docker-compose.yml`**: + ```yaml + version: "3.9" + services: + mysql: + image: mysql:8.0 + environment: + - MYSQL_ROOT_PASSWORD=password + zenml: + image: zenmldocker/zenml-server + environment: + - 
ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml
+  ```
+- **Start with**:
+  ```bash
+  docker compose -p zenml up -d
+  ```

#### Backup and Recovery
-Automated backups occur before database migrations. Configure backup strategies with `ZENML_STORE_BACKUP_STRATEGY` (e.g., `in-memory`, `database`, `dump-file`).
+- Automated backups are enabled by default. Configure backup strategy with `ZENML_STORE_BACKUP_STRATEGY` (options: `disabled`, `in-memory`, `database`, `dump-file`).

#### Troubleshooting
-Check logs with:
-- For CLI deployments: `zenml logs -f`
-- For manual Docker deployments: `docker logs zenml -f`
-- For Docker Compose: `docker compose logs -f`
+- Check logs using:
+  ```bash
+  docker logs zenml -f
+  ```
+  or for Docker Compose:
+  ```bash
+  docker compose -p zenml logs -f
+  ```

-This guide provides essential commands and configurations for deploying and managing a ZenML server using Docker.
+This guide covers the essential commands and configuration options for deploying and managing a ZenML server with Docker, including environment variables, secrets management, and backup strategies.

==================================================

=== File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md ===

-### Deploying ZenML on HuggingFace Spaces
+### Summary: Deploying ZenML to Hugging Face Spaces

-HuggingFace Spaces allows for quick deployment of ZenML, facilitating ML project hosting without infrastructure overhead. For production use, enable [persistent storage](https://huggingface.co/docs/hub/en/spaces-storage) to prevent data loss.
+**Overview**: ZenML can be quickly deployed on Hugging Face Spaces, a platform for hosting ML projects, allowing users to start with minimal infrastructure.

-#### Deployment Steps
-1. **Create a Space**: Click [here](https://huggingface.co/new-space?template=zenml/zenml) to set up your ZenML app. Specify:
+**Important Notes**:
+- For production use, enable [persistent storage](https://huggingface.co/docs/hub/en/spaces-storage) to avoid data loss.
+- Ensure Space visibility is set to 'Public' for local machine connections.
+
+**Deployment Steps**:
+1. Create a ZenML Space via the [Hugging Face link](https://huggingface.co/new-space?template=zenml/zenml).
+2. Specify:
   - Owner (personal account or organization)
   - Space name
-   - Visibility (set to 'Public' for local connections)
-
-2. **Select Machine**: Choose a higher-tier machine to avoid auto-shutdowns. Consider setting up a MySQL database for persistent storage.
-
-3. **Customize Appearance**: Modify the `README.md` file in "Files and Versions" to personalize your Space.
+   - Visibility (must be 'Public' for local access)
+3. Optionally select a higher-tier machine to avoid auto-shutdowns.

-4. **Monitor Status**: After creation, watch for 'Building' to switch to 'Running'. Refresh if the ZenML login UI isn't visible.
+**Customization**:
+- Modify the Space's appearance in the `README.md` file.
+- After creation, wait for the status to switch from 'Building' to 'Running'.
+- If the ZenML login UI is not visible, refresh the page.

-5. **Get Direct URL**: Use the "Embed this Space" option to copy the "Direct URL" (format: `https://<owner>-<space-name>.hf.space`) for initializing your ZenML server.
-
-#### Connecting to ZenML Server
-To connect from your local machine:
-```shell
-zenml login '<direct-url>'
-```
-Ensure the Space visibility is set to 'Public'.
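As a worked illustration of the connection step, with a hypothetical owner and Space name plugged into the Direct URL format quoted above:

```shell
# Direct URL format: https://<owner>-<space-name>.hf.space (values below are hypothetical)
zenml login 'https://acme-zenml-server.hf.space'
```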
+**Connecting to ZenML Server**:
+- Use the 'Direct URL' to connect:
+  ```shell
+  zenml login '<direct-url>'
+  ```
+- Access the ZenML dashboard directly via the URL.

-#### Configuration Options
-- **Database**: By default, ZenML uses an SQLite database. For a persistent database, modify the `Dockerfile` in your Space's root directory.
-- **Secrets Management**: Use HuggingFace's 'Repository secrets' for managing secrets in your `Dockerfile`. Update your ZenML server password via the Dashboard settings for security.
+**Configuration Options**:
+- Default database is SQLite (non-persistent). For a persistent database, modify the `Dockerfile`.
+- For secrets management, use Hugging Face's 'Repository secrets' and update the ZenML server password in the Dashboard settings.

-#### Troubleshooting
-Access logs by clicking "Open Logs" for insights into server issues. For additional support, contact the [Slack channel](https://zenml.io/slack/).
+**Troubleshooting**:
+- View logs by clicking "Open Logs" in the Space.
+- For support, join the [Slack channel](https://zenml.io/slack/).

-#### Upgrading ZenML Server
-The default Space uses the latest ZenML version. To update:
-- Select 'Factory reboot' in the 'Settings' tab (note: this wipes existing data unless using a MySQL database).
-- Change the `FROM` statement in the `Dockerfile` to use an earlier version.
+**Upgrading ZenML**:
+- The Space auto-updates to the latest ZenML version. To manually update, use 'Factory reboot' in Settings (note: this wipes data unless using a persistent MySQL database).
+- To use an earlier version, modify the `Dockerfile`'s `FROM` statement.

-For more details on configuration, refer to the [HuggingFace documentation](https://huggingface.co/docs/hub/spaces-config-reference).
+For detailed configuration parameters, refer to the [Hugging Face documentation](https://huggingface.co/docs/hub/spaces-config-reference) and ZenML's [advanced server configuration options](deploy-with-docker.md#advanced-server-configuration-options).

==================================================

=== File: docs/book/getting-started/deploying-zenml/deploy-with-helm.md ===

-### Summary: Deploying ZenML in a Kubernetes Cluster with Helm
+### Summary of ZenML Deployment in Kubernetes with Helm

-#### Overview
-ZenML can be deployed in a Kubernetes cluster using a Helm chart, available on [ArtifactHub](https://artifacthub.io/packages/helm/zenml/zenml). This document outlines prerequisites, configuration, and deployment scenarios.
+**Overview**: This documentation outlines the process for deploying ZenML in a Kubernetes cluster using Helm, including prerequisites, configuration, and deployment scenarios.

#### Prerequisites
-- **Kubernetes Cluster**
-- **MySQL-Compatible Database** (recommended, version 8.0+)
-- **Kubernetes Client** (`kubectl`)
-- **Helm** installed
-- **Optional**: External Secrets Manager (e.g., AWS Secrets Manager, GCP Secrets Manager)
-
-#### ZenML Helm Configuration
+- **Kubernetes Cluster**: Required.
+- **Database**: Recommended to use a MySQL-compatible database (version 8.0 or higher) for production; defaults to SQLite if omitted.
+- **Tools**:
+  - Kubernetes client (`kubectl`)
+  - Helm
+- **Secrets Management**: Optional external Secrets Manager (e.g., AWS Secrets Manager, GCP Secrets Manager).
+
+#### Helm Configuration
- Review the [`values.yaml`](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) file for customizable settings.
-- Prepare database and secrets management information for Helm chart configuration.
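Before collecting this information, it can help to inspect the chart's configurable values locally. A small sketch, assuming Helm 3.8+ (the first version with full OCI registry support); the chart location is the same one used in the installation steps below:

```bash
# Print the chart's default values.yaml for review
helm show values oci://public.ecr.aws/zenml/zenml

# Or save it as the starting point for a custom values file
helm show values oci://public.ecr.aws/zenml/zenml > custom-values.yaml
```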
-
-##### Database Information
-For external MySQL-compatible databases:
-- Hostname and port
-- Username and password (create a dedicated user)
-- Database name (can be created by ZenML)
-- SSL certificates (if using SSL)
-
-##### Secrets Management Information
-For external secrets management services:
-- **AWS**: Region, access key ID, secret access key
-- **GCP**: Project ID, service account with access
-- **Azure**: Key Vault name, tenant ID, client ID, client secret
-- **HashiCorp Vault**: Vault server URL, access token
+- Collect necessary information for database and secrets management configuration.
+
+**Database Information**:
+- Hostname, port, username, password, and database name.
+- SSL certificates if using SSL.
+
+**Secrets Management Information**:
+- For AWS: Region, access key ID, secret access key.
+- For GCP: Project ID, service account with access.
+- For Azure: Key Vault name, tenant ID, client ID, client secret.
+- For HashiCorp Vault: Server URL and access token.

#### Optional Cluster Services
-- **Ingress Service**: Recommended for exposing HTTP services (e.g., `nginx-ingress`).
+- **Ingress Service**: Recommended for exposing HTTP services; use `nginx-ingress`.
- **Cert-Manager**: For managing TLS certificates.

-#### ZenML Helm Installation
-
-##### Configure the Helm Chart
-1. Pull the Helm chart:
+#### Helm Installation
+1. **Pull the Helm Chart**:
   ```bash
   helm pull oci://public.ecr.aws/zenml/zenml --version <version> --untar
   ```
-2. Create a `custom-values.yaml` from `values.yaml` and modify necessary configurations (e.g., database URL, Ingress settings).
-
-##### Install the Helm Chart
-Run the following command:
-```bash
-helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml
-```
-
-#### Connect to the Deployed ZenML Server
-After deployment, activate the ZenML server via its URL. To connect your local client:
-```bash
-zenml login https://zenml.example.com:8080 --no-verify-ssl
-```
-To disconnect:
-```bash
-zenml logout
-```
-
-#### Deployment Scenarios
-
-1. **Minimal Deployment** (SQLite, no Ingress):
-   ```yaml
-   zenml:
-     ingress:
-       enabled: false
+2. **Customize Values**: Create `custom-values.yaml` from `values.yaml` and modify necessary configurations (e.g., database URL, TLS settings).
+3. **Install the Chart**:
+   ```bash
+   helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml
   ```

-2. **Basic Deployment** (Local DB, Ingress with TLS):
-   Install `cert-manager` and `nginx-ingress`:
+#### Post-Deployment
+- Activate the ZenML server via the provided URL to create an admin account.
+- Connect local ZenML client:
  ```bash
-  helm repo add jetstack https://charts.jetstack.io
-  helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
-  helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace
+  zenml login https://zenml.example.com:8080 --no-verify-ssl
  ```
-
-  Create a `ClusterIssuer` for Let's Encrypt:
+- To disconnect:
  ```bash
-  kubectl apply -f - <<EOF
-  apiVersion: cert-manager.io/v1
-  kind: ClusterIssuer
-  metadata:
-    name: letsencrypt-staging
-  spec:
-    acme:
-      server: https://acme-staging-v02.api.letsencrypt.org/directory
-      email: <your-email>
-      privateKeySecretRef:
-        name: letsencrypt-staging
-      solvers:
-      - http01:
-          ingress:
-            class: nginx
-  EOF
-  ```
-  Helm values:
-  ```yaml
-  zenml:
-    ingress:
-      enabled: true
-      annotations:
-        cert-manager.io/cluster-issuer: "letsencrypt-staging"
-      tls:
-        enabled: true
+  zenml logout
  ```

-3. **Shared Ingress Controller**: Use a dedicated hostname or URL path for ZenML if the root path is in use.
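Tying the pieces together, a `custom-values.yaml` for the basic scenario might combine the database and ingress settings shown above into a single file; the hostname, credentials, and issuer name are placeholders:

```yaml
# custom-values.yaml -- minimal sketch; all concrete values are placeholders
zenml:
  database:
    url: "mysql://admin:password@my.database.org:3306/zenml"
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-staging"
    tls:
      enabled: true
```

Passing this file to the `helm install` command shown above applies the database and TLS configuration in one deployment.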
+#### Deployment Scenarios
+- **Minimal Deployment**: Uses SQLite and ClusterIP service, accessible via port-forwarding.
+- **Basic Deployment**: Uses an Ingress service with TLS certificates from cert-manager.

#### Secrets Store Configuration
-ZenML defaults to using the SQL database for secrets. To use external services, configure the Helm values accordingly. Ensure proper permissions for the chosen secrets management service.
+- **Default**: SQL database as secrets store; configure encryption for security.
+- **AWS Secrets Manager**: Requires specific IAM permissions.
+- **GCP Secrets Manager**: Requires custom IAM roles for access.
+- **Azure Key Vault**: Requires service principal credentials.
+- **HashiCorp Vault**: Requires server URL and token.

#### Backup and Recovery
-ZenML automatically backs up the database before upgrades. Configure backup strategies via `zenml.database.backupStrategy`:
-- `disabled`
-- `in-memory`
-- `database`
-- `dump-file`
-
-Example configuration for persistent volume backup:
+- Automated database backups are enabled by default before upgrades.
+- Backup strategies include:
+  - `disabled`
+  - `in-memory`
+  - `database`
+  - `dump-file` (with optional persistent volume).
+
+**Example Configuration for Backup**:
```yaml
zenml:
  database:
    url: "mysql://admin:password@my.database.org:3306/zenml"
    backupStrategy: dump-file
    backupPVStorageSize: 1Gi
+
  podSecurityContext:
    fsGroup: 1000
```

-This summary provides a concise overview of deploying ZenML using Helm in a Kubernetes environment, covering prerequisites, configuration, installation, and backup strategies.
+These are the key elements of a Helm-based ZenML deployment: prerequisites, chart configuration, installation, and backup strategies.

==================================================

=== File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md ===

-### Deploying ZenML with Custom Docker Images
+### Summary: Deploying ZenML with Custom Docker Images

-Deploying ZenML typically uses the default `zenmldocker/zenml-server` Docker image. Custom images may be necessary for:
+This documentation outlines the process for deploying ZenML using custom Docker images, which is necessary in specific scenarios such as implementing custom artifact stores or deploying a modified ZenML server from a fork.

#### Key Points:
- **Default Image**: The standard `zenmldocker/zenml-server` Docker image suffices for most deployments.
- **Custom Image Scenarios**:
  - Enabling artifact visualizations or step logs.
  - Deploying a forked version of ZenML with custom changes.

#### Deployment Methods:
Custom Docker images can only be used with Docker or Helm deployments.

### Building a Custom ZenML Server Docker Image:
1. **Set Up a Container Registry**: Create a Docker Hub account or use another registry.
2. **Clone ZenML**: Check out the desired branch, e.g., for version 0.41.0:
   ```bash
   git checkout release/0.41.0
   ```
3. 
**Copy Dockerfile**:
   ```bash
   cp docker/base.Dockerfile docker/custom.Dockerfile
   ```
4. **Modify Dockerfile**:
   - Add dependencies:
     ```bash
     RUN pip install <package-name>
     ```
   - For forks, install local files:
     ```bash
     RUN pip install -e .[server,secrets-aws,...]
     ```
5. **Build and Push Image**:
   ```bash
   docker build -f docker/custom.Dockerfile . -t <registry>/<image-name>:<tag> --platform linux/amd64
   docker push <registry>/<image-name>:<tag>
   ```

### Deploying ZenML with Custom Image:
#### Via Docker:
- Replace `zenmldocker/zenml-server` with your custom image in the deployment steps.
- Example command to run the server:
  ```bash
  docker run -it -d -p 8080:8080 --name zenml <registry>/<image-name>:<tag>
  ```
- Adjust `docker-compose.yml`:
  ```yaml
  services:
    zenml:
      image: <registry>/<image-name>:<tag>
  ```

#### Via Helm:
- Modify the `values.yaml` file:
  ```yaml
  zenml:
    image:
      repository: <registry>/<image-name>
      tag: <tag>
  ```

For more detailed steps, refer to the ZenML Docker and Helm Deployment Guides.

==================================================

@@ -19427,73 +19412,84 @@ zenml:
# Deploying ZenML

## Overview
-Deploying ZenML to a production environment offers benefits such as scalability, reliability, and enhanced collaboration. However, it involves complexities in infrastructure setup.
+Deploying ZenML to a production environment provides benefits such as:
+1. **Scalability**: Handles large workloads for faster results.
+2. **Reliability**: Ensures high availability and fault tolerance.
+3. **Collaboration**: Facilitates teamwork and model iteration.

## Components
A ZenML deployment includes:
-- **FastAPI server** with SQLite or MySQL database
-- **Python Client** for server interaction
-- **ReactJS dashboard** (open-source companion)
-- **Optional**: ZenML Pro API, database, and dashboard
+- **FastAPI Server**: Backed by SQLite or MySQL.
+- **Python Client**: Interacts with the ZenML server.
+- **ReactJS Dashboard**: Open-source companion for visualization.
+- **(Optional)** ZenML Pro API and Dashboard.

For detailed architecture, refer to the [system architecture documentation](../system-architectures.md).

### ZenML Python Client
-The ZenML client is a Python package for interacting with the ZenML server, installable via `pip`. It provides:
-- `zenml` CLI for managing stacks and secrets
-- Framework for authoring and deploying pipelines
-- Access to metadata via Python SDK for custom automations
+The ZenML client is a Python package for server interaction, installable via `pip`. It provides:
+- Command-line interface for managing stacks and secrets.
+- Framework for authoring and deploying pipelines.
+- Access to metadata through the Python SDK for custom automation.

-Full documentation for the Python SDK and HTTP API is available at [SDK Docs](https://sdkdocs.zenml.io/latest/) and [API Reference](../../reference/api-reference.md).
+Full documentation for the Python SDK and HTTP API is available [here](https://sdkdocs.zenml.io/latest/).

## Deployment Scenarios
-Initially, ZenML runs locally with an SQLite database, limiting access to cloud-based components. Use `zenml login --local` to start a local server. For production, deploy the ZenML server centrally to enable team collaboration and access to cloud components.
+Initially, ZenML runs locally with an SQLite database, suitable for testing core features but lacking cloud-based components. Use `zenml login --local` to start a local server.

-## Deployment Options
-1. **Managed Deployment**: Utilize ZenML Pro for a managed control plane, where ZenML handles server maintenance while keeping your data secure.
-2. **Self-hosted Deployment**: Deploy ZenML on your infrastructure using methods like Docker, Helm, or Hugging Face Spaces. The Pro version is also available for self-hosted setups.
+For production, deploy the ZenML server centrally to enable cloud stack components and team collaboration.
+
+## How to Deploy ZenML
+Deploying ZenML is essential for production-grade machine learning projects, allowing access to remote components and centralized tracking.
+
+### Deployment Options
+1. **Managed Deployment**: Use ZenML Pro for managed servers (tenants), with data securely maintained.
+2. **Self-hosted Deployment**: Deploy ZenML on your infrastructure using methods like Docker, Helm, or HuggingFace Spaces.

### Deployment Documentation
Refer to the following guides for deployment strategies:
- [Deploying ZenML using ZenML Pro](../zenml-pro/README.md)
- [Deploy with Docker](./deploy-with-docker.md)
- [Deploy with Helm](./deploy-with-helm.md)
-- [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md)
+- [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md)

-Deploying ZenML enhances machine learning workflows, enabling production-level success.
+A centrally deployed ZenML server is the foundation for production-grade machine learning workflows.

==================================================

=== File: docs/book/getting-started/deploying-zenml/secret-management.md ===

-### Secret Store Configuration and Management
+# ZenML Secrets Store Configuration and Management
+
+## Overview
+ZenML offers a centralized secrets management system for secure registration and management of secrets. Metadata is stored in the ZenML server database, while actual secret values are managed separately in the ZenML Secrets Store. In local deployments, secrets are stored in an SQLite database; in remote deployments, they are stored in the configured secrets management back-end.

-#### Centralized Secrets Store
-ZenML offers a centralized secrets management system for secure registration and management of secrets. Metadata (name, ID, owner, scope) is stored in the ZenML server database, while secret values are managed separately in the ZenML Secrets Store. In local deployments, secrets are stored in SQLite; for remote servers, they are stored in the configured secrets management back-end.
Supported back-ends include: -- SQL database (default) +### Supported Secrets Store Back-Ends +ZenML can be configured to use various secrets store back-ends: +- Default SQL database - AWS Secrets Manager - GCP Secret Manager - Azure Key Vault - HashiCorp Vault -- Custom implementations +- Custom implementation -#### Configuration and Deployment -To configure the secrets store back-end, select a supported back-end and authentication method during deployment. Use the ZenML Service Connector for authentication, adhering to the principle of least privilege. The secrets store can be updated anytime by modifying the ZenML Server configuration and redeploying. Follow the documented migration strategy to minimize downtime during changes. +## Configuration and Deployment +Secrets store configuration occurs at deployment. Choose a supported back-end and authentication method, and configure the ZenML server with the necessary credentials. Use the principle of least privilege for credentials. The configuration can be updated anytime by redeploying the server, following the documented migration strategy to ensure minimal downtime. -#### Backup Secrets Store -ZenML can connect to a secondary Secrets Store for high availability, backup, and disaster recovery. Ensure the backup store is in a different location than the primary store. The server prioritizes the primary store but falls back to the backup if needed. Use the CLI commands: -- `zenml secret backup`: Backs up secrets from the primary to the backup store. -- `zenml secret restore`: Restores secrets from the backup to the primary store. +## Backup Secrets Store +ZenML can connect to a secondary Secrets Store for high availability and disaster recovery. Ensure the backup store is in a different location than the primary store to avoid issues. The server prioritizes the primary store for read/write operations, falling back to the backup if necessary. Use the CLI commands: +- `zenml secret backup`: Backs up secrets to the backup store. +- `zenml secret restore`: Restores secrets from the backup store to the primary store. -#### Secrets Migration Strategy -To change the provider or location of secrets, follow a migration strategy: -1. Configure the ZenML server to use the new store as the secondary. +## Secrets Migration Strategy +To change the secrets storage provider or location, follow this migration process: +1. Set the new store as the secondary store. 2. Redeploy the server. -3. Use `zenml secret backup` to transfer secrets from the old store to the new one. -4. Set the new store as primary and remove the old one. +3. Use `zenml secret backup` to transfer secrets from the primary to the secondary store. +4. Set the secondary store as the primary and remove the old primary. 5. Redeploy the server. -This strategy is unnecessary if only credentials or authentication methods change without altering the secrets' location. +This strategy ensures existing secrets are migrated with minimal downtime. Migration is unnecessary if only updating credentials or authentication methods without changing the storage location. For more details on deployment strategies, refer to the ZenML deployment guide.
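As a concrete sketch of the migration commands, assuming the server has already been redeployed with the new back-end configured as the secondary store (both commands are the documented CLI entry points):

```bash
# Step 3: copy all secrets from the current primary store to the new
# (secondary) back-end
zenml secret backup

# Steps 4-5 happen in the server configuration: promote the new back-end
# to primary, drop the old one, and redeploy. If secrets ever need to be
# copied back from the backup store afterwards, use:
zenml secret restore
```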