=== File: docs/book/introduction.md === # ZenML Documentation Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines. It decouples infrastructure from code, enhancing collaboration among developers. ## For MLOps Platform Engineers - **ZenML Pro**: Offers a managed instance with features like CI/CD, Model Control Plane, and RBAC. - **Self-hosted Deployment**: Deploy on any cloud provider using Terraform utilities. ```bash zenml stack register --provider aws zenml stack deploy --provider gcp ``` - **Standardization**: Register environments as ZenML stacks for consistent ML workflows. ```bash zenml orchestrator register kfp_orchestrator -f kubeflow zenml stack register production --orchestrator kubeflow ... ``` - **No Vendor Lock-In**: Easily switch between cloud providers. ```bash zenml stack set gcp python run.py # Run on GCP zenml stack set aws python run.py # Now on AWS ``` ## For Data Scientists - **Local Development**: Develop ML models locally and switch to production seamlessly. ```bash python run.py # Local development zenml stack set production python run.py # Production run ``` - **Pythonic SDK**: Use decorators to create pipelines. ```python from zenml import pipeline, step @step def step_1() -> str: return "world" @step def step_2(input_one: str, input_two: str) -> None: print(f"{input_one} {input_two}") @pipeline def my_pipeline(): step_2(input_one="hello", input_two=step_1()) my_pipeline() ``` - **Automatic Metadata Tracking**: Tracks metadata and versions datasets/models automatically. ## For ML Engineers - **ML Lifecycle Management**: Manage ML workflows and environments efficiently. ```bash zenml stack set staging python run.py # Test on staging zenml stack set production python run.py # Run in production ``` - **Reproducibility**: Automatically track and version all components. - **Automated Deployments**: Define workflows for automatic deployment to services like Seldon. ```python from zenml.integrations.seldon.steps import seldon_model_deployer_step @pipeline def my_pipeline(): data = data_loader_step() model = model_trainer_step(data) seldon_model_deployer_step(model) ``` ## Additional Resources - **For MLOps Engineers**: Guides on production setup, component integration, and FAQs. - **For Data Scientists**: Core concepts, starter guides, and quickstart resources. - **For ML Engineers**: Advanced features, examples, and how-to guides. Explore more at [ZenML Live Demo](https://www.zenml.io/live-demo) and access various guides for in-depth learning. ================================================== === File: docs/book/user-guide/starter-guide/track-ml-models.md === ### Summary of ZenML Model Control Plane Documentation #### Overview The ZenML Model Control Plane (MCP) is a centralized system for managing ML models, which consist of multiple versions, pipelines, artifacts, and metadata. A ZenML Model encapsulates the business logic of an ML product, linking various artifacts such as training data and predictions. #### Key Concepts - **Model**: Represents a unified entity that includes pipelines, artifacts, and metadata. - **Technical Model**: The actual model files containing weights and parameters, alongside other relevant artifacts. #### Model Management Models can be managed through: - **CLI**: Use `zenml model list` to list models. - **ZenML Pro Dashboard**: Offers visualization and management capabilities. 
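Models can also be inspected programmatically. A minimal sketch using the ZenML client, assuming a model named `iris_classifier` has already been registered (exact client method names and response fields may vary slightly across ZenML versions):

```python
from zenml.client import Client

client = Client()

# List registered models (programmatic counterpart to `zenml model list`)
for model in client.list_models().items:
    print(model.name)

# Fetch a single model by name
iris_model = client.get_model("iris_classifier")
print(iris_model.name)
```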
#### Configuring Models in Pipelines Models can be linked to pipelines and steps, ensuring all artifacts generated during runs are associated with the specified model. This is done using the `Model` object. **Example**: ```python from zenml import pipeline, Model model = Model(name="iris_classifier", version=None) @pipeline(model=model) def training_pipeline(gamma: float = 0.002): # Pipeline logic here ``` #### Fetching Models in Pipelines Models can be accessed via `StepContext` or `PipelineContext`. **Example**: ```python from zenml import get_step_context, step, pipeline @step def svc_trainer(...): model = get_step_context().model ``` #### Logging Metadata Metadata can be logged to models using the `log_model_metadata` method. **Example**: ```python from zenml import log_model_metadata log_model_metadata(model_name="iris_classifier", metadata={"accuracy": float(accuracy)}) ``` #### Retrieving Model Metadata Model metadata can be fetched using the ZenML client. **Example**: ```python from zenml.client import Client model_version = Client().get_model_version('iris_classifier') accuracy = model_version.run_metadata["accuracy"].value ``` #### Model Stages Models can exist in different stages: - **staging**: For production readiness. - **production**: Actively used in production. - **latest**: The most recent version. - **archived**: No longer relevant. **Example**: ```python model = Model(name="iris_classifier", version="staging") model.set_stage(stage="production", force=True) ``` #### CLI Commands for Model Stages - List staging models: `zenml model version list --stage staging` - Update to production: `zenml model version update -s production` #### Conclusion ZenML's Model Control Plane provides robust features for managing ML models, ensuring traceability, reproducibility, and effective metadata logging. For in-depth guidance, refer to the dedicated Model Management documentation. ================================================== === File: docs/book/user-guide/starter-guide/manage-artifacts.md === ### ZenML Artifact Management Overview ZenML provides a framework for managing and versioning data artifacts in machine learning workflows, ensuring reproducibility and traceability. This documentation covers how to effectively name, organize, and utilize artifacts produced by ZenML pipelines. #### Key Features 1. **Automatic Versioning**: Artifacts are automatically versioned upon pipeline execution, following a naming pattern of `{pipeline_name}::{step_name}::output` for unspecified outputs. 2. **Custom Naming**: Use the `Annotated` object to assign human-readable names to outputs, enhancing discoverability. ```python from typing_extensions import Annotated import pandas as pd from sklearn.datasets import load_iris from zenml import pipeline, step @step def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]: iris = load_iris(as_frame=True) return iris.get("frame") @pipeline def feature_engineering_pipeline(): training_data_loader() ``` 3. **Manual Versioning**: Customize artifact versions using `ArtifactConfig` for important runs. ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]: ... ``` 4. **Metadata and Tags**: Extend artifacts with metadata and tags for better organization. ```python @step def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"key": "value"}, tags=["tag"])]: return "string" ``` 5. 
**Comparison Tool (Pro)**: Analyze metadata across different runs using Table and Parallel Coordinates views. 6. **Artifact Types**: Specify artifact types for better filtering and visualization. ```python from zenml.enums import ArtifactType @step def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactType.MODEL)]: return MyCustomModel(...) ``` 7. **External Artifacts**: Use `ExternalArtifact` to consume data from outside the pipeline. ```python from zenml import ExternalArtifact, pipeline, step @step def print_data(data: np.ndarray): print(data) @pipeline def printing_pipeline(): data = ExternalArtifact(value=np.array([0])) print_data(data=data) ``` 8. **Consuming Artifacts**: Fetch artifacts produced by other pipelines using the `Client`. ```python from zenml.client import Client client = Client() dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset") ``` 9. **Linking Existing Data**: Register existing data as ZenML artifacts without moving them around. ```python from zenml import register_artifact register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` 10. **Logging Metadata**: Associate metadata with artifacts for better insights. ```python from zenml import log_artifact_metadata log_artifact_metadata(artifact_name="my_model", metadata={"accuracy": float(accuracy)}) ``` ### Example Pipeline Code Here’s a consolidated example of how to manage artifacts in ZenML: ```python from typing import Optional, Tuple from typing_extensions import Annotated import numpy as np from sklearn.base import ClassifierMixin from sklearn.datasets import load_digits from sklearn.svm import SVC from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata, save_artifact, load_artifact, Client @step def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset")]: digits = load_digits() return (digits.images.reshape((len(digits.images), -1)), digits.target) @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline def model_finetuning_pipeline(dataset_version: Optional[str] = None, model_version: Optional[str] = None): client = Client() dataset = client.get_artifact_version(name_id_or_prefix="my_dataset", version=dataset_version) if dataset_version else versioned_data_loader_step() model = client.get_artifact_version(name_id_or_prefix="my_model", version=model_version) model_finetuner_step(model=model, dataset=dataset) def main(): untrained_model = SVC(gamma=0.001) save_artifact(untrained_model, name="my_model", version="1") model_finetuning_pipeline() model_finetuning_pipeline(dataset_version="1") latest_trained_model = load_artifact("my_model") old_dataset = load_artifact("my_dataset", version="1") latest_trained_model.predict(old_dataset[0]) if __name__ == "__main__": main() ``` This example demonstrates the creation of a dataset, model training, and the management of versions and metadata in ZenML. For more details, refer to the [ZenML documentation](https://zenml.io/docs). 
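To read back metadata logged with `log_artifact_metadata` (as in the example above), here is a minimal sketch using the ZenML client; the artifact name `my_model` and the `accuracy` key mirror the example, and depending on your ZenML version the metadata values may be returned directly rather than wrapped in a `.value` attribute:

```python
from zenml.client import Client

client = Client()

# Fetch the latest version of the "my_model" artifact produced above
artifact_version = client.get_artifact_version(name_id_or_prefix="my_model")

# Read back the metadata attached via `log_artifact_metadata`
accuracy = artifact_version.run_metadata["accuracy"].value
print(f"Logged accuracy: {accuracy}")
```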
================================================== === File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md === ### Summary of ZenML Documentation #### Overview ZenML facilitates the creation of production-ready machine learning (ML) pipelines by decoupling stages such as data ingestion, preprocessing, and model evaluation into modular **Steps**. These Steps are integrated into an end-to-end **Pipeline**, promoting reusability, scalability, and reproducibility. #### Installation To start using ZenML, install it with: ```shell pip install "zenml[server]" zenml login --local # Launches the dashboard locally ``` #### Simple ML Pipeline Example A basic example of a ZenML pipeline is provided, demonstrating data loading and model training: ```python from zenml import pipeline, step @step def load_data() -> dict: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) if __name__ == "__main__": run = simple_ml_pipeline() ``` #### Dashboard Exploration After running the pipeline, use `zenml login --local` to view results in the ZenML Dashboard, accessible at `http://127.0.0.1:8237/`. Log in with the username **"default"**. The dashboard provides insights into execution history, artifacts, and a DAG visualization of the pipeline. #### Steps and Artifacts Each function in the pipeline is a `step`, connected by `artifacts` (output objects). ZenML automatically tracks artifacts, parameters, and configurations, enhancing reproducibility. #### Expanding to a Full ML Workflow To create a more complex workflow using the Iris dataset and a Support Vector Classifier (SVC): 1. **Imports and Requirements**: ```python from typing_extensions import Annotated, Tuple import pandas as pd from sklearn.base import ClassifierMixin from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.svm import SVC from zenml import pipeline, step ``` Install required packages: ```bash pip install matplotlib zenml integration install sklearn -y ``` 2. **Data Loader**: ```python @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) ``` 3. **Training Step**: ```python @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) ``` 4. 
**Pipeline Definition**: ```python @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ``` #### YAML Configuration Pipelines can also be configured using a YAML file: ```python training_pipeline = training_pipeline.with_options(config_path='/local/path/to/config.yaml') training_pipeline() ``` A simple YAML configuration might look like: ```yaml parameters: gamma: 0.01 ``` #### Full Code Example The complete code for the Iris dataset pipeline is provided, encapsulating all the steps and configurations discussed. This summary captures the essential components and functionalities of ZenML for creating and managing ML pipelines, ensuring that critical information is retained for understanding and implementation. ================================================== === File: docs/book/user-guide/starter-guide/cache-previous-executions.md === ### Summary: Iterating Quickly with ZenML through Caching ZenML enhances the development of machine learning pipelines through step caching, which allows the reuse of outputs from previous runs if there are no changes in inputs, parameters, or code. Caching is enabled by default, and outputs are stored in an artifact store. #### Key Features: - **Caching Behavior**: - Steps are cached unless there are changes in inputs, parameters, or code. - Caching saves time and resources, especially when running pipelines remotely. - To disable client-side caching, set the environment variable `ZENML_PREVENT_CLIENT_SIDE_CACHING=True`. - **Manual Caching Control**: - Caching does not automatically detect file system changes or external API updates. Use `enable_cache=False` for steps that depend on such changes. #### Configuring Caching: 1. **Pipeline Level**: Set caching policy in the `@pipeline` decorator. ```python @pipeline(enable_cache=False) def first_pipeline(...): """Pipeline with cache disabled""" ``` 2. **Runtime Control**: Override caching settings at runtime using `with_options`. ```python first_pipeline = first_pipeline.with_options(enable_cache=False) ``` 3. **Step Level**: Configure caching for individual steps. 
```python @step(enable_cache=False) def import_data_from_api(...): """Always run this step""" ``` #### Example Code: The following script demonstrates caching behavior in ZenML: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() logger.info("First step cached, second not due to parameter change") training_pipeline(gamma=0.0001) svc_trainer = svc_trainer.with_options(enable_cache=False) logger.info("First step cached, second not due to settings") training_pipeline() logger.info("Caching disabled for the entire pipeline") training_pipeline.with_options(enable_cache=False)() ``` This script illustrates how caching works in ZenML, including how to disable it at various levels. ================================================== === File: docs/book/user-guide/starter-guide/starter-project.md === ### Starter Project Overview This documentation outlines the steps to initiate a simple MLOps project using ZenML, focusing on key components such as pipelines, artifacts, and models. #### Getting Started 1. **Set Up Environment**: Create a fresh virtual environment and install dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 2. **Initialize Project**: Use ZenML templates to set up your project: ```bash mkdir zenml_starter cd zenml_starter zenml init --template starter --template-with-defaults pip install -r requirements.txt ``` **Alternative Method**: If the above doesn't work, clone the MLOps starter example: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/mlops_starter pip install -r requirements.txt zenml init ``` #### Learning Outcomes By following the provided Jupyter notebook or README, you will execute three key pipelines: - **Feature Engineering Pipeline**: Loads and prepares data for training. - **Training Pipeline**: Trains a model using the preprocessed dataset. - **Batch Inference Pipeline**: Runs predictions on new data using the trained model. #### Conclusion and Next Steps This marks the completion of the initial phase of your MLOps journey with ZenML. Experiment with ZenML to solidify your understanding, and when ready, proceed to the [production guide](../production-guide/) for advanced topics. 
================================================== === File: docs/book/user-guide/starter-guide/README.md === # ZenML Starter Guide Summary The ZenML Starter Guide is designed for MLOps engineers and data scientists to build robust ML platforms using the ZenML framework. It provides foundational knowledge and tools to manage machine learning operations effectively. ## Key Topics Covered: - **Creating Your First ML Pipeline**: Instructions for setting up an initial pipeline. - **Understanding Caching**: Techniques for caching between pipeline steps. - **Managing Data and Versioning**: Best practices for data management and version control. - **Tracking ML Models**: Methods for monitoring and tracking machine learning models. ## Prerequisites: - A Python environment. - `virtualenv` installed. By the end of the guide, users will complete a starter project, marking the beginning of their MLOps journey with ZenML. This guide serves as both an introduction to ZenML and a foundational resource for MLOps practices. ================================================== === File: docs/book/user-guide/production-guide/ci-cd.md === ### Managing the Lifecycle of a ZenML Pipeline with CI/CD #### Overview To enhance ZenML pipeline management, integrating Continuous Integration and Delivery (CI/CD) through a central workflow engine is recommended. This allows data scientists to test and validate code changes automatically before deployment to production. #### CI/CD Setup with GitHub Actions Using GitHub Actions, you can establish a CI/CD workflow for your ZenML pipelines. The [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/) serves as a template for automating continuous model training and deployment. #### Steps to Set Up CI/CD 1. **Create an API Key in ZenML**: Generate an API key for machine-to-machine connections. ```bash zenml service-account create github_action_api_key ``` Store the returned API key securely. 2. **Configure GitHub Secrets**: Store the `ZENML_API_KEY` in your GitHub repository secrets. Refer to [GitHub documentation](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) for details. 3. **(Optional) Set Up Staging and Production Stacks**: You may choose different stacks for staging and production. This can include different data sources and configurations for models, Docker settings, and resource settings. 4. **Trigger Pipeline on Pull Request**: Set up a GitHub Action to run the pipeline when changes are made. Use the following configuration in your YAML file: ```yaml on: pull_request: branches: [staging, main] ``` Define job environment variables: ```yaml jobs: run-staging-workflow: runs-on: run-zenml-pipeline env: ZENML_STORE_URL: ${{ secrets.ZENML_HOST }} ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }} ZENML_STACK: stack_name ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }} ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }} ``` 5. 
**Run the Pipeline**: Include the following steps in your GitHub Action: ```yaml steps: - name: Check out repository code uses: actions/checkout@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install requirements run: pip3 install -r requirements.txt - name: Confirm ZenML client connection run: zenml status - name: Set stack run: zenml stack set ${{ env.ZENML_STACK }} - name: Run pipeline run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} ``` 6. **(Optional) Comment Metrics on the Pull Request**: Configure the workflow to leave a report based on the pipeline execution. Refer to the template in the ZenML Gitflow Repository for implementation details. This setup ensures that code changes are automatically tested and validated, streamlining the deployment process and maintaining code quality. ================================================== === File: docs/book/user-guide/production-guide/remote-storage.md === ### Summary: Transitioning to Remote Artifact Storage #### Overview Transitioning from local to remote artifact storage enhances collaboration and scalability in production environments. Remote storage allows artifacts to be stored in the cloud, making them accessible from anywhere with the right permissions. #### Provisioning Remote Artifact Stores ZenML supports various cloud providers for remote artifact storage. Below are instructions for major providers: - **AWS (S3)** 1. Install AWS CLI. 2. Install ZenML S3 integration: ```shell zenml integration install s3 -y ``` 3. Register the S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name ``` - **GCP (GCS)** 1. Install Google Cloud CLI. 2. Install ZenML GCP integration: ```shell zenml integration install gcp -y ``` 3. Register the GCS Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name ``` - **Azure** 1. Install Azure CLI. 2. Install ZenML Azure integration: ```shell zenml integration install azure -y ``` 3. Register the Azure Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name ``` - **Other Providers** Use cloud-agnostic solutions like [Minio](../../component-guide/artifact-stores/artifact-stores.md) or create custom stack components. #### Configuring Permissions with Service Connectors Service connectors manage access credentials for cloud infrastructure. They broker temporary permissions for stack components. - **AWS Service Connector** ```shell AWS_PROFILE=<AWS_PROFILE> zenml service-connector register cloud_connector --type aws --auto-configure ``` - **GCP Service Connector** ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@<PATH_TO_SERVICE_ACCOUNT_JSON> --project_id=<PROJECT_ID> --generate_temporary_tokens=False ``` - **Azure Service Connector** ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET> ``` Attach the service connector to the artifact store: ```shell zenml artifact-store connect cloud_artifact_store --connector cloud_connector ``` #### Running a Pipeline on a Cloud Stack 1. Register a new stack: ```shell zenml stack register local_with_remote_storage -o default -a cloud_artifact_store ``` 2. Set the stack active: ```shell zenml stack set local_with_remote_storage ``` 3. 
Run the training pipeline: ```shell python run.py --training-pipeline ``` Artifacts will be stored in the remote location, allowing team members to access them. You can list artifact versions: ```shell zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')" ``` #### Conclusion By connecting to remote storage, you enhance collaboration and scalability in MLOps workflows, making artifacts accessible across the cloud ecosystem. ================================================== === File: docs/book/user-guide/production-guide/understand-stacks.md === # Summary of Switching Infrastructure Backend in ZenML ## Understanding Stacks - A **stack** is the configuration of tools and infrastructure for running ZenML pipelines. - By default, pipelines run on the **default** stack unless specified otherwise. - ZenML separates code from configuration, allowing easy switching of environments without code changes. ### Key Commands - **Export Requirements**: `zenml stack export-requirements ` - **Describe Active Stack**: ```bash zenml stack describe ``` - **List Registered Stacks**: ```bash zenml stack list ``` ## Stack Components - A stack includes at least an **orchestrator** and an **artifact store**. - **Orchestrator**: Executes pipeline code (e.g., local Python thread). - List orchestrators: ```bash zenml orchestrator list ``` - **Artifact Store**: Persists outputs of pipeline steps. - List artifact stores: ```bash zenml artifact-store list ``` ### Additional Components - Other components include experiment trackers and model deployers. - **Container Registry**: Stores containerized images for code execution. ## Registering a Stack ### Create an Artifact Store ```bash zenml artifact-store register my_artifact_store --flavor=local ``` ### Create a New Stack ```bash zenml stack register a_new_local_stack -o default -a my_artifact_store ``` ### Inspecting the Stack ```bash zenml stack describe a_new_local_stack ``` ## Switching Stacks with VS Code Extension - Use the ZenML VS Code extension to view and switch stacks easily. ## Running a Pipeline on the New Local Stack 1. Set the new stack as active: ```bash zenml stack set a_new_local_stack ``` 2. Run the pipeline: ```bash python run.py --training-pipeline ``` ### Installation Steps for Starter Project ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y mkdir zenml_starter cd zenml_starter zenml init --template starter --template-with-defaults pip install -r requirements.txt ``` ### Alternative Cloning Method ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/mlops_starter pip install -r requirements.txt zenml init ``` This summary provides essential commands and concepts for managing stacks in ZenML, ensuring a clear understanding of how to switch infrastructure backends effectively. ================================================== === File: docs/book/user-guide/production-guide/configure-pipeline.md === ### Summary of ZenML Pipeline Configuration Documentation This documentation outlines how to enhance your ZenML pipeline configuration by adding compute resources and managing dependencies through a YAML configuration file. #### Configuring the Pipeline To configure a ZenML pipeline, you can specify a YAML file (e.g., `training_rf.yaml`) that contains necessary settings. The configuration is applied using the `with_options` method. 
**Example Code:** ```python pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml") training_pipeline_configured = training_pipeline.with_options(**pipeline_args) training_pipeline_configured() ``` #### YAML Configuration Breakdown 1. **Docker Settings:** ```yaml settings: docker: required_integrations: - sklearn requirements: - pyarrow ``` This section specifies Docker settings, including required integrations and Python package dependencies. 2. **Model Association:** ```yaml model: name: breast_cancer_classifier version: rf license: Apache 2.0 description: A breast cancer classifier tags: ["breast_cancer", "classifier"] ``` This section associates a ZenML model with the pipeline, providing metadata such as name, version, and description. 3. **Parameters:** ```yaml parameters: model_type: "rf" # Choose between rf/sgd ``` This section defines parameters expected by the pipeline, such as the model type. #### Scaling Compute Resources To scale compute resources for the pipeline, you can add resource specifications to your YAML configuration. **Example for GCP:** ```yaml settings: orchestrator: memory: 32 # in GB steps: model_trainer: settings: orchestrator: cpus: 8 ``` **Example for Azure (Kubernetes):** ```yaml settings: resources: memory: "32GB" steps: model_trainer: settings: resources: memory: "8GB" ``` After updating the configuration, run the pipeline using: ```python python run.py --training-pipeline ``` #### Important Notes - Not all orchestrators support `ResourceSettings` directly. - For more details on settings and GPU support, refer to the ZenML documentation on runtime configuration and GPU training. This concise configuration guide enables efficient management of pipeline resources and dependencies in ZenML. ================================================== === File: docs/book/user-guide/production-guide/deploying-zenml.md === ### Deploying ZenML **Overview**: Deploying ZenML is essential for moving from local development to production. Initially, ZenML operates with a local SQLite database for metadata storage (pipelines, models, artifacts). For production, the server must be deployed centrally to facilitate collaboration and interaction with infrastructure components. #### Deployment Options 1. **ZenML Pro Trial**: - Sign up for a managed SaaS solution with one-click deployment. - If the ZenML Python client is installed, connect to a trial instance using: ```bash zenml login --pro ``` - Additional features and a new dashboard are included. You can revert to self-hosting later. 2. **Self-hosting on Cloud Provider**: - ZenML is open source and can be self-hosted in a Kubernetes cluster. - Create a Kubernetes cluster using your cloud provider's documentation: - [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) - [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) - [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) #### Connecting to Deployed ZenML To connect your local ZenML client to the ZenML Server, use the command: ```bash zenml login ``` This command initiates a browser-based validation process. Once connected, all metadata will be centrally tracked. To revert to local ZenML, use: ```bash zenml logout ``` #### Further Resources - [Deploying ZenML](../../getting-started/deploying-zenml/README.md): Overview of deployment options and architecture. 
- [Full how-to guides](../../getting-started/deploying-zenml/README.md): Instructions for deploying ZenML on various platforms (Docker, Hugging Face Spaces, Kubernetes). ================================================== === File: docs/book/user-guide/production-guide/connect-code-repository.md === ### Summary of ZenML Git Integration Documentation **Overview**: Connect a Git repository to ZenML to enhance collaboration and optimize Docker builds in MLOps projects. This integration allows for reusing Docker images based on Git commit hashes, reducing build times and improving efficiency. **Pipeline Execution Flow**: 1. Trigger a pipeline run locally. 2. ZenML parses the `@pipeline` function for steps. 3. Local client requests stack info from ZenML server. 4. Client checks Git repository for existing Docker images based on commit hash. 5. Orchestrator sets up the cloud execution environment. 6. Code is downloaded from Git repository; existing Docker image is used. 7. Pipeline steps execute, storing artifacts in the cloud. 8. Run status and metadata are reported back to ZenML server. **Creating a GitHub Repository**: 1. Sign in to GitHub. 2. Click "+" and select "New repository." 3. Name the repository, set visibility, and add README/.gitignore if needed. 4. Click "Create repository." **Pushing Local Code to GitHub**: ```sh git init git add . git commit -m "Initial commit" git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git git push -u origin master ``` *Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` with your information.* **Linking GitHub to ZenML**: 1. Obtain a GitHub Personal Access Token (PAT) via GitHub settings under Developer settings. 2. Generate a new token with `contents` read-only access. **Install GitHub Integration and Register Repository**: ```sh zenml integration install github zenml code-repository register --type=github \ --owner= --repository= \ --token= ``` *Fill in placeholders with your details.* **Running the Training Pipeline**: ```python # First run builds the Docker image python run.py --training-pipeline # Subsequent runs skip Docker building python run.py --training-pipeline ``` For further details, refer to the ZenML Git Integration documentation. ================================================== === File: docs/book/user-guide/production-guide/end-to-end.md === ### End-to-End MLOps Project with ZenML This documentation outlines the steps to create an end-to-end MLOps project using ZenML, integrating advanced MLOps concepts. #### Key Concepts Covered: - **Deploying ZenML** - **Abstracting Infrastructure with Stacks** - **Connecting Remote Storage** - **Cloud Orchestration** - **Scaling Compute in Pipelines** - **Connecting a Git Repository** #### Getting Started 1. **Set Up a Virtual Environment**: Start with a clean environment. 2. **Install Dependencies**: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 3. **Initialize Project with ZenML Templates**: ```bash mkdir zenml_batch_e2e cd zenml_batch_e2e zenml init --template e2e_batch --template-with-defaults pip install -r requirements.txt ``` **Alternative Method**: Clone the ZenML example if the above fails: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/e2e pip install -r requirements.txt zenml init ``` #### Learning Outcomes The e2e project template demonstrates core ZenML concepts for supervised ML with batch predictions, building on the starter project. 
Users are encouraged to run pipelines on a remote cloud stack and a tracked git repository to reinforce learned concepts. #### Conclusion and Next Steps You now have an end-to-end MLOps project using ZenML connected to cloud infrastructure. Explore advanced topics in the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md) to further enhance your skills in writing pipelines and stacks. Good luck with your MLOps journey! ================================================== === File: docs/book/user-guide/production-guide/cloud-orchestration.md === ### Summary of Cloud Orchestration Documentation #### Overview This documentation outlines the process of transitioning MLOps pipelines from local execution to a cloud environment, utilizing components such as an orchestrator and a container registry. #### Key Components 1. **Orchestrator**: Manages workflow and execution of pipelines. 2. **Container Registry**: Stores Docker container images. 3. **Remote Storage**: Completes the cloud stack for running pipelines. #### Cloud Stack Execution Sequence 1. User runs a pipeline locally, executing `run.py`. 2. Client retrieves stack configuration from the server. 3. Client builds and pushes a Docker image to the container registry. 4. Client creates a run in the orchestrator (e.g., SkyPilot). 5. Orchestrator pulls the image from the container registry to execute the pipeline. 6. Artifacts are stored in the artifact store (cloud storage). 7. Pipeline status and metadata are reported back to the ZenML server. #### Setting Up Cloud Components - **AWS Setup**: ```shell zenml integration install aws skypilot_aws -y AWS_PROFILE=<AWS_PROFILE> zenml service-connector register cloud_connector --type aws --auto-configure zenml orchestrator register cloud_orchestrator -f vm_aws zenml orchestrator connect cloud_orchestrator --connector cloud_connector zenml container-registry register cloud_container_registry -f aws --uri=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com zenml container-registry connect cloud_container_registry --connector cloud_connector ``` - **GCP Setup**: ```shell zenml integration install gcp skypilot_gcp -y zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@<PATH_TO_SERVICE_ACCOUNT_JSON> --project_id=<PROJECT_ID> zenml orchestrator register cloud_orchestrator -f vm_gcp zenml orchestrator connect cloud_orchestrator --connector cloud_connector zenml container-registry register cloud_container_registry -f gcp --uri=gcr.io/<PROJECT_ID> zenml container-registry connect cloud_container_registry --connector cloud_connector ``` - **Azure Setup**: ```shell zenml integration install azure kubernetes -y zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET> zenml orchestrator register cloud_orchestrator --flavor kubernetes zenml orchestrator connect cloud_orchestrator --connector cloud_connector zenml container-registry register cloud_container_registry -f azure --uri=<REGISTRY_NAME>.azurecr.io zenml container-registry connect cloud_container_registry --connector cloud_connector ``` #### Running a Pipeline on Cloud Stack 1. Register a new stack: ```shell zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry ``` 2. Set the stack active: ```shell zenml stack set minimal_cloud_stack ``` 3. Execute the training pipeline: ```shell python run.py --training-pipeline ``` The pipeline will build a Docker image, push it to the cloud, and execute in a VM, with logs streamed back to the user. 
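Because every run on this stack builds and pushes a Docker image, the image contents can also be pinned in code rather than only in YAML. A minimal sketch using `DockerSettings`; the integration and package listed are illustrative and should be replaced with the pipeline's actual dependencies:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Example dependencies only; substitute whatever your steps actually import
docker_settings = DockerSettings(
    required_integrations=["sklearn"],
    requirements=["pyarrow"],
)

@pipeline(settings={"docker": docker_settings})
def training_pipeline():
    ...
```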
#### Additional Resources For more details on setting up components, users can refer to the Component Guide for various artifact stores, container registries, and orchestrators integrated with ZenML. ================================================== === File: docs/book/user-guide/production-guide/README.md === # Production Guide Summary The ZenML production guide is designed for ML practitioners looking to implement MLOps in a workplace setting, building on the concepts from the Starter guide. It focuses on transitioning from local pipeline execution to running pipelines in cloud production environments. ## Key Topics Covered: - **Deploying ZenML**: Instructions for deploying ZenML in a production environment. - **Understanding Stacks**: Overview of the stack architecture used in ZenML. - **Connecting Remote Storage**: Guidance on integrating remote storage solutions. - **Orchestrating on the Cloud**: Techniques for managing cloud-based orchestration. - **Configuring the Pipeline for Scalability**: Tips for scaling compute resources in pipelines. - **Connecting a Code Repository**: Steps to configure a code repository for version control. ## Prerequisites: - A Python environment with `virtualenv` installed. - Access to a major cloud provider (AWS, GCP, Azure) with the respective CLIs installed and authorized. By following this guide, users will complete an end-to-end MLOps project, serving as a model for future implementations. ================================================== === File: docs/book/user-guide/llmops-guide/README.md === # ZenML LLMOps Guide Summary The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps workflows. It targets ML practitioners and MLOps engineers aiming to utilize LLMs while ensuring robust and scalable pipelines. ## Key Topics Covered: - **RAG with ZenML**: Introduction to Retrieval-Augmented Generation (RAG). - **Data Handling**: - Data ingestion and preprocessing. - Generating and storing embeddings in a vector database. - **Inference Pipeline**: Basic RAG inference setup. - **Evaluation**: - Metrics for retrieval and generation. - Practical evaluation methods. - **Reranking**: - Understanding and implementing reranking for improved retrieval. - Evaluating reranking performance. - **Finetuning**: - Techniques for finetuning embeddings and LLMs. - Synthetic data generation and evaluation of finetuned models. - Deployment of finetuned models. ## Implementation Guidance: - The guide includes practical examples, starting from a simple RAG pipeline to complex setups involving finetuning and reranking. - A specific use case is presented: a question-answering system for ZenML. ## Prerequisites: - A Python environment with ZenML installed. - Familiarity with concepts from the Starter and Production Guides. By following this guide, users will learn to effectively leverage LLMs in their MLOps workflows, enabling the creation of scalable and maintainable applications. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings-with-sentence-transformers.md === ### Summary: Finetuning Embeddings with Sentence Transformers This documentation outlines the process for finetuning embeddings using the Sentence Transformers library. The pipeline involves loading a dataset, finetuning the model, evaluating the embeddings, and visualizing the results. #### Key Steps in the Pipeline: 1. 
**Data Loading**: - Load data from Hugging Face or Argilla (using `--argilla` flag). ```bash python run.py --embeddings --argilla ``` 2. **Finetuning Process**: - **Model Loading**: Load the base model using Sentence Transformers. - **Loss Function**: Use `MatryoshkaLoss`, which wraps `MultipleNegativesRankingLoss`, allowing training with multiple embedding dimensions. - **Dataset Preparation**: Load training data from a specified dataset path. - **Evaluator**: Create an evaluator to assess model performance. - **Training Arguments**: Set hyperparameters (epochs, batch size, learning rate, etc.) using `SentenceTransformerTrainingArguments`. - **Trainer Initialization**: Use `SentenceTransformerTrainer` with the model, arguments, dataset, and loss function. - **Training Execution**: Call `trainer.train()` to start the finetuning process. - **Model Saving**: Push the finetuned model to Hugging Face Hub. - **Metadata Logging**: Log training parameters and hardware information. - **Model Rehydration**: Save and reload the trained model to handle errors. #### Simplified Code Snippet: ```python # Load the base model model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) # Define the loss function train_loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model)) # Prepare the training dataset train_dataset = load_dataset("json", data_files=train_dataset_path) # Set up the training arguments args = SentenceTransformerTrainingArguments(...) # Create the trainer trainer = SentenceTransformerTrainer(model, args, train_dataset, train_loss) # Start training trainer.train() # Save the finetuned model trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` The finetuning process enhances the model's performance across different embedding sizes, and the final model is versioned and tracked in ZenML for observability. The pipeline concludes with an evaluation of the embeddings and visualization of results. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-generation.md === ### Summary of Documentation for Synthetic Data Generation with Distilabel **Objective**: Generate synthetic data to fine-tune embeddings using the existing dataset of technical documentation from ZenML. **Dataset**: The dataset consists of `page_content` chunks and their source URLs, available on Hugging Face. The goal is to pair these chunks with generated questions. **Pipeline Overview**: 1. Load the Hugging Face dataset. 2. Use `distilabel` to generate synthetic data. 3. Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. **Synthetic Data Generation**: - **Tool**: `distilabel` allows for scalable generation of synthetic data using LLMs. In this case, `gpt-4o` is used, but other LLMs are supported. - **Pipeline Setup**: - Load the dataset and map `page_content` to `anchor`. - Generate queries using `GenerateSentencePair`, producing both positive and negative queries for each chunk. 
**Key Code Snippet**: ```python @step def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> Tuple[Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"]]: llm = OpenAILLM(model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY")) with distilabel.pipeline.Pipeline(name="generate_embedding_queries") as pipeline: load_dataset = LoadDataFromHub(output_mappings={"page_content": "anchor"}) generate_sentence_pair = GenerateSentencePair(triplet=True, action="query", llm=llm, input_batch_size=10, context=synthetic_generation_context) load_dataset >> generate_sentence_pair train_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "train"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) test_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "test"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) return train_distiset["default"]["train"], test_distiset["default"]["train"] ``` **Data Annotation with Argilla**: - After generating synthetic data, it is pushed to Argilla for inspection and annotation. - Additional metadata includes: - `parent_section`: Documentation section. - `token_count`: Number of tokens in the chunk. - Similarity metrics between queries. **Key Code Snippet for Formatting Data**: ```python def format_data(batch): model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") batch["anchor-vector"] = model.encode(batch["anchor"]).tolist() # Similarity calculations omitted for brevity return batch ``` **Next Steps**: After data exploration and annotation in Argilla, proceed to fine-tune the embeddings using the generated dataset, assuming the quality is sufficient. This concise summary captures the essential technical details and steps involved in generating synthetic data using `distilabel` for embedding fine-tuning. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetuned-embeddings.md === ### Summary of Documentation on Evaluating Finetuned Embeddings This documentation outlines the process of evaluating finetuned embeddings against base embeddings using the ZenML framework. The evaluation utilizes the `MatryoshkaLoss` function, and the results are logged for further analysis. #### Key Functions and Code 1. **Model Evaluation Function**: - Evaluates a given model using a specified dataset. ```python def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: evaluator = get_evaluator(dataset=dataset, model=model) return evaluator(model) ``` 2. **Base Model Evaluation Step**: - Initializes the base model and evaluates it on the dataset, logging results as metadata. 
```python @step def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") results = evaluate_model(dataset=dataset, model=model) base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} log_model_metadata(metadata={"base_model_eval": base_model_eval}) return results ``` #### Result Logging and Visualization - Evaluation results are stored as a dictionary, versioned, and saved in the artifact store. Visualization can be done using `PIL.Image` or `matplotlib`, allowing for comparison between base and finetuned model evaluations. #### Insights and Recommendations - The finetuned embeddings show improved recall, but further data refinement may be necessary for better performance. It is suggested to optimize the training data by removing low-signal logs. #### Model Control Plane - The Model Control Plane provides a unified interface to inspect artifacts, models, and metadata, allowing for easy comparison of evaluation metrics and model versions. #### Next Steps - After evaluation, the finetuned embeddings can be integrated into the original RAG pipeline for further testing. The documentation also references ongoing work on LLM finetuning and deployment, with links to related projects and guides. This summary encapsulates the essential technical details and steps for evaluating finetuned embeddings, ensuring clarity and conciseness while retaining critical information. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md === **Summary: Finetuning Embeddings on Synthetic Data for Improved Retrieval Performance** This documentation outlines the process of optimizing embedding models using synthetic data generation and human feedback to enhance retrieval performance in a RAG (Retrieval-Augmented Generation) pipeline. Initially, off-the-shelf embeddings are used, which serve as a baseline. However, finetuning these embeddings on domain-specific data can significantly boost performance. **Key Steps:** 1. **Generate Synthetic Data**: Utilize `distilabel` for scalable synthetic data generation. 2. **Finetune Embeddings**: Use Sentence Transformers for embedding finetuning. 3. **Evaluate Embeddings**: Assess the finetuned embeddings and leverage ZenML's model control plane for systematic evaluation. **Libraries Used**: - **`distilabel`**: Generates synthetic data and provides AI feedback with LLMs as judges. - **`argilla`**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration. Both libraries can be used independently but are more effective when combined. The process can be executed locally or on cloud compute. For complete code and examples, refer to the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === ### Summary of LLM Finetuning Documentation **Objective**: This documentation focuses on finetuning Large Language Models (LLMs) for specific tasks or to enhance performance and cost-efficiency. 
**Key Learnings**: - Previous sections covered RAG (Retrieval-Augmented Generation) with ZenML, evaluation of RAG systems, reranking for improved retrieval, and finetuning embeddings to enhance RAG systems. - Finetuning is essential when using APIs like OpenAI and Anthropic, particularly when custom data is involved. **When to Finetune LLMs**: - Improve response generation in specific formats. - Enhance understanding of domain-specific terminology. - Reduce prompt length for consistent outputs. - Follow specific patterns or protocols that are complex to encode in prompts. - Optimize for latency by minimizing context window requirements. **Guide Structure**: 1. [Finetuning in 100 lines of code](finetuning-100-loc.md) 2. [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) 3. [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) 4. [Finetuning with 🤗 Accelerate](finetuning-with-accelerate.md) 5. [Evaluation for finetuning](evaluation-for-finetuning.md) 6. [Deploying finetuned models](deploying-finetuned-models.md) 7. [Next steps](next-steps.md) **Important Notes**: - The guide does not follow a specific use case for finetuning. - Understanding the rationale for finetuning, performance evaluation, and data selection is crucial. - Example code is available in the [`llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) and can be executed locally (with a GPU) or on cloud platforms. This summary encapsulates the essential points and technical details necessary for understanding the finetuning of LLMs. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md === ### Summary: LLM Fine-Tuning Pipeline Implementation This documentation provides a concise guide to implementing a fine-tuning pipeline for a language model (LLM) in approximately 100 lines of code. The example uses the TinyLlama model (1.1B parameters) and demonstrates the following key steps: 1. **Installation**: Required packages can be installed using: ```bash pip install datasets transformers torch accelerate>=0.26.0 ``` 2. **Dataset Preparation**: A small instruction-tuning dataset is created with clear input-output pairs: ```python def prepare_dataset() -> Dataset: data = [ {"instruction": "Describe a Zenbot.", "response": "A Zenbot is a luminescent robotic entity..."}, {"instruction": "What are Cosmic Butterflies?", "response": "Cosmic Butterflies are ethereal creatures..."}, {"instruction": "Tell me about the Telepathic Treants.", "response": "Telepathic Treants are ancient, sentient trees..."} ] return Dataset.from_list(data) ``` 3. **Tokenization**: The dataset is formatted and tokenized: ```python def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: formatted_text = f"### Instruction: {example['instruction']}\n### Response: {example['response']}" return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) ``` 4. **Model Fine-Tuning**: The model is fine-tuned using specified training parameters: ```python training_args = TrainingArguments( output_dir="./zenml-world-model", num_train_epochs=3, per_device_train_batch_size=1, gradient_accumulation_steps=4, learning_rate=2e-4, bf16=True, logging_steps=10, save_total_limit=2, ) ``` 5. 
**Response Generation**: The fine-tuned model generates responses based on new prompts: ```python def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer) -> str: formatted_prompt = f"### Instruction: {prompt}\n### Response:" inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=128, temperature=0.7) return tokenizer.decode(outputs[0], skip_special_tokens=True) ``` 6. **Execution**: The main function fine-tunes the model and tests it with sample prompts. ### Key Points: - **Model and Dataset**: Uses TinyLlama for instruction-tuning with a small dataset. - **Training Configuration**: 3 epochs, batch size of 1, learning rate of 2e-4, and mixed precision training. - **Limitations**: The implementation is simplified, highlighting the need for larger datasets, better models, and evaluation metrics for production use. ### Next Steps: Future sections will cover advanced topics such as working with larger models, implementing evaluation metrics, and deploying fine-tuned models. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune-llms.md === ### Summary: When to Finetune LLMs This guide provides an overview of when and why to finetune large language models (LLMs) on custom data. Key points include: - **Not a Universal Solution**: Finetuning is not suitable for every problem and may introduce technical debt. It should be considered after exploring other options. - **Diverse Applications**: LLMs can be used beyond chatbot interfaces, often with lower failure rates in non-chatbot applications. - **Final Step in Experimentation**: Finetuning should be the last resort after ruling out alternatives like smaller models or Retrieval-Augmented Generation (RAG). ### When to Finetune Finetuning is beneficial in the following scenarios: 1. **Domain-Specific Knowledge**: Necessary for specialized fields (e.g., medical, legal) not well-covered by the base model. 2. **Consistent Style/Format**: Required for specific output formats, such as code generation. 3. **Improved Accuracy**: Needed for critical tasks where higher precision is essential. 4. **Proprietary Information**: When dealing with sensitive data that cannot be sent to external APIs. 5. **Custom Instructions**: Repeated prompts can be integrated into the model for efficiency. 6. **Efficiency**: Can lead to better performance with shorter prompts, reducing costs and latency. ### Decision Flowchart ```mermaid flowchart TD A[Should I finetune an LLM?] --> B{Is prompt engineering
sufficient?} B -->|Yes| C[Use prompt engineering] B -->|No| D{Is it primarily a
knowledge retrieval
problem?} D -->|Yes| E{Is real-time data
access needed?} E -->|Yes| F[Use RAG] E -->|No| G{Is data volume
very large?} G -->|Yes| H[Consider hybrid:
RAG + Finetuning] G -->|No| F D -->|No| I{Is it a narrow,
specific task?} I -->|Yes| J{Can a smaller
specialized model
handle it?} J -->|Yes| K[Use smaller model] J -->|No| L[Consider finetuning] I -->|No| M{Do you need
consistent style
or format?} M -->|Yes| L M -->|No| N{Is deep domain
expertise required?} N -->|Yes| O{Is the domain
well-represented in
base model?} O -->|Yes| P[Use base model] O -->|No| L N -->|No| Q{Is data
proprietary/sensitive?} Q -->|Yes| R{Can you use
API solutions?} R -->|Yes| S[Use API solutions] R -->|No| L Q -->|No| S ``` ### Alternatives to Finetuning Consider these alternatives before finetuning: - **Prompt Engineering**: Effective prompts can yield good results without finetuning. - **RAG**: Often more effective for specific knowledge bases. - **Smaller Models**: Task-specific smaller models may outperform finetuned LLMs. - **API Solutions**: For non-sensitive data, API-based solutions can be simpler and cost-effective. In conclusion, finetuning should be approached with caution, ensuring simpler solutions are exhausted first. The next section will cover practical considerations for finetuning LLMs. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md === # Summary of Finetuning LLMs Documentation ## Overview Finetuning large language models (LLMs) tailors their capabilities to specific tasks and datasets. This guide outlines steps for selecting a use case, gathering data, choosing a base model, and evaluating finetuning success. ## Quick Assessment Questions Before starting, consider: 1. **Define Success**: Use measurable metrics (e.g., "95% accuracy in extracting order IDs"). 2. **Data Readiness**: Ensure data is prepared (e.g., "1000 labeled support tickets"). 3. **Task Consistency**: Aim for consistent tasks (e.g., "Convert email to 5 specific fields"). 4. **Human Verification**: Ensure correctness can be verified (e.g., "Check if extracted date matches document"). ## Picking a Use Case Choose a small, manageable use case that cannot be easily solved by non-LLM methods. Examples include: - **Good Use Cases**: Structured data extraction, domain-specific classification, standardized response generation. - **Challenging Use Cases**: Open-ended chat, creative writing, general knowledge QA. ## Picking Data Select data closely aligned with your use case to minimize annotation effort. Aim for hundreds to thousands of examples. Examples of reusable data include: - Customer support email responses. - Manually extracted metadata. ### Good vs. Not-So-Good Use Cases | Good Use Cases | Why It Works | Example | Data Requirements | |----------------|--------------|---------|-------------------| | Structured Data Extraction | Clear inputs/outputs | Extracting order details | 500-1000 annotated emails | | Domain-Specific Classification | Well-defined categories | Categorizing support tickets | 1000+ labeled examples | | Standardized Response Generation | Consistent format | Generating troubleshooting responses | 500+ pairs of queries/responses | ### Success Indicators Evaluate your use case using these indicators: | Indicator | Good Sign | Warning Sign | |-----------|-----------|--------------| | Task Scope | "Extract purchase date" | "Handle all inquiries" | | Output Format | Structured JSON | Free-form text | | Data Availability | 500+ examples ready | Need to create examples | | Evaluation Method | Field-by-field metrics | User feedback | | Business Impact | "Save 10 hours of data entry" | "Make AI more human-like" | ## Picking a Base Model Select a base model based on your use case: - **Llama 3.1-8B**: Structured data extraction, requires 16GB GPU RAM. - **Llama 3.1-70B**: Complex reasoning, requires 80GB GPU RAM. - **Mistral 7B**: General text generation, requires 16GB GPU RAM. - **Phi-2**: Lightweight tasks, requires 8GB GPU RAM. 
### Quick Model Selection Matrix ```mermaid graph TD A[Choose Your Task] --> B{Structured Output?} B -->|Yes| C[Llama-8B Base] B -->|No| D{Complex Reasoning?} D -->|Yes| E[Llama-70B Base] D -->|No| F{Resource Constrained?} F -->|Yes| G[Phi-2] F -->|No| H[Mistral-7B] ``` ## Evaluating Success Define success metrics early to measure progress effectively. For structured data extraction, consider: - Accuracy of extracted fields. - Precision and recall for specific field types. - Processing time per document. - Error rates on edge cases. ## Next Steps With a clear understanding of scoping, data selection, and evaluation, proceed to the technical implementation in the next section, which covers practical finetuning using the Accelerate library. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md === # Summary of LLM Finetuning Evaluations Documentation ## Overview Evaluations for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. Starting with small, incremental evaluations helps establish a baseline for performance and facilitates early issue detection. ## Motivation and Benefits Key motivations for implementing evaluations include: 1. **Prevent Regressions**: Ensure new changes do not harm existing functionality. 2. **Track Improvements**: Quantify model enhancements over iterations. 3. **Ensure Safety and Robustness**: Identify and mitigate risks, biases, or unexpected behaviors. A robust evaluation strategy leads to more reliable and performant LLMs. ## Types of Evaluations Evaluations can be generic or custom. Custom evaluations focus on specific use cases and can be categorized into: 1. **Success Modes**: Desired outputs, such as correct formatting and appropriate responses. 2. **Failure Modes**: Undesired outputs, including hallucinations, incorrect formats, and biased responses. ### Example Code for Custom Evaluations ```python from my_library import query_llm good_responses = { "what are the best salads available at the food court?": ["caesar", "italian"], "how late is the shopping center open until?": ["10pm", "22:00", "ten"] } for question, answers in good_responses.items(): llm_response = query_llm(question) assert any(answer in llm_response for answer in answers) bad_responses = { "who is the manager of the shopping center?": ["tom hanks", "spiderman"] } for question, answers in bad_responses.items(): llm_response = query_llm(question) assert not any(answer in llm_response for answer in answers) ``` ## Generalized Evals and Frameworks Generalized evaluation frameworks help structure evaluations and provide standardized metrics. They should complement custom evaluations. Recommended frameworks include: - [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate) - [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html) - [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html) - [langcheck](https://github.com/citadel-ai/langcheck) - [nervaluate](https://github.com/MantisAI/nervaluate) Integrating these frameworks into your ZenML pipeline can streamline evaluation processes. ## Data and Tracking Regular analysis of inference data is crucial for identifying patterns and improving model performance. Implement comprehensive logging early and consider using LLM evaluation frameworks for structured data collection. 
Recommended tools include: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) - [langsmith](https://smith.langchain.com/) - [langfuse](https://langfuse.com/) - [braintrust](https://www.braintrust.dev/) Creating simple dashboards to visualize core metrics can effectively monitor progress and guide future iterations. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/next-steps.md === # Next Steps After iterating on your finetuned model, consider the following key areas: - Identify factors that improve or degrade model performance. - Determine the minimum viable model size. - Assess the feasibility of iteration time within your hardware constraints. - Ensure the finetuned model effectively addresses your business use case. Next stages may involve: - Scaling for more users or real-time scenarios. - Meeting critical accuracy requirements, potentially necessitating a larger model. - Integrating LLM finetuning into your production systems, including monitoring and evaluation. While it may be tempting to switch to larger models, enhancing your dataset is often more impactful, especially if you started with limited examples. Focus on improving data quality before upgrading to more powerful models. ## Resources Recommended resources for further learning on LLM finetuning: - [Mastering LLMs Course](https://parlance-labs.com/education/) - Video course by Hamel Husain and Dan Becker. - [Phil Schmid's blog](https://www.philschmid.de/) - Examples of LLM finetuning techniques. - [Sam Witteveen's YouTube channel](https://www.youtube.com/@samwitteveenai) - Videos on finetuning and prompt engineering. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/deploying-finetuned-models.md === # Deployment Options for Finetuned LLMs Deploying a finetuned LLM is essential for integrating it into real-world applications. This process requires careful planning to ensure performance, reliability, and cost-effectiveness. ## Deployment Considerations Key factors influencing deployment include: - **Resource Requirements**: LLMs demand significant RAM, processing power, and specialized hardware. Choose hardware that balances performance and cost based on your use case. - **Real-Time Performance**: Consider immediate response needs, failover scenarios, and user load modeling. - **Streaming vs. Non-Streaming**: Each approach has trade-offs in latency and resource use. - **Optimization Techniques**: Techniques like quantization can reduce resource consumption but require evaluation to avoid compromising model performance. ## Deployment Options and Trade-offs 1. **Roll Your Own**: Set up and manage your own infrastructure for maximum control and customization, typically using Docker (e.g., FastAPI). 2. **Serverless Options**: Scalable and cost-efficient, but may introduce latency due to cold starts. 3. **Always-On Options**: Minimizes latency but incurs costs even during idle periods. 4. **Fully Managed Solutions**: Simplifies deployment but may reduce flexibility and increase costs. Consider your team's expertise, budget, load patterns, and specific requirements when selecting a deployment option. ## Deployment with vLLM and ZenML [vLLM](https://github.com/vllm-project/vllm) allows high-throughput, low-latency LLM deployment. ZenML integrates with vLLM for easy deployment. 
```python from zenml import pipeline from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> VLLMDeploymentService: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` The `model` argument can be a local path or a Hugging Face Hub model ID, enabling batch inference via an OpenAI-compatible API. ## Cloud-Specific Deployment Options - **AWS**: Use Amazon SageMaker for managed LLM deployment or AWS Lambda with API Gateway for serverless options. Amazon ECS or EKS with Fargate provides more control but adds complexity. - **GCP**: Google Cloud AI Platform offers managed services similar to SageMaker. Cloud Run is a serverless option, while GKE allows for more control over deployments. ## Architectures for Real-Time Engagement For real-time customer engagement, consider: - **Load Balancing**: Deploy multiple model instances behind a load balancer with auto-scaling. - **Caching**: Use solutions like Redis to store frequent responses. - **Asynchronous Architecture**: Implement message queues (e.g., Amazon SQS) for managing request backlogs. - **Edge Computing**: Use AWS Lambda@Edge or CloudFront Functions for reduced latency. ## Reducing Latency and Increasing Throughput To optimize for low latency and high throughput: - **Model Optimization**: Use quantization and distillation techniques. - **Hardware Acceleration**: Leverage GPU instances for faster inference. - **Request Batching**: Process multiple inputs in a single forward pass. - **Monitoring**: Continuously measure and profile your inference pipeline to identify bottlenecks. ## Monitoring and Maintenance Post-deployment, focus on: 1. **Evaluation Failures**: Regularly assess model performance. 2. **Latency Metrics**: Monitor response times. 3. **Load Patterns**: Analyze user interactions for scaling decisions. 4. **Data Analysis**: Identify trends or biases in model inputs/outputs. Ensure compliance with data protection regulations in your logging practices. By considering these deployment strategies and monitoring practices, you can maintain optimal performance of your finetuned LLM. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md === # Summary: Finetuning an LLM with Accelerate and PEFT This documentation outlines the process of finetuning a language model (LLM) using the Viggo dataset, which contains over 5,000 pairs of structured meaning representations and their corresponding natural language descriptions for video game dialogues. The goal is to train models to generate fluent responses from structured inputs. ## Finetuning Pipeline The finetuning pipeline consists of the following steps: 1. **prepare_data**: Load and preprocess the Viggo dataset. 2. **finetune**: Finetune the model on the dataset. 3. **evaluate_base**: Evaluate the base model. 4. **evaluate_finetuned**: Evaluate the finetuned model. 5. **promote**: Promote the best model to "staging" in the Model Control Plane. For initial experiments, it's recommended to use smaller models (e.g., Llama 3.1 at ~8B parameters) for quicker iterations. ## Implementation Details The `prepare_data` step loads and tokenizes the dataset. Care should be taken with input data formatting, especially for instruction-tuned models. Logging inputs and outputs is advised for verification. 
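To make the description of `prepare_data` more concrete, the sketch below shows one plausible shape for such a step. It assumes the Viggo dataset is available on the Hugging Face Hub as `GEM/viggo` with `meaning_representation` and `target` columns; the prompt template, step signature, and logging are illustrative rather than the repository's actual implementation.

```python
from datasets import Dataset, load_dataset
from transformers import AutoTokenizer
from zenml import step


@step
def prepare_data(base_model_id: str, max_length: int = 512) -> Dataset:
    """Load the Viggo dataset and tokenize meaning-representation/target pairs."""
    tokenizer = AutoTokenizer.from_pretrained(base_model_id)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    # Assumed dataset ID and column names; adjust to the dataset you actually use.
    dataset = load_dataset("GEM/viggo", split="train")

    def format_and_tokenize(example):
        text = (
            f"### Meaning representation:\n{example['meaning_representation']}\n"
            f"### Target sentence:\n{example['target']}"
        )
        return tokenizer(text, truncation=True, padding="max_length", max_length=max_length)

    tokenized = dataset.map(format_and_tokenize)
    # Log one formatted example so the prompt template can be checked in the step logs.
    print(tokenizer.decode(tokenized[0]["input_ids"], skip_special_tokens=True))
    return tokenized
```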
The finetuning process utilizes the `accelerate` library for multi-GPU support. The essential code for the finetuning step is as follows: ```python model = load_base_model(base_model_id, use_accelerate=use_accelerate) trainer = transformers.Trainer( model=model, train_dataset=tokenized_train_dataset, eval_dataset=tokenized_val_dataset, args=transformers.TrainingArguments( output_dir=output_dir, per_device_train_batch_size=per_device_train_batch_size, max_steps=max_steps, learning_rate=lr, logging_dir="./logs", evaluation_strategy="steps", do_eval=True, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), callbacks=[ZenMLCallback(accelerator=accelerator)], ) ``` ### Evaluation Metrics Evaluation uses the `evaluate` library to compute ROUGE scores, which measure the overlap and similarity between generated and reference texts through various metrics (ROUGE-N, ROUGE-L, ROUGE-W, ROUGE-S). ## Using ZenML Accelerate Decorator ZenML offers the `@run_with_accelerate` decorator for streamlined distributed training without altering model logic: ```python @run_with_accelerate(num_processes=4, multi_gpu=True, mixed_precision='bf16') @step def finetune_step(...): model = load_base_model(base_model_id, use_accelerate=True) trainer = transformers.Trainer(...) trainer.train() return trainer.model ``` ### Docker Configuration Ensure the Docker environment is set up with CUDA support and necessary dependencies: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) @pipeline(settings={"docker": docker_settings}) def finetuning_pipeline(...): # Your pipeline steps here ``` ## Dataset Iteration Careful attention to input data formatting is crucial. Poor performance post-finetuning may indicate data issues. Regularly inspect data and consider augmenting or synthetically generating it if necessary. Evaluations should be established early to gauge model performance and guide further refinements. ### Future Considerations As you refine your model, consider: - Better evaluation methods - Model deployment strategies - Integration within existing production architectures - The feasibility of smaller models for specific use cases This documentation serves as a guide for finetuning LLMs effectively while emphasizing the importance of data quality and evaluation in the process. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md === ### Summary: Ingesting and Preprocessing Data for RAG Pipelines with ZenML **Overview**: This documentation outlines the process of ingesting and preprocessing data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. It emphasizes the importance of managing data ingestion effectively, including scraping, loading, and preprocessing documents. #### Data Ingestion 1. **Purpose**: Ingest data (documents and metadata) for training retriever and generator models. 2. **ZenML Integration**: ZenML integrates with various tools to facilitate data ingestion, preprocessing, and indexing. 3. **URL Scraping**: A ZenML step is created to scrape URLs from ZenML documentation. 
**Code Example**: ```python from typing import List from typing_extensions import Annotated from zenml import log_artifact_metadata, step from steps.url_scraping_utils import get_all_pages @step def url_scraper( docs_url: str = "https://docs.zenml.io", repo_url: str = "https://github.com/zenml-io/zenml", website_url: str = "https://zenml.io", ) -> Annotated[List[str], "urls"]: docs_urls = get_all_pages(docs_url) log_artifact_metadata({"count": len(docs_urls)}) return docs_urls ``` 4. **Loading Documents**: The `unstructured` library is used to parse HTML and extract text from the scraped URLs. **Code Example**: ```python from typing import List from unstructured.partition.html import partition_html from zenml import step @step def web_url_loader(urls: List[str]) -> List[str]: document_texts = [] for url in urls: elements = partition_html(url=url) document_texts.append("\n\n".join(map(str, elements))) return document_texts ``` #### Data Preprocessing 1. **Chunking**: After loading documents, they are split into smaller chunks for efficient processing by the LLM. The chunk size and overlap are crucial parameters. **Code Example**: ```python import logging from typing import Annotated, List from utils.llm_utils import split_documents from zenml import ArtifactConfig, log_artifact_metadata, step logging.basicConfig(level=logging.INFO) @step(enable_cache=False) def preprocess_documents(documents: List[str]) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]: log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50}) return split_documents(documents, chunk_size=500, chunk_overlap=50) ``` 2. **Considerations**: The choice of chunk size (e.g., 500 characters with 50 characters overlap) depends on the data structure. Additional preprocessing steps may include text cleaning or metadata extraction. #### Additional Resources - For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and the specific [steps code](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/). This summary captures the essential steps and code snippets necessary for understanding data ingestion and preprocessing in RAG pipelines using ZenML, ensuring that critical information is retained while maintaining conciseness. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md === ### Summary of RAG Inference Documentation This documentation outlines the process of using Retrieval-Augmented Generation (RAG) components to generate responses to user prompts. It focuses on querying an index store to retrieve relevant documents and using a language model (LLM) to generate responses based on those documents. #### Key Components and Workflow 1. **Query Execution**: Users can execute queries via a command line interface. Example command: ```bash python run.py --rag-query "how do I use a custom materializer inside my own zenml steps?" --model=gpt4 ``` 2. **Inference Function**: The core function for processing input and generating responses is `process_input_with_retrieval`: ```python def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) system_message = "You are a friendly chatbot..." 
messages = [ {"role": "system", "content": system_message}, {"role": "user", "content": f"```\n{input}\n```"}, {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, ] return get_completion_from_messages(messages, model=model) ``` 3. **Document Retrieval**: The function `get_topn_similar_docs` retrieves the most relevant documents based on the query embedding: ```python def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: embedding_array = np.array(query_embedding) cur = conn.cursor() cur.execute("SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT %s", (embedding_array, n)) return cur.fetchall() ``` This uses the `pgvector` extension for PostgreSQL to efficiently order documents by similarity. 4. **LLM Response Generation**: The function `get_completion_from_messages` generates a response using the specified LLM: ```python def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000): completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens) return completion_response.choices[0].message.content ``` The `litellm` library allows for flexibility in using different LLMs without needing separate implementations. #### Conclusion The documentation provides a foundational understanding of a basic RAG inference pipeline, emphasizing the retrieval of relevant text chunks based on user queries and generating responses using LLMs. Future improvements may include fine-tuning embeddings for better performance with diverse document sets. For complete code and further exploration, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [`llm_utils.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database.md === ### Summary: Storing Embeddings in a Vector Database This documentation outlines the process of storing embeddings in a vector database, specifically PostgreSQL, to facilitate efficient retrieval based on similarity to queries. #### Key Points: - **Purpose**: Store embeddings to avoid generating them repeatedly for document retrieval. - **Database Choice**: PostgreSQL is used for its scalability and efficiency with high-dimensional vectors. Other vector databases can also be utilized. - **Setup Instructions**: For PostgreSQL setup, refer to the [repository instructions](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). #### Connection and Interaction: - Use the `psycopg2` package to connect to PostgreSQL and execute raw SQL statements. #### Core Functionality: The `index_generator` function performs the following: 1. **Connect to Database**: Establish a connection to PostgreSQL. 2. **Create Extensions**: Ensure the `vector` extension is installed. 3. **Create Table**: Define the `embeddings` table if it does not exist: ```sql CREATE TABLE IF NOT EXISTS embeddings ( id SERIAL PRIMARY KEY, content TEXT, token_count INTEGER, embedding VECTOR(EMBEDDING_DIMENSIONALITY), filename TEXT, parent_section TEXT, url TEXT ); ``` 4. 
**Insert Data**: Add embeddings and associated documents only if they are not already present: ```python cur.execute("SELECT COUNT(*) FROM embeddings WHERE content = %s", (content,)) count = cur.fetchone()[0] if count == 0: cur.execute("INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url) VALUES (%s, %s, %s, %s, %s, %s)", (content, token_count, embedding, filename, parent_section, url)) ``` 5. **Index Creation**: Calculate index parameters and create an index for efficient querying: ```sql CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists}); ``` #### Indexing Strategy: - The number of lists for the index is determined based on the number of records, with a minimum of 10 and a maximum of the square root of the number of records. - The `ivfflat` method with `vector_cosine_ops` is used for indexing, suitable for cosine distance similarity searches. #### Performance Considerations: - Running the embedding storage step on a GPU-enabled machine is recommended for larger datasets to enhance performance. #### Conclusion: Once embeddings are stored, the next step is to retrieve relevant documents based on queries, enabling the development of efficient question-answering systems. For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md === ### Summary of RAG Pipeline Implementation This documentation provides a concise guide to implementing a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code. The pipeline performs the following tasks: 1. **Data Loading**: Uses a fictional dataset about 'ZenML World' as the corpus. 2. **Text Processing**: Splits text into chunks and tokenizes it (converts to words). 3. **Query Handling**: Takes a user query and retrieves the most relevant text chunks. 4. **Response Generation**: Utilizes OpenAI's GPT-3.5 model to generate answers based on the retrieved chunks. ### Key Functions - **`preprocess_text(text)`**: - Converts text to lowercase, removes punctuation, and trims whitespace. - **`tokenize(text)`**: - Tokenizes preprocessed text into words. - **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: - Computes Jaccard similarity between the query and corpus chunks to find the top N relevant chunks. - **`answer_question(query, corpus, top_n=2)`**: - Retrieves relevant chunks and generates an answer using OpenAI's API. ### Example Corpus A sample corpus includes descriptions of various creatures in 'ZenML World', such as: - **Plasma Phoenixes**: Majestic energy creatures. - **Crystalline Crabs**: Creatures on the prismatic shores. ### Example Queries 1. **Query**: "What are Plasma Phoenixes?" - **Answer**: Describes Plasma Phoenixes as energy creatures soaring above chromatic canyons. 2. **Query**: "What kinds of creatures live on the prismatic shores of ZenML World?" - **Answer**: Mentions crystalline crabs with transparent exoskeletons. 3. **Query**: "What is the capital of Panglossia?" - **Answer**: States that the capital is not mentioned in the context. ### Implementation Details - The similarity measure used is the Jaccard similarity coefficient, calculated as the size of the intersection divided by the size of the union of the query and chunk token sets.
- The implementation is basic and not optimized for performance, serving primarily as an illustrative example. ### Code Snippet ```python import os import re import string from openai import OpenAI def preprocess_text(text): return re.sub(r"\s+", " ", text.lower().translate(str.maketrans("", "", string.punctuation))).strip() def tokenize(text): return preprocess_text(text).split() def retrieve_relevant_chunks(query, corpus, top_n=2): query_tokens = set(tokenize(query)) similarities = [(chunk, len(query_tokens.intersection(tokenize(chunk))) / len(query_tokens.union(tokenize(chunk)))) for chunk in corpus] return [chunk for chunk, _ in sorted(similarities, key=lambda x: x[1], reverse=True)[:top_n]] def answer_question(query, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(query, corpus, top_n) if not relevant_chunks: return "I don't have enough information to answer the question." context = "\n".join(relevant_chunks) client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) chat_completion = client.chat.completions.create( messages=[{"role": "system", "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}"}, {"role": "user", "content": query}], model="gpt-3.5-turbo", ) return chat_completion.choices[0].message.content.strip() # Sample corpus corpus = [preprocess_text(sentence) for sentence in [ "The luminescent forests of ZenML World are inhabited by glowing Zenbots...", "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully...", # Additional sentences... ]] # Example questions print(answer_question("What are Plasma Phoenixes?", corpus)) print(answer_question("What kinds of creatures live on the prismatic shores of ZenML World?", corpus)) print(answer_question("What is the capital of Panglossia?", corpus)) ``` This summary captures the essential components and functionality of the RAG pipeline while maintaining clarity and conciseness. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md === ### Generating Embeddings for Retrieval This section outlines the process of generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data in a high-dimensional space, allowing similar items to be located more easily. #### Key Points: - **Purpose of Embeddings**: They improve retrieval by enabling the identification of relevant data chunks based on semantic similarity rather than simple keyword matching. - **Library Used**: The `sentence-transformers` library is utilized to generate embeddings, specifically using the pre-trained model `sentence-transformers/all-MiniLM-L12-v2`, which produces 384-dimensional vectors. 
#### Code for Generating Embeddings: ```python from typing import Annotated, List import numpy as np from sentence_transformers import SentenceTransformer from structures import Document from zenml import ArtifactConfig, log_artifact_metadata, step @step def generate_embeddings(split_documents: List[Document]) -> Annotated[List[Document], ArtifactConfig(name="documents_with_embeddings")]: model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2") log_artifact_metadata(artifact_name="embeddings", metadata={"embedding_type": "sentence-transformers/all-MiniLM-L12-v2", "embedding_dimensionality": 384}) embeddings = model.encode([doc.page_content for doc in split_documents]) for doc, embedding in zip(split_documents, embeddings): doc.embedding = embedding return split_documents ``` #### Document Model Update: The `Document` model is updated to include an `embedding` attribute for storing generated embeddings, facilitating their use in retrieval. #### Dimensionality Reduction and Visualization: To visualize embeddings, dimensionality reduction techniques like UMAP and t-SNE can be employed. This helps in understanding how similar chunks cluster based on semantic meaning. #### Visualization Code Example: ```python import matplotlib.pyplot as plt import numpy as np from sklearn.manifold import TSNE import umap def visualize_embeddings(embeddings, parent_sections, method='tsne'): if method == 'tsne': embeddings_2d = TSNE(n_components=2, random_state=42).fit_transform(embeddings) else: # method == 'umap' embeddings_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings) plt.figure(figsize=(8, 8)) for section in set(parent_sections): mask = [section == ps for ps in parent_sections] plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], label=section) plt.title(f"{method.upper()} Visualization") plt.legend() plt.show() ``` #### Conclusion: The embeddings are stored as artifacts in the ZenML artifact store, allowing for modularity in the pipeline. Future steps will involve storing these embeddings in a vector database for efficient retrieval during inference. For complete code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/understanding-rag.md === ### Summary of Retrieval-Augmented Generation (RAG) **Overview**: Retrieval-Augmented Generation (RAG) enhances the capabilities of Large Language Models (LLMs) by integrating a retrieval mechanism that fetches relevant documents from a large corpus to inform response generation. This method addresses LLM limitations, such as incorrect responses and token capacity, while improving contextual understanding. **RAG Pipeline Process**: 1. **Retriever**: Identifies relevant documents from a corpus. 2. **Generator**: Produces responses based on retrieved documents. 3. **Applications**: Particularly effective for tasks like question answering, summarization, and dialogue generation, where responses need to be contextually grounded. **Benefits of RAG**: - **Contextual Relevance**: Reduces incorrect responses by grounding generation in relevant information. - **Token Efficiency**: Focuses on a smaller set of documents, alleviating token limitations. - **Cost-Effectiveness**: More economical than pure generation models, especially in resource-constrained environments. 
**When to Use RAG**: - Ideal for generating long-form responses requiring contextual understanding. - Suitable for users beginning with LLMs due to lower data and computational demands. **Integration with ZenML**: - ZenML facilitates the creation of RAG pipelines, combining retrieval and generation strengths. - **Key Features**: - **Reproducibility**: Easily update and rerun pipelines while preserving previous artifact versions. - **Scalability**: Deploy on cloud providers to manage larger document corpora. - **Artifact Tracking**: Monitor pipeline performance and debug issues through a dashboard. - **Maintainability**: Modular pipeline structure allows easy updates and experimentation. - **Collaboration**: Share pipelines and insights with team members. **Future Directions**: ZenML supports advanced RAG functionalities, including fine-tuning embeddings and LLMs, and reranking retrieved documents. This summary encapsulates the essential aspects of RAG and its integration with ZenML, providing a foundational understanding for further exploration. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md === ### Summary of RAG Pipelines with ZenML Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models to enhance the capabilities of Large Language Models (LLMs). This guide outlines how to set up RAG pipelines using ZenML, covering key components and processes: 1. **Purpose of RAG**: RAG addresses the limitations of LLMs, which may generate incorrect responses, particularly with ambiguous prompts, and have constraints on the amount of text they can process. While some LLMs, like Google's Gemini 1.5 Pro, can handle up to 1 million tokens, many open-source models manage significantly less. 2. **Key Topics Covered**: - **Problem Solving**: Understanding why RAG is necessary. - **Data Ingestion and Preprocessing**: Techniques for preparing data for the RAG pipeline. - **Embeddings**: Utilizing embeddings to represent data, forming the basis for retrieval. - **Vector Database**: Storing embeddings in a vector database for efficient retrieval. - **Artifact Tracking**: Managing RAG-related artifacts using ZenML. 3. **Integration**: The guide culminates in demonstrating how all components work together for basic RAG inference. This overview serves as a foundation for implementing RAG pipelines effectively with ZenML. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md === ### Implementing Reranking in ZenML This documentation outlines how to integrate a reranking mechanism into an existing RAG (Retrieval-Augmented Generation) pipeline using the `rerankers` package. The reranker reorders retrieved documents based on their relevance to the query. #### Reranking Overview - **Package**: Use the [`rerankers`](https://github.com/AnswerDotAI/rerankers/) package, which provides a lightweight interface for various reranking models. - **Functionality**: The reranker takes a query and a list of documents, returning them reordered by relevance. 
#### Example Code ```python from rerankers import Reranker ranker = Reranker('cross-encoder') texts = [ "I like to play soccer", "I like to play football", "War and Peace is a great book", "I love dogs", "Ginger cats aren't very smart", "I like to play basketball", ] results = ranker.rank(query="What's your favorite sport?", docs=texts) ``` **Output Example**: ```python RankedResults( results=[ Result(doc_id=5, text='I like to play basketball', score=-0.465, rank=1), Result(doc_id=0, text='I like to play soccer', score=-0.735, rank=2), ... ], query="What's your favorite sport?", has_scores=True ) ``` #### Reranking Function A helper function can be created to rerank documents: ```python def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = "flashrank") -> List[Tuple[str, str]]: ranker = Reranker(reranker_model) docs_texts = [f"{doc[0]} PARENT SECTION: {doc[2]}" for doc in documents] results = ranker.rank(query=query, docs=docs_texts) return [(results.results[i].text, documents[results.results[i].doc_id][1]) for i in range(len(results.results))] ``` #### Querying Similar Documents The reranking function can be integrated into a document querying function: ```python def query_similar_docs(question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5) -> Tuple[str, str, List[str]]: embedded_question = get_embeddings(question) db_conn = get_db_conn() num_docs = 20 if use_reranking else returned_sample_size top_similar_docs = get_topn_similar_docs(embedded_question, db_conn, n=num_docs, include_metadata=True) if use_reranking: reranked_docs_and_urls = rerank_documents(question, top_similar_docs)[:returned_sample_size] urls = [doc[1] for doc in reranked_docs_and_urls] else: urls = [doc[1] for doc in top_similar_docs] return (question, url_ending, urls) ``` #### Conclusion This integration allows for improved document retrieval quality by reranking based on relevance. For further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the `eval_retrieval.py` file. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md === ### Evaluating Reranking Performance This documentation outlines how to evaluate the performance of a reranking model using ZenML. The evaluation process involves comparing retrieval performance before and after applying reranking, utilizing metrics from the evaluation section. #### Retrieval Evaluation Function The core function for evaluating retrieval performance is `perform_retrieval_evaluation`, which takes a sample size and a flag for reranking usage. It loads a dataset, samples it, and checks if the expected URL is present in the retrieved results. 
```python def perform_retrieval_evaluation(sample_size: int, use_reranking: bool) -> float: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) failures = sum(1 for item in sampled_dataset if not check_retrieval(item, use_reranking)) return round((failures / len(sampled_dataset)) * 100, 2) def check_retrieval(item, use_reranking): question = item["generated_questions"][0] url_ending = item["filename"].split("/")[-1] _, _, urls = query_similar_docs(question, url_ending, use_reranking) return url_ending in urls ``` #### Evaluation Steps Two steps are defined to evaluate retrieval performance with and without reranking: ```python @step def retrieval_evaluation_full(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=False) @step def retrieval_evaluation_full_with_reranking(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=True) ``` Both functions log the failure rates, which can be reviewed in logs for specific failure cases. #### Visualization of Results ZenML allows for visual representation of evaluation results. The `visualize_evaluation_results` function creates a bar chart of various evaluation metrics. ```python @step(enable_cache=False) def visualize_evaluation_results(*metrics) -> Optional[Image.Image]: scores = [score / 20 for score in metrics[:5]] + list(metrics[5:]) labels = ["Small Retrieval Eval Failure Rate", "Full Retrieval Eval Failure Rate", ...] fig, ax = plt.subplots(figsize=(10, 6)) ax.barh(np.arange(len(labels)), scores, align="center") ax.set_yticks(np.arange(len(labels))) ax.set_yticklabels(labels) ax.set_xlabel("Score") ax.set_xlim(0, 5) ax.set_title(f"Evaluation Metrics for {get_step_context().pipeline_run.name}") plt.tight_layout() buf = io.BytesIO() plt.savefig(buf, format="png") buf.seek(0) return Image.open(buf) ``` #### Running the Evaluation Pipeline To execute the evaluation pipeline, clone the project repository and navigate to the appropriate directory: ```bash git clone https://github.com/zenml-io/zenml-projects.git cd zenml-projects/llm-complete-guide ``` Run the evaluation pipeline with: ```bash python run.py --evaluation ``` This command will execute the evaluation and output results to the ZenML dashboard, allowing for progress and logs inspection. ### Conclusion This documentation provides a structured approach to evaluate and visualize the performance of a reranking model in ZenML, highlighting the importance of analyzing both retrieval performance and failure rates to enhance model effectiveness. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/reranking.md === **Summary: Adding Reranking to RAG Inference in ZenML** Rerankers enhance retrieval systems using LLMs by reordering retrieved documents based on additional features or scores, improving document quality and relevance. This section details the integration of a reranker into the RAG inference pipeline in ZenML. **Key Points:** - Reranking is an optional enhancement to the existing workflow, which includes data ingestion, preprocessing, embeddings generation, and retrieval. - It aims to boost the performance of the retrieval system by refining the order of documents based on specific metrics. - Improved document relevance can lead to better responses from the LLM. 
**Workflow Overview:** - Reranking is positioned after the initial retrieval process, serving as a supplementary step to optimize results. This concise addition can significantly enhance the effectiveness of the retrieval system, making it a valuable consideration for implementation. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md === ## Reranking Overview ### What is Reranking? Reranking refines the initial ranking of documents retrieved by a system, crucial for enhancing the relevance and quality of documents in Retrieval-Augmented Generation (RAG). The initial retrieval often uses sparse methods like BM25 or TF-IDF, which focus on lexical matching but may miss semantic context. Rerankers reorder these documents by evaluating additional features such as semantic similarity and relevance scores, ensuring the LLM accesses the most relevant context for generating responses. ### Types of Rerankers 1. **Cross-Encoders**: - Combine query and document inputs to output a relevance score. - Example: BERT-based models for passage ranking. - Pros: Effective interaction capture. - Cons: Computationally expensive. 2. **Bi-Encoders**: - Use separate encoders for queries and documents, generating independent embeddings. - Pros: More efficient than cross-encoders. - Cons: Less effective at capturing interactions. 3. **Lightweight Models**: - Include distilled models or smaller transformer variants. - Pros: Faster and smaller footprint, suitable for real-time applications. ### Benefits of Reranking in RAG 1. **Improved Relevance**: Identifies the most relevant documents for queries, enhancing LLM context. 2. **Semantic Understanding**: Captures semantic meaning beyond keyword matching, retrieving semantically similar documents. 3. **Domain Adaptation**: Fine-tuned on domain-specific data to enhance performance in targeted industries. 4. **Personalization**: Tailors document retrieval based on user preferences and historical interactions. ### Implementation The next section will cover how to implement reranking in ZenML and integrate it into your RAG inference pipeline. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/README.md === ### Summary: Adding Reranking to RAG Inference in ZenML Rerankers enhance retrieval systems utilizing LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section details the integration of a reranker into the RAG inference pipeline in ZenML. #### Key Points: - **Purpose of Rerankers**: They reorder retrieved documents to boost relevance and quality, leading to improved LLM responses. - **Workflow Context**: The reranker is an optional enhancement within an established workflow that includes data ingestion, preprocessing, embeddings generation, and retrieval. - **Evaluation Metrics**: Basic metrics are set up to assess retrieval system performance. #### Visual Aid: - A diagram illustrates the reranking process within the overall workflow, emphasizing its optional nature and potential benefits. By incorporating a reranker, users can achieve better retrieval performance, enhancing the overall effectiveness of their LLM applications. 
================================================== === File: docs/book/user-guide/llmops-guide/evaluation/generation.md === ### Summary of Generation Evaluation in RAG Pipeline #### Overview The generation component of a Retrieval-Augmented Generation (RAG) pipeline is responsible for producing answers based on retrieved context. Evaluating this component is subjective and challenging, but several methods can be employed to assess the quality of generated answers. #### Handcrafted Evaluation Tests - Create examples to verify the inclusion or exclusion of specific terms in generated responses. - Use known mistakes in outputs to guide test creation. - Example tests include checking for supported orchestrators like "Airflow" and "Kubeflow" while excluding unsupported ones like "Flyte" and "Prefect." **Tables for Evaluation:** - **`bad_answers`**: Lists questions with undesirable terms. - **`bad_immediate_responses`**: Captures immediate incorrect answers. - **`good_responses`**: Ensures expected terms are present in correct answers. #### Testing Code Example ```python class TestResult(BaseModel): success: bool question: str keyword: str = "" response: str def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestResult: question = item["question"] bad_words = item["bad_words"] response = process_input_with_retrieval(question, n_items_retrieved) for word in bad_words: if word in response: return TestResult(success=False, question=question, keyword=word, response=response) return TestResult(success=True, question=question, response=response) ``` #### Running Tests Use a test runner to execute the tests and log failures: ```python def run_tests(test_data: list, test_function: Callable) -> float: failures = 0 total_tests = len(test_data) for item in test_data: test_result = test_function(item) if not test_result.success: logging.error(f"Test failed for question: '{test_result.question}'. Found word: '{test_result.keyword}'. Response: '{test_result.response}'") failures += 1 return (failures / total_tests) * 100 ``` #### End-to-End Evaluation Combine various tests to evaluate the generation component: ```python @step def e2e_evaluation() -> Tuple[float, float, float]: failure_rate_bad_answers = run_tests(bad_answers, test_content_for_bad_words) failure_rate_bad_immediate_responses = run_tests(bad_immediate_responses, test_response_starts_with_bad_words) failure_rate_good_responses = run_tests(good_responses, test_content_contains_good_words) return failure_rate_bad_answers, failure_rate_bad_immediate_responses, failure_rate_good_responses ``` #### Automated Evaluation Using Another LLM - Use a separate LLM to evaluate the output of the primary LLM on a scale of 1 to 5 for categories like toxicity, faithfulness, helpfulness, and relevance. - Implement a Pydantic model to validate the evaluation results. **Pydantic Model Example:** ```python class LLMJudgedTestResult(BaseModel): toxicity: conint(ge=1, le=5) faithfulness: conint(ge=1, le=5) helpfulness: conint(ge=1, le=5) relevance: conint(ge=1, le=5) ``` #### LLM Judged Test Function ```python def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: response = process_input_with_retrieval(question, n_items_retrieved) prompt = f"Analyze the following text and context to provide scores for toxicity, faithfulness, helpfulness, and relevance." 
response = completion(model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}]) json_output = response["choices"][0]["message"]["content"].strip() return LLMJudgedTestResult(**json.loads(json_output)) ``` #### Running LLM Judged Tests ```python def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[float, float, float, float]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) total_scores = [0, 0, 0, 0] for item in dataset: question = item["generated_questions"][0] context = item["page_content"] result = test_function(question, context) total_scores = [total + getattr(result, attr) for total, attr in zip(total_scores, ['toxicity', 'faithfulness', 'helpfulness', 'relevance'])] return tuple(round(total / sample_size, 3) for total in total_scores) ``` #### Conclusion Evaluating the generation component of a RAG pipeline can be approached through handcrafted tests and automated evaluations using another LLM. The quality of evaluations depends on the LLM used and the comprehensiveness of the tests. Improvements can include retries for JSON errors, batch processing, and using multiple evaluators. For further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/). ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md === ### Summary of RAG Evaluation Implementation This documentation outlines how to evaluate a Retrieval-Augmented Generation (RAG) pipeline in 65 lines of code, building on a previous example. The full code can be found in the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The evaluation relies on functions from an earlier RAG pipeline. #### Key Components: 1. **Evaluation Data**: A list of questions and their expected answers is defined for testing the RAG pipeline's performance. ```python eval_data = [ {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."}, {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones..."}, {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."}, ] ``` 2. **Evaluation Functions**: - **`evaluate_retrieval`**: Checks if any words from the expected answer are present in the retrieved chunks. - **`evaluate_generation`**: Utilizes OpenAI's API to assess the relevance and accuracy of the generated answer. 
```python def evaluate_retrieval(question, expected_answer, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) def evaluate_generation(question, expected_answer, generated_answer): client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) chat_completion = client.chat.completions.create( messages=[{"role": "system", "content": "You are an evaluation judge..."}, {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?"}], model="gpt-3.5-turbo", ) return chat_completion.choices[0].message.content.strip().lower() == "yes" ``` 3. **Evaluation Process**: The code iterates through the evaluation data, calculates retrieval and generation scores, and computes their accuracies. ```python retrieval_scores = [] generation_scores = [] for item in eval_data: retrieval_scores.append(evaluate_retrieval(item["question"], item["expected_answer"], corpus)) generated_answer = answer_question(item["question"], corpus) generation_scores.append(evaluate_generation(item["question"], item["expected_answer"], generated_answer)) retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores) generation_accuracy = sum(generation_scores) / len(generation_scores) print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") print(f"Generation Accuracy: {generation_accuracy:.2f}") ``` #### Results: The example demonstrates achieving 100% accuracy for both retrieval and generation components, showcasing a high-level approach to RAG evaluation. Further sections will delve into more sophisticated implementations. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md === ### Evaluation in Practice This documentation outlines how to evaluate the performance of a Retrieval-Augmented Generation (RAG) system. The evaluation process is structured as a separate pipeline that runs after the main pipeline, which generates and populates embeddings. This separation allows for clearer focus on embedding generation and evaluation. #### Key Points: - **Separate Evaluation Pipeline**: It’s best practice to evaluate embeddings in a separate pipeline, which can also serve as a gating mechanism for production readiness. - **Local vs. Cloud LLM Judge**: During development, consider using a local LLM judge for faster iterations. For final evaluations, utilize cloud LLMs like Anthropic's Claude or OpenAI's GPT-3.5/4 to assess embedding performance. - **Human Oversight**: Automated evaluation is beneficial but does not replace the need for human review. The LLM judge is costly and time-consuming, necessitating human validation of results. - **Evaluation Frequency**: The evaluation frequency and depth depend on the project’s constraints. Balance the cost of evaluations with the need for rapid iterations. Quick tests (e.g., retrieval system tests) can be run frequently, while more expensive evaluations (e.g., LLM judge) should be less frequent. #### Next Steps: To enhance retrieval performance, consider adding a reranker without retraining embeddings. #### Running the Evaluation Pipeline: 1. Clone the project repository: ```bash git clone https://github.com/zenml-io/zenml-projects.git ``` 2. Navigate to the `llm-complete-guide` directory and follow the `README.md` instructions. 3. 
Run the evaluation pipeline after generating embeddings: ```bash python run.py --evaluation ``` This command executes the evaluation pipeline and outputs results to the console, allowing you to inspect logs and progress in the dashboard. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md === ### Retrieval Evaluation in RAG Pipeline The retrieval component in a Retrieval-Augmented Generation (RAG) pipeline is crucial for finding relevant documents to support the generation component. This section outlines methods to evaluate the performance of the retrieval component, focusing on the accuracy of semantic search. #### Manual Evaluation with Handcrafted Queries A straightforward method for evaluation involves manually creating queries that target specific documents. This process, while time-consuming, helps identify edge cases and assess the retrieval component's effectiveness. **Example Queries:** | Question | URL Ending | |----------|------------| | How do I get going with the Label Studio integration? What are the first steps? | stacks-and-components/component-guide/annotators/label-studio | | How can I write my own custom materializer? | user-guide/advanced-guide/data-management/handle-custom-data-types | | How do I generate embeddings as part of a RAG pipeline when using ZenML? | user-guide/llmops-guide/rag-with-zenml/embeddings-generation | To evaluate, encode the query as a vector and query the PostgreSQL database for similar vectors, checking if the expected URL appears in the top results. **Code for Manual Evaluation:** ```python def query_similar_docs(question: str, url_ending: str) -> tuple: embedded_question = get_embeddings(question) top_similar_docs_urls = get_topn_similar_docs(embedded_question, get_db_conn(), n=5, only_urls=True) return (question, url_ending, [url[0] for url in top_similar_docs_urls]) def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: failures = sum(1 for pair in question_doc_pairs if pair["url_ending"] not in query_similar_docs(pair["question"], pair["url_ending"])[2]) return round((failures / len(question_doc_pairs)) * 100, 2) ``` #### Automated Evaluation with Synthetic Queries For broader evaluation, synthetic queries can be generated using a language model (LLM). Each document chunk is processed to create a corresponding question, which is then used to evaluate the retrieval component. **Code for Generating Questions:** ```python from typing import List from litellm import completion from zenml import step def generate_question(chunk: str, local: bool = False) -> str: model = "ollama/mixtral" if local else "gpt-3.5-turbo" response = completion(model=model, messages=[{"content": f"Generate a question about this text: `{chunk}`", "role": "user"}]) return response.choices[0].message.content @step def generate_questions_from_chunks(docs_with_embeddings: List[Document], local: bool = False) -> List[Document]: for doc in docs_with_embeddings: doc.generated_questions = [generate_question(doc.page_content, local)] return docs_with_embeddings ``` After generating questions, evaluate the retrieval component by checking if the original document's URL is among the top results. 
**Code for Full Evaluation:** ```python @step def retrieval_evaluation_full(sample_size: int = 50) -> Annotated[float, "full_failure_rate_retrieval"]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) failures = sum(1 for item in dataset if item["filename"].split("/")[-1] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2]) return round((failures / len(dataset)) * 100, 2) ``` #### Results and Improvement Areas Initial tests yielded a 20% failure rate for handcrafted queries and 16% for synthetic queries, indicating room for improvement. Potential enhancements include: - **Diverse Question Generation**: Use varied prompts for generating questions. - **Semantic Similarity Metrics**: Implement metrics like cosine similarity for nuanced performance evaluation. - **Comparative Evaluation**: Test different retrieval methods to identify strengths and weaknesses. - **Error Analysis**: Investigate failure cases to pinpoint improvement areas. This evaluation process establishes a baseline for the retrieval component's performance and guides future enhancements. The next steps will focus on evaluating the generation component of the RAG pipeline to ensure comprehensive performance assessment. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/README.md === ### Evaluation and Metrics Summary This section focuses on evaluating the performance of your Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluation is essential for understanding performance and identifying improvement areas. Traditional metrics like accuracy, precision, and recall are often inadequate for language models because their outputs are open-ended and subjective. Key evaluation areas include: - **Retrieval Evaluation**: Assessing the relevance of retrieved documents to the query. - **Generation Evaluation**: Evaluating the coherence and helpfulness of generated text. When evaluating your RAG pipeline, consider your specific use case and acceptable error levels. For example, in a user-facing chatbot, evaluate: - Relevance of retrieved documents. - Coherence and helpfulness of generated answers. - Presence of hate speech or toxic language. End-to-end evaluations of the RAG pipeline allow for subjective metrics, as they assess the final output. For practical guidance on evaluation timing and result interpretation, refer to the detailed sections on retrieval and generation evaluation. ### Code Example A high-level code example demonstrating the two main evaluation areas can be found in the documentation. This summary provides a concise overview of the evaluation process for RAG pipelines, emphasizing the importance of tailored metrics based on specific use cases. ================================================== === File: docs/book/user-guide/cloud-guide/cloud-guide.md === ### Cloud Guide Summary This section provides instructions for connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is the configuration of tools and infrastructure necessary for running pipelines. ZenML acts as a translation layer, enabling code execution across different stacks. 
**Key Points:** - The focus is on **registering** a stack, assuming the required resources for running pipelines are already **provisioned**. - Infrastructure provisioning can be done: - Manually - Using the **in-browser stack deployment wizard** - Using the **stack registration wizard** - Using **ZenML Terraform modules** ![ZenML is the translation layer that allows your code to run on any of your stacks](../../.gitbook/assets/vpc_zenml.png) ================================================== === File: docs/book/reference/community-and-content.md === ### Community & Content Overview ZenML provides various channels for community engagement and support: - **Slack Channel**: Join the [ZenML Slack channel](https://zenml.io/slack) for direct interaction with the core team and community discussions. It's a valuable resource for questions and project sharing. - **Social Media**: Follow us on [LinkedIn](https://www.linkedin.com/company/zenml) and [Twitter](https://twitter.com/zenml_io) for updates on releases, events, and MLOps. Engagement through comments and shares is encouraged. - **YouTube Channel**: Access our [YouTube channel](https://www.youtube.com/c/ZenML) for video tutorials and workshops that guide you through the ZenML framework. - **Public Roadmap**: Our [public roadmap](https://zenml.io/roadmap) invites community feedback on feature requests and prioritization, fostering collaboration between users and developers. - **Blog**: Visit our [Blog](https://zenml.io/blog/) for articles on tool implementation, new features, and insights from our team. - **Podcast**: Tune into our [Podcast](https://podcast.zenml.io/) for interviews and discussions on machine learning, deep learning, and MLOps with industry leaders. - **Newsletter**: Subscribe to our [Newsletter](https://zenml.io/newsletter-signup) for updates on open-source tooling and ZenML news. This documentation serves as a comprehensive guide to connecting with the ZenML community and accessing resources for learning and collaboration. ================================================== === File: docs/book/reference/how-do-i.md === # How do I...? **Last Updated**: December 13, 2023 ### Common Questions: - **Contributing to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small changes, open a pull request. For larger features, discuss on [Slack](https://zenml.io/slack/) or create an issue. - **Adding Custom Components**: Start with the [custom stack component documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For custom orchestrators, see [this page](../component-guide/orchestrators/custom.md). - **Mitigating Dependency Clashes**: Consult the [handling dependencies documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md). - **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Each stack component documentation details deployment on popular cloud providers. - **Deploying on Internal Clusters**: Check the documentation on [self-hosted ZenML deployments](../getting-started/deploying-zenml/README.md). - **Hyperparameter Tuning**: Refer to our guide on [hyperparameter tuning](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md). - **Resetting ZenML Client**: Use `zenml clean` to reset your client and wipe the local metadata database. This is destructive; consult us on [Slack](https://zenml.io/slack/) if unsure. 
- **Creating Dynamic Pipelines**: Read about composing steps and pipelines in our [starter guide](../user-guide/starter-guide/create-an-ml-pipeline.md) and check related code examples in the hyperparameter tuning guide. - **Using Project Templates**: Project templates, especially the Starter template (`starter`), help set up quickly. You can also create templates in a Git repository. - **Upgrading ZenML**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, refer to the [upgrade documentation](../how-to/manage-zenml-server/upgrade-zenml-server.md). - **Using Specific Stack Components**: Visit the [component guide](../component-guide/README.md) for tips on using each integration and component with ZenML. ================================================== === File: docs/book/reference/faq.md === ### ZenML FAQ Summary **Purpose of ZenML**: Developed to address challenges faced in deploying machine learning models in production, ZenML provides a simple, production-ready solution for large-scale ML pipelines. **ZenML vs. Orchestrators**: ZenML is not merely an orchestrator like Airflow or Kubeflow; it is a framework that enables running pipelines on various orchestrators. It integrates with multiple components of an ML system and supports standard orchestrators out-of-the-box. Users can also create custom orchestrators for more control. **Tool Integration**: For integration with specific tools, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md). Active integration examples can be found in the [integration test code](https://github.com/zenml-io/zenml/tree/main/tests/integration/examples). ZenML is extensible, and users are encouraged to integrate it with other tools as needed. **Platform Support**: - **Windows**: Officially supported via WSL; limited functionality outside WSL. - **Macs with Apple Silicon**: Supported with the following environment variable: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is necessary for local server use but not required for CLI usage with a deployed server. **Custom Tool Integration**: Guidance for extending ZenML with custom tools is available [here](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). **Contributing**: Community contributions are welcome. Start with issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). **Community Engagement**: Join the [Slack group](https://zenml.io/slack/) for discussions and support. **License**: ZenML is licensed under the Apache License Version 2.0. The complete license is available in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). Contributions will also be licensed under this agreement. ================================================== === File: docs/book/reference/environment-variables.md === # Environment Variables for ZenML ZenML allows control over its behavior through several pre-defined environment variables. 
Below are the key variables, their default values, and options: ## Logging Verbosity Set the logging level: ```bash export ZENML_LOGGING_VERBOSITY=INFO # Options: INFO, WARN, ERROR, CRITICAL, DEBUG ``` ## Logging Format Define the logging output format: ```bash export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s' ``` ## Disable Step Logs Storage Control whether step logs are stored: ```bash export ZENML_DISABLE_STEP_LOGS_STORAGE=false # Set to true to disable storage ``` ## ZenML Repository Path Specify the path for ZenML's repository: ```bash export ZENML_REPOSITORY_PATH=/path/to/somewhere ``` ## Analytics Opt-Out Opt out of analytics tracking: ```bash export ZENML_ANALYTICS_OPT_IN=false ``` ## Debug Mode Enable developer mode: ```bash export ZENML_DEBUG=true ``` ## Active Stack Set the active stack by its UUID: ```bash export ZENML_ACTIVE_STACK_ID= ``` ## Prevent Pipeline Execution Prevent execution of pipelines: ```bash export ZENML_PREVENT_PIPELINE_EXECUTION=false # Set to true to prevent execution ``` ## Disable Rich Traceback Disable rich traceback feature: ```bash export ZENML_ENABLE_RICH_TRACEBACK=true # Set to false to disable ``` ## Disable Colorful Logging Disable colorful logging: ```bash export ZENML_LOGGING_COLORS_DISABLED=true ``` To disable locally but keep enabled on remote orchestrators, set it in the orchestrator's environment. ## Disable Stack Validation Skip stack validation: ```bash export ZENML_SKIP_STACK_VALIDATION=true ``` ## Ignore Untracked Code Repository Files Ignore untracked files in code repositories: ```bash export ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=true ``` ## ZenML Global Config Path Set the path for the global config file: ```bash export ZENML_CONFIG_PATH=/path/to/somewhere ``` ## Server Configuration Refer to the ZenML Server documentation for server configuration options. ## Client Configuration Connect the ZenML Client to a server by setting: ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY= ``` This configuration is useful for CI/CD environments and containerized setups. ================================================== === File: docs/book/reference/api-reference.md === # ZenML API Reference Summary ## Overview The ZenML server operates as a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local usage, access is available at `http://127.0.0.1:8237/docs` when logged in locally. ## Authentication Methods ### Using a Short-Lived API Token 1. Generate a short-lived API token (valid for 1 hour) from the ZenML dashboard. 2. Use the token as a bearer token in HTTP requests. Examples: **Curl:** ```bash curl -H "Authorization: Bearer YOUR_API_TOKEN" https://your-zenml-server/api/v1/current-user ``` **Wget:** ```bash wget -qO- --header="Authorization: Bearer YOUR_API_TOKEN" https://your-zenml-server/api/v1/current-user ``` **Python:** ```python import requests response = requests.get( "https://your-zenml-server/api/v1/current-user", headers={"Authorization": f"Bearer YOUR_API_TOKEN"} ) print(response.json()) ``` **Important Notes:** - Tokens expire after 1 hour and cannot be retrieved post-generation. - Tokens are user-scoped and inherit permissions. - For long-term access, consider using a service account and API key. ### Using a Service Account and API Key 1. Create a service account: ```shell zenml service-account create myserviceaccount ``` This will output the ``. 2. Obtain an API token using the API key with a POST request to `/api/v1/login`. 
Examples: **Curl:** ```bash curl -X POST -d "password=" https://your-zenml-server/api/v1/login ``` **Wget:** ```bash wget -qO- --post-data="password=" --header="Content-Type: application/x-www-form-urlencoded" https://your-zenml-server/api/v1/login ``` **Python:** ```python import requests response = requests.post( "https://your-zenml-server/api/v1/login", data={"password": ""}, headers={"Content-Type": "application/x-www-form-urlencoded"} ) print(response.json()) ``` 3. Use the obtained API token for authenticated requests (same as above examples). **Important Notes:** - Tokens are scoped to the service account and inherit permissions. - Tokens expire after a configured duration (typically 1 hour). - Handle API tokens securely; rotate compromised keys via the ZenML dashboard or command line. This summary encapsulates the key technical details and usage examples for accessing the ZenML API programmatically. ================================================== === File: docs/book/reference/python-client.md === ### ZenML Python Client Overview The ZenML Python `Client` enables interaction with resources like pipelines, runs, and stacks stored in a ZenML instance. Resources can be fetched, updated, or created programmatically. #### REST API Alternative For other programming languages, ZenML resources can be accessed via REST API endpoints. Refer to the server's `/docs/` page for available endpoints. ### Usage Example To fetch the last 10 pipeline runs for the current stack: ```python from zenml.client import Client client = Client() my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, sort_by="desc:start_time", size=10, ) for pipeline_run in my_runs_on_current_stack: print(pipeline_run.name) ``` ### Main ZenML Resources - **Pipelines**: Tracked pipelines. - **Pipeline Runs**: Details of executed runs. - **Run Templates**: Templates for running pipelines. - **Step Runs**: Steps of pipeline runs. - **Artifacts**: Artifacts generated during runs. - **Schedules**: Metadata for scheduled runs. - **Builds**: Docker images for pipelines. - **Code Repositories**: Connected git repositories. #### Additional Resources - **Stacks**: Registered stacks in ZenML. - **Stack Components**: Components like orchestrators and artifact stores. - **Flavors**: Variants of stack components. - **User**: Registered users (default user in local runs). - **Secrets**: Authentication secrets in the ZenML Secret Store. - **Service Connectors**: Connectors for infrastructure integration. ### Client Methods #### Reading and Writing Resources **List Methods**: Retrieve resources with options for filtering and pagination. ```python client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, sort_by="desc:start_time", size=10, ) ``` **Get Methods**: Fetch specific resources by ID, name, or prefix. ```python client.get_pipeline_run("413cfb42-a52c-4bf1-a2fd-78af2f7f0101") # By ID client.get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") # By Name ``` **Create, Update, Delete Methods**: Available for certain resources; check the Client SDK documentation for specifics. #### Active User and Stack Access information about the authenticated user and the active stack: ```python my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, ) ``` ### Resource Models ZenML Client methods return **Response Models** (Pydantic Models) ensuring data validation. 
For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. **Request, Update, and Filter Models** are used for server API endpoints, not Client methods. For detailed model fields, refer to the [ZenML Models SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-models/#zenml.models). ### Important Notes - Response Models are not related to machine learning models. - All resources share common fields defined in **Base Models**. This summary provides a concise overview of the ZenML Python Client, its usage, resources, methods, and model structures while retaining critical technical details. ================================================== === File: docs/book/reference/global-settings.md === ### ZenML Global Settings Overview The **ZenML Global Config Directory** stores global settings for ZenML installations, typically located at: - **Linux**: `~/.config/zenml` - **Mac**: `~/Library/Application Support/zenml` - **Windows**: `C:\Users\%USERNAME%\AppData\Local\zenml` You can override the default path using the `ZENML_CONFIG_PATH` environment variable. To check the current config directory, use: ```shell zenml status python -c 'from zenml.utils.io_utils import get_global_config_directory; print(get_global_config_directory())' ``` **Warning**: Avoid manually altering files in the global config directory. Use CLI commands for management: - `zenml analytics` - Manage analytics settings - `zenml clean` - Reset configuration to default - `zenml downgrade` - Downgrade ZenML version in global config ### Initialization When ZenML is first run, it creates the global config directory and initializes default settings, including a default Stack. Example output: ``` Initializing the ZenML global configuration version to 0.13.2 Creating default user 'default' ... Creating default stack for user 'default'... ``` The directory structure post-initialization: ``` /home/stefan/.config/zenml ├── config.yaml # Global Configuration Settings └── local_stores # Local data storage for stack components ├── # Local Store subdirectory └── default_zen_store └── zenml.db # SQLite database for ZenML data ``` ### Configuration Details 1. **config.yaml**: Contains global settings like client ID, database config, and active Stack. Example contents: ```yaml active_stack_id: ... analytics_opt_in: true store: database: ... url: ... username: ... user_id: d980f13e-05d1-4765-92d2-1dc7eb7addb7 version: 0.13.2 ``` 2. **local_stores**: Subdirectories for local stack components, storing artifacts and data. 3. **zenml.db**: Default SQLite database for storing stack-related information. ### Usage Analytics ZenML collects anonymized usage statistics to improve the tool. You can opt out with: ```bash zenml analytics opt-out ``` ### Version Mismatch (Downgrading) If you downgrade ZenML, you may see an error indicating a version mismatch: ```shell `The ZenML global configuration version (%s) is higher than the version of ZenML currently being used (%s).` ``` To downgrade the global configuration, use: ```shell zenml downgrade ``` **Warning**: Downgrading may lead to unexpected behavior. To reset the configuration, run: ```shell zenml clean ``` This documentation provides essential details about managing ZenML global settings, initialization, configuration structure, analytics, and handling version mismatches. 
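**Illustrative sketch**: The warning above advises against editing these files by hand, but reading them can help with debugging. The snippet below is not taken from the ZenML docs; it simply uses the `get_global_config_directory` helper shown earlier to locate `config.yaml` and print a few fields relevant to version and stack issues. It assumes the file layout listed above and that PyYAML is available (ZenML installs it as a dependency).

```python
import os

import yaml
from zenml.utils.io_utils import get_global_config_directory

# Locate the global config directory and read config.yaml (read-only!).
config_path = os.path.join(get_global_config_directory(), "config.yaml")
with open(config_path) as f:
    config = yaml.safe_load(f)

# Fields that commonly matter when debugging version or stack problems.
print(f"Config version:   {config.get('version')}")
print(f"Active stack ID:  {config.get('active_stack_id')}")
print(f"Analytics opt-in: {config.get('analytics_opt_in')}")
```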
================================================== === File: docs/book/component-guide/integration-overview.md === ### Overview of ZenML Integrations ZenML enhances MLOps workflows by integrating with various tools across the MLOps stack. This allows users to orchestrate ML pipelines using tools like **Airflow** or **Kubeflow**, track experiments with **MLflow** or **Weights & Biases**, and deploy models on Kubernetes using **Seldon Core**. ZenML's design avoids vendor lock-in, enabling easy transitions between tools as requirements evolve. ### Available Integrations A comprehensive list of ZenML integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) and in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations) on GitHub. ### Installing ZenML Integrations To install integrations, use the command: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs the preferred versions of the specified integrations via pip. The `-y` flag automatically confirms the installation prompts. ### Using `uv` for Package Installation For package management, you can use [`uv`](https://github.com/astral-sh/uv) by adding the `--uv` flag to the installation command. Ensure `uv` is installed, as this is an experimental feature. ### Upgrading ZenML Integrations To upgrade integrations, run: ```bash zenml integration upgrade mlflow pytorch -y ``` The `-y` flag confirms the upgrade without prompts. Omitting integration names upgrades all installed integrations. ### Community Contributions ZenML encourages community contributions for new integrations. Refer to the [roadmap](https://zenml.io/roadmap) for prioritized tools and check the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for details on how to contribute. ================================================== === File: docs/book/component-guide/component-guide.md === # Overview of MLOps Components in ZenML MLOps can be overwhelming due to the multitude of tools available. ZenML categorizes these tools into **Stacks and Stack Components** to clarify their roles in MLOps pipelines. Stack components are base abstractions that standardize workflows, allowing users to implement custom components or utilize built-in integrations. ## Supported Stack Components | **Type** | **Description** | |------------------------|----------------------------------------------------------| | [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline run orchestration. | | [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts generated by pipelines. | | [Container Registry](./container-registries/container-registries.md) | Repository for container images. | | [Step Operator](./step-operators/step-operators.md) | Executes individual steps in specific environments. | | [Model Deployer](./model-deployers/model-deployers.md) | Platforms for online model serving. | | [Feature Store](./feature-stores/feature-stores.md) | Manages data and features. | | [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks machine learning experiments. | | [Alerter](./alerters/alerters.md) | Sends alerts through designated channels. | | [Annotator](./annotators/annotators.md) | Labels and annotates data. | | [Data Validator](./data-validators/data-validators.md) | Validates data and models. | | [Image Builder](./image-builders/image-builders.md) | Builds container images. 
| | [Model Registry](./model-registries/model-registries.md) | Manages and interacts with ML models. | Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional based on the pipeline's maturity. ## Custom Component Flavors Users can create custom component flavors to modify ZenML's behavior. For more details, refer to the [guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for component types, such as the [custom orchestrator guide](orchestrators/custom.md). ================================================== === File: docs/book/component-guide/README.md === # Overview of ZenML MLOps Components and Integrations ZenML categorizes MLOps tools into stack components, which standardize workflows in your MLOps pipeline. Each stack component serves a specific function, and the following are the currently supported components: | **Type of Stack Component** | **Description** | |------------------------------|-----------------| | [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | | [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | | [Container Registry](container-registries/container-registries.md) | Stores container images | | [Data Validator](data-validators/data-validators.md) | Validates data and models | | [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | | [Model Deployer](model-deployers/model-deployers.md) | Online model serving platforms | | [Step Operator](step-operators/step-operators.md) | Executes pipeline steps in specialized environments | | [Alerter](alerters/alerters.md) | Sends alerts via specified channels | | [Image Builder](image-builders/image-builders.md) | Builds container images | | [Annotator](annotators/annotators.md) | Labels and annotates data | | [Model Registry](model-registries/model-registries.md) | Manages ML models | | [Feature Store](feature-stores/feature-stores.md) | Manages data/features | Each ZenML pipeline requires at least an orchestrator and an artifact store, while other components can be added as needed. ## Custom Component Flavors Users can create custom components by writing their own component flavors. For guidance, refer to the [general guide](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides like the [custom orchestrator guide](orchestrators/custom.md). ## Integrations ZenML integrates with various tools to enhance MLOps processes. Examples include using [Airflow](orchestrators/airflow.md) for orchestration, [MLflow Tracking](experiment-trackers/mlflow.md) for experiment tracking, and deploying models with [Seldon Core](model-deployers/seldon.md). This flexibility allows users to avoid vendor lock-in and adapt their stack as requirements change. ### Available Integrations A comprehensive list of ZenML integrations can be found on the [integrations webpage](https://zenml.io/integrations) or in the [GitHub integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). ### Installing Integrations To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs preferred versions via pip. The `-y` flag confirms installations without prompts. 
### Upgrade Integrations To upgrade integrations, use: ```bash zenml integration upgrade mlflow pytorch -y ``` The `-y` flag confirms upgrades without prompts, and if no integrations are specified, all installed integrations will be upgraded. ### Using `uv` for Package Installation You can utilize [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag to the installation command. Ensure `uv` is installed, as this is an experimental feature. ### Community Contributions ZenML welcomes community contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and the [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more details. ================================================== === File: docs/book/component-guide/data-validators/evidently.md === ### Summary of Evidently Data Validator Documentation **Overview:** The Evidently Data Validator, integrated with ZenML, utilizes the Evidently library to monitor data quality, data drift, model drift, and model performance. It generates reports and performs checks that can trigger automated corrective actions or provide visual interpretations. **Use Cases:** Evidently is useful for: - **Data Quality Reports:** Analyze feature statistics and compare datasets (e.g., training vs. testing). - **Data Drift Reports:** Detect changes in feature distributions between datasets. - **Target Drift Reports:** Identify changes in target functions or model predictions. - **Performance Reports:** Evaluate model performance using metrics for regression or classification tasks. **Deployment:** To deploy the Evidently Data Validator, install the integration: ```shell zenml integration install evidently -y ``` Register the data validator: ```shell zenml data-validator register evidently_data_validator --flavor=evidently zenml stack register custom_stack -dv evidently_data_validator ... --set ``` **Usage:** Evidently profiling functions accept `pandas.DataFrame` datasets and generate `Report` objects. Key steps include: 1. **Data Profiling:** Generate reports using the standard Evidently report step or custom implementations. 2. **Data Validation:** Run automated validation tests using the standard Evidently test step or custom implementations. 
**Example of Evidently Report Step:** ```python from zenml.integrations.evidently.steps import evidently_report_step text_data_report = evidently_report_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping(target="Rating", ...), metrics=[EvidentlyMetricConfig.metric("DataQualityPreset"), ...], download_nltk_data=True, ), ) ``` **Example of Data Validation Step:** ```python from zenml.integrations.evidently.steps import evidently_test_step text_data_test = evidently_test_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping(target="Rating", ...), tests=[EvidentlyTestConfig.test("DataQualityTestPreset"), ...], download_nltk_data=True, ), ) ``` **Direct Use of Evidently:** You can also directly use Evidently in custom pipeline steps: ```python from evidently.report import Report @step def data_profiler(dataset: pd.DataFrame): report = Report(metrics=[metric_preset.DataQualityPreset()]) report.run(current_data=dataset, reference_data=dataset) return report.json(), HTMLString(report.show(mode="inline").data) ``` **Visualization:** Evidently reports can be visualized in the ZenML dashboard or Jupyter notebooks using the `visualize()` method: ```python def visualize_results(pipeline_name: str, step_name: str): pipeline = Client().get_pipeline(pipeline=pipeline_name) evidently_step = pipeline.last_run.steps[step_name] evidently_step.visualize() ``` This documentation provides a comprehensive guide on utilizing the Evidently Data Validator for effective data and model monitoring within ZenML pipelines. ================================================== === File: docs/book/component-guide/data-validators/deepchecks.md === ### Summary of Deepchecks Integration with ZenML **Overview**: Deepchecks is an open-source library for data and model validation, integrated with ZenML to facilitate data integrity, data drift, model drift, and model performance testing within pipelines. It supports both tabular data (`pandas.DataFrame`) and computer vision data (`torch.utils.data.dataloader.DataLoader`). **Key Features**: - **Data Integrity Checks**: Identify issues like missing values and conflicting labels. - **Data Drift Checks**: Detect skew and drift by comparing target and reference datasets. - **Model Performance Checks**: Evaluate model performance using metrics like confusion matrix. - **Multi-Model Performance Reports**: Summarize performance scores across multiple models. **Installation**: To use Deepchecks with ZenML, install the integration: ```shell zenml integration install deepchecks -y ``` **Registering the Data Validator**: ```shell zenml data-validator register deepchecks_data_validator --flavor=deepchecks zenml stack register custom_stack -dv deepchecks_data_validator ... --set ``` **Usage**: Deepchecks validation checks are categorized based on input requirements: 1. **Data Integrity Checks**: Single dataset input. 2. **Data Drift Checks**: Two datasets (target and reference). 3. **Model Validation Checks**: Single dataset and model input. 4. **Model Drift Checks**: Two datasets and a model input. **Standard Steps**: - `deepchecks_data_integrity_check_step`: For data integrity tests. - `deepchecks_data_drift_check_step`: For data drift tests. - `deepchecks_model_validation_check_step`: For model performance tests. - `deepchecks_model_drift_check_step`: For model drift tests. 
**Example of Data Integrity Check**: ```python from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step @pipeline def data_validation_pipeline(): df_train, df_test = data_loader() deepchecks_data_integrity_check_step(dataset=df_train) data_validation_pipeline() ``` **Customizing Checks**: You can specify a custom list of checks and additional parameters: ```python deepchecks_data_integrity_check_step( check_list=[ DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES, DeepchecksDataIntegrityCheck.TABULAR_DATA_DUPLICATES, ], dataset=... ) ``` **Docker Configuration for Remote Orchestrators**: For remote orchestrators, extend the Docker image to include necessary binaries: ```shell ARG ZENML_VERSION=0.20.0 FROM zenmldocker/zenml:${ZENML_VERSION} AS base RUN apt-get update RUN apt-get install ffmpeg libsm6 libxext6 -y ``` **Visualizing Results**: Results can be visualized in the ZenML dashboard or Jupyter notebooks: ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline=pipeline_name) last_run = pipeline.last_run step = last_run.steps[step_name] step.visualize() ``` This integration allows for efficient testing and validation of data and models within ZenML pipelines, ensuring data quality and model reliability. For detailed guidance, refer to the official Deepchecks documentation. ================================================== === File: docs/book/component-guide/data-validators/data-validators.md === # Data Validators Overview Data Validators are essential tools in machine learning (ML) for ensuring data quality and monitoring model performance throughout the ML project lifecycle. They help in data profiling, integrity testing, and detecting data and model drift at various stages, including data ingestion, model training, evaluation, and inference. ## Key Points - **Importance**: Good data is crucial; poor data leads to unreliable model outcomes. Data Validators help maintain data quality and model performance. - **Functionality**: They generate data profiles and quality check reports, which are versioned and stored in the Artifact Store for later retrieval and visualization. - **Use Cases**: - Log data quality and model performance at different development stages. - Perform regular integrity checks on newly ingested data. - Compare new training data and model performance against references in continuous training pipelines. - Analyze data drift and detect discrepancies in batch and online inference pipelines. 
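**Illustrative sketch**: As noted under the key points above, validation reports are versioned and stored in the Artifact Store, so they can be retrieved after a run completes. The snippet below is not from the official docs; the pipeline and step names are hypothetical placeholders, and the single-output `.output.load()` accessor assumes the validation step returns exactly one artifact.

```python
from zenml.client import Client

# Hypothetical names: substitute your own pipeline and validation step.
pipeline = Client().get_pipeline("data_validation_pipeline")
validation_step = pipeline.last_run.steps["data_profiler"]

# Load the stored report artifact for custom post-processing or checks.
report = validation_step.output.load()
print(type(report))
```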
## Data Validator Flavors Data Validators are optional components in ZenML, with various integrations available: | Data Validator | Features | Data Types | Model Types | Notes | Flavor/Integration | |----------------|----------|-------------|-------------|-------|--------------------| | [Deepchecks](deepchecks.md) | Data quality, drift, performance | `pandas.DataFrame`, `torch.utils.data.dataloader.DataLoader` | `sklearn.base.ClassifierMixin`, `torch.nn.Module` | Adds validation tests to pipelines | `deepchecks` | | [Evidently](evidently.md) | Data quality, drift, performance | `pandas.DataFrame` | N/A | Generates reports and visualizations | `evidently` | | [Great Expectations](great_expectations.md) | Profiling, quality | `pandas.DataFrame` | N/A | Data testing and documentation | `great_expectations` | | [Whylogs/WhyLabs](whylogs.md) | Data drift | `pandas.DataFrame` | N/A | Generates data profiles | `whylogs` | To view available Data Validator flavors, use: ```shell zenml data-validator flavor list ``` ## How to Use Data Validators 1. **Configuration**: Add a Data Validator to your ZenML stack. 2. **Integration**: Utilize built-in validation steps in your pipelines or use libraries directly in custom steps, returning results as artifacts. 3. **Artifact Access**: Access validation artifacts in subsequent steps or fetch them later for processing or visualization. For detailed usage, refer to the documentation for the specific Data Validator flavor in your stack. ================================================== === File: docs/book/component-guide/data-validators/whylogs.md === ### Summary of Whylogs/WhyLabs Profiling with ZenML Integration **Overview**: The whylogs/WhyLabs Data Validator in ZenML utilizes the whylogs library to create and manage data profiles, which are statistical summaries of data used for validation and monitoring in data pipelines. **Use Cases**: - **Data Quality**: Validate input data quality. - **Data Drift**: Detect changes in input feature distributions. - **Model Drift**: Identify discrepancies between training and serving data, concept drift, and performance degradation. **Deployment**: 1. **Install Integration**: ```shell zenml integration install whylogs -y ``` 2. **Register Data Validator**: - Without WhyLabs: ```shell zenml data-validator register whylogs_data_validator --flavor=whylogs zenml stack register custom_stack -dv whylogs_data_validator ... --set ``` - With WhyLabs (requires secret creation): ```shell zenml secret create whylabs_secret \ --whylabs_default_org_id= \ --whylabs_api_key= zenml data-validator register whylogs_data_validator --flavor=whylogs \ --authentication_secret=whylabs_secret ``` 3. **Enable Logging**: Set `upload_to_whylabs=True` in custom steps to upload profiles. 
**Usage**: - **Standard Step**: ```python from zenml.integrations.whylogs.steps import get_whylogs_profiler_step train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2") ``` - **Custom Step**: ```python from zenml import step import pandas as pd import whylogs as why from whylogs.core import DatasetProfileView @step def data_loader() -> Tuple[pd.DataFrame, DatasetProfileView]: X, y = datasets.load_diabetes(return_X_y=True, as_frame=True) df = pd.merge(X, y, left_index=True, right_index=True) profile = why.log(pandas=df).profile().view() return df, profile ``` **Visualization**: - Profiles can be visualized in the ZenML dashboard or Jupyter notebooks using: ```python from zenml.client import Client def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None): pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") whylogs_step = pipe.last_run.steps[step_name] whylogs_step.visualize() ``` **Key Points**: - The integration currently supports only tabular data in `pandas.DataFrame` format. - For detailed profiling, users can choose between standard steps, custom implementations, or direct use of the whylogs library. - Consult the official whylogs documentation for advanced features and configurations. ================================================== === File: docs/book/component-guide/data-validators/great-expectations.md === ### Great Expectations with ZenML **Overview**: Great Expectations integrates with ZenML to perform data quality checks and profiling on data within pipelines. It automates corrective actions and generates documentation for results. #### Use Cases - **Data Profiling**: Automatically generates validation rules (Expectations) from dataset properties. - **Data Quality**: Validates datasets against predefined or inferred Expectations. - **Data Docs**: Maintains human-readable documentation of validation rules and results. ZenML currently supports `pandas.DataFrame` for data validation. #### Deployment To install the Great Expectations integration: ```shell zenml integration install great_expectations -y ``` **Configuration Options**: 1. **Let ZenML Manage Configuration**: Automatically uses ZenML's Artifact Store for storing Expectations and results. 2. **Use Existing Configuration**: Point to a local `great_expectations.yaml` file. 3. **Migrate Configuration**: Load existing configurations into ZenML for remote orchestrators. **Warning**: Some CLI commands may not work with ZenML-managed configurations. **Example Commands**: - Registering a data validator: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations zenml stack register custom_stack -dv ge_data_validator ... --set ``` - Using existing configuration: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_root_dir=/path/to/my/great_expectations ``` - Migrating configuration: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml ``` #### Advanced Configuration - **`configure_zenml_stores`**: Automatically updates Great Expectations to use ZenML's Artifact Store. - **`configure_local_docs`**: Sets up a local Data Docs site for visualization. #### Usage in Pipelines **Core Concepts**: - **Expectations / Expectation Suites**: Define validation rules. - **Validations**: Check datasets against Expectations. - **Data Docs**: Visual documentation of results. 
**Data Profiler Step**: Automatically generates an Expectation Suite from a `pandas.DataFrame`: ```python from zenml.integrations.great_expectations.steps import great_expectations_profiler_step ge_profiler_step = great_expectations_profiler_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` **Pipeline Example**: ```python @pipeline def profiling_pipeline(): dataset, _ = importer() ge_profiler_step(dataset) profiling_pipeline() ``` **Data Validator Step**: Validates a dataset using an existing Expectation Suite: ```python from zenml.integrations.great_expectations.steps import great_expectations_validator_step ge_validator_step = great_expectations_validator_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` **Pipeline Example**: ```python @pipeline def validation_pipeline(): dataset, condition = importer() results = ge_validator_step(dataset, condition) message = checker(results) validation_pipeline() ``` #### Direct Use of Great Expectations You can directly use Great Expectations within custom steps by accessing the ZenML-managed Data Context: ```python import great_expectations as ge from zenml.integrations.great_expectations.data_validators import GreatExpectationsDataValidator @step def create_custom_expectation_suite() -> ExpectationSuite: context = GreatExpectationsDataValidator.get_data_context() suite = context.create_expectation_suite("custom_suite") # Add expectations and save context.save_expectation_suite(suite) context.build_data_docs() return suite ``` #### Visualizing Results Results can be visualized in the ZenML dashboard or within Jupyter notebooks using: ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline_name) last_run = pipeline.last_run validation_step = last_run.steps[step_name] validation_step.visualize() visualize_results("validation_pipeline", "profiler") ``` This summary encapsulates the key aspects of using Great Expectations with ZenML, including installation, configuration, usage in pipelines, and visualization of results. ================================================== === File: docs/book/component-guide/data-validators/custom.md === ### Developing a Custom Data Validator in ZenML #### Overview Before creating a custom data validator, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge. #### Important Notes - **Base Abstraction in Progress**: The base abstraction for Data Validators is under development; avoid extending them for now. You can use existing flavors or implement your own, but be prepared for potential refactoring later. #### Steps to Build a Custom Data Validator 1. **Create a Class**: Inherit from the `BaseDataValidator` class and override necessary abstract methods based on your chosen library/service. 2. **Configuration Class**: If configuration is needed, inherit from `BaseDataValidatorConfig`. 3. **Combine Classes**: Inherit from `BaseDataValidatorFlavor` to integrate both classes. 4. **Pipeline Integration** (Optional): Provide standard steps for easy integration into pipelines. 
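**Illustrative sketch**: To make steps 1–3 above concrete, here is a minimal skeleton. It assumes the base classes can be imported from `zenml.data_validators`; the overridden method and the flavor properties are illustrative and should be checked against the current base-class API, especially since the abstraction is still evolving. The class name matches the `flavors.my_flavor.MyDataValidatorFlavor` registration example below.

```python
from typing import Any, Optional, Sequence

import pandas as pd

from zenml.data_validators import (
    BaseDataValidator,
    BaseDataValidatorConfig,
    BaseDataValidatorFlavor,
)


class MyDataValidatorConfig(BaseDataValidatorConfig):
    """User-facing configuration options (example field, assumed)."""

    fail_on_error: bool = True


class MyDataValidator(BaseDataValidator):
    """Implementation that wraps a hypothetical validation library."""

    def data_validation(
        self,
        dataset: pd.DataFrame,
        check_list: Optional[Sequence[str]] = None,
        **kwargs: Any,
    ) -> Any:
        # Call into your validation library here and return its results.
        ...


class MyDataValidatorFlavor(BaseDataValidatorFlavor):
    """Ties the flavor name, config class, and implementation together."""

    @property
    def name(self) -> str:
        return "my_data_validator"

    @property
    def config_class(self):
        return MyDataValidatorConfig

    @property
    def implementation_class(self):
        return MyDataValidator
```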
#### Registration Register your custom data validator flavor using the CLI with dot notation: ```shell zenml data-validator flavor register ``` For example, if `MyDataValidatorFlavor` is in `flavors/my_flavor.py`: ```shell zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor ``` #### Best Practices - Initialize ZenML at the root of your repository to ensure proper resolution of the flavor class. - List available flavors using: ```shell zenml data-validator flavor list ``` #### Class Interactions - **CustomDataValidatorFlavor**: Used during flavor creation via CLI. - **CustomDataValidatorConfig**: Validates user input during stack component registration. - **CustomDataValidator**: Engaged when the component is in use, allowing separation of flavor configuration from implementation. This structure facilitates the registration of flavors and components independently of their implementation dependencies. ================================================== === File: docs/book/component-guide/step-operators/sagemaker.md === ### Summary of SageMaker Step Operator Documentation **Overview**: Amazon SageMaker provides specialized compute instances for training jobs and a UI for model management. ZenML's SageMaker step operator allows submission of individual steps to SageMaker compute instances. **When to Use**: Utilize the SageMaker step operator if: - Your pipeline steps require resources (CPU, GPU, memory) not available in your orchestrator. - You have access to SageMaker. **Deployment Requirements**: 1. Create an IAM role in the AWS console with `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. 2. Install ZenML AWS integration: ```shell zenml integration install aws ``` 3. Install and run Docker. 4. Set up an AWS container registry and a remote artifact store. 5. Choose an instance type for execution (refer to [AWS instance types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html)). 6. (Optional) Create an experiment for grouping SageMaker runs. **Authentication Methods**: 1. **Service Connector** (recommended): - Register a service connector and connect it to the step operator: ```shell zenml service-connector register --type aws -i zenml step-operator register --flavor=sagemaker --role= --instance_type= zenml step-operator connect --connector zenml stack register -s ... --set ``` 2. **Implicit Authentication**: - For local orchestrators, ZenML uses the `default` profile in the AWS configuration file. - For remote orchestrators, ensure the environment can authenticate to AWS and assume the specified IAM role: ```shell zenml step-operator register --flavor=sagemaker --role= --instance_type= zenml stack register -s ... --set python run.py # Authenticates with `default` profile ``` **Using the Step Operator**: To execute steps in SageMaker, specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML builds a Docker image for your code to run in SageMaker. **Additional Configuration**: - Use `SagemakerStepOperatorSettings` for additional configurations. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for settings. 
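**Illustrative sketch**: The snippet below shows one way such settings can be attached to a single step, mirroring the `settings={"step_operator": ...}` pattern used by the other step operators in this guide. It is not taken from the SageMaker docs: the settings object is left at its defaults (see the SDK docs linked above for configurable fields), and `"sagemaker"` stands in for whatever name the step operator was registered under.

```python
from zenml import step
from zenml.config import ResourceSettings
from zenml.integrations.aws.flavors.sagemaker_step_operator_flavor import (
    SagemakerStepOperatorSettings,
)

# Defaults only; configure fields per the SDK docs linked above.
sagemaker_settings = SagemakerStepOperatorSettings()
resource_settings = ResourceSettings(memory="16GB")


@step(
    step_operator="sagemaker",  # assumed registered name of the step operator
    settings={"step_operator": sagemaker_settings, "resources": resource_settings},
)
def trainer() -> None:
    """Runs on a SageMaker compute instance instead of the orchestrator."""
    ...
```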
**CUDA for GPU**: For GPU execution, follow the instructions to enable CUDA for full acceleration. This summary captures the essential technical details and instructions for using the SageMaker step operator with ZenML. ================================================== === File: docs/book/component-guide/step-operators/kubernetes.md === ### Kubernetes Step Operator in ZenML The Kubernetes Step Operator in ZenML allows for executing individual pipeline steps on Kubernetes pods. #### When to Use - Use it when pipeline steps require additional computing resources (CPU, GPU, memory) not provided by the orchestrator. - Access to a Kubernetes cluster is required. #### Deployment Requirements 1. **Kubernetes Cluster**: Must be deployed (refer to the cloud guide). 2. **ZenML Kubernetes Integration**: Install with: ```shell zenml integration install kubernetes ``` 3. **Docker**: Installed and running or a remote image builder configured. 4. **Remote Artifact Store**: Required for artifact read/write access. **Recommendation**: Set up a Service Connector for connecting the Kubernetes step operator to the cluster, especially for cloud-managed clusters. #### Registration and Connection You can register the step operator in two ways: 1. **Using a Service Connector**: ```shell zenml step-operator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml step-operator connect --connector ``` 2. **Using `kubectl` Client**: ```shell zenml step-operator register --flavor=kubernetes --kubernetes_context= ``` Update the active stack: ```shell zenml stack update -s ``` #### Executing Steps Specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML builds Docker images for running steps in Kubernetes. #### Interacting with Pods For debugging, you can interact with pods using `kubectl`. Pods are labeled for easy identification: - `run`: ZenML run name - `pipeline`: ZenML pipeline name Example command to delete pods for a specific pipeline: ```shell kubectl delete pod -n zenml -l pipeline=kubernetes_example_pipeline ``` #### Additional Configuration Use `KubernetesStepOperatorSettings` for advanced configurations: - **Pod Settings**: Node selectors, labels, affinity, tolerations, image pull secrets. - **Service Account**: Specify the service account for the pods. Example configuration: ```python from zenml.integrations.kubernetes.flavors import KubernetesStepOperatorSettings kubernetes_settings = KubernetesStepOperatorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": { "requests": {"cpu": "2", "memory": "4Gi"}, "limits": {"cpu": "4", "memory": "8Gi"} }, "labels": {"app": "ml-pipeline"} }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) @step(settings={"step_operator": kubernetes_settings}) def my_kubernetes_step(): ... ``` #### GPU Support To run steps on GPU, follow specific instructions to enable CUDA and customize settings accordingly. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/) for available attributes and configurations. 
================================================== === File: docs/book/component-guide/step-operators/modal.md === ### Modal Step Operator Overview **Modal** is a cloud infrastructure platform optimized for fast execution, particularly for Docker image building and hardware provisioning. The **ZenML Modal step operator** allows users to submit individual steps to run on Modal compute instances. #### When to Use Utilize the Modal step operator if: - Fast execution is required for resource-intensive steps (CPU, GPU, memory). - Specific hardware requirements (e.g., GPU type, CPU count) need to be defined. - You have access to Modal. #### Deployment Steps 1. **Sign Up**: Create a Modal account [here](https://modal.com/signup). 2. **Install CLI**: Run: ```shell pip install modal modal setup ``` #### Usage Requirements - **ZenML Modal Integration**: Install it with: ```shell zenml integration install modal ``` - **Docker**: Must be installed and running. - **Cloud Artifact Store**: Required for reading/writing artifacts. - **Cloud Container Registry**: Required for storing container images. #### Registering the Step Operator Register and update your stack with: ```shell zenml step-operator register --flavor=modal zenml stack update -s ... ``` #### Executing Steps To execute a pipeline step in Modal, use the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML will build a Docker image for your code. #### Additional Configuration Specify hardware requirements using `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml.integrations.modal.flavors import ModalStepOperatorSettings modal_settings = ModalStepOperatorSettings(gpu="A100") resource_settings = ResourceSettings(cpu=2, memory="32GB") @step( step_operator="modal", settings={ "step_operator": modal_settings, "resources": resource_settings } ) def my_modal_step(): ... ``` - **CPU Parameter**: Accepts a single integer value as a soft minimum limit. - **Cost Example**: For 2 CPUs and 32GB memory, the minimum cost is approximately $1.03/hour. This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu). #### Notes - Settings for region and cloud provider are available for Modal Enterprise and Team plan customers. - Use looser settings to avoid execution failures; Modal provides detailed error messages for troubleshooting. - For more on region selection, see the [Modal docs](https://modal.com/docs/guide/region-selection). ================================================== === File: docs/book/component-guide/step-operators/spark-kubernetes.md === ### Spark Integration Overview The `spark` integration provides two key step operators: 1. **SparkStepOperator**: Base class for Spark-related step operators. 2. **KubernetesSparkStepOperator**: Launches ZenML steps as Spark applications on Kubernetes. ### SparkStepOperator Configuration The configuration for `SparkStepOperator` includes: ```python class SparkStepOperatorConfig(BaseStepOperatorConfig): master: str # Master URL for the Spark cluster (supports Kubernetes, YARN, Mesos) deploy_mode: str = "cluster" # 'cluster' (default) or 'client' submit_kwargs: Optional[Dict[str, Any]] = None # Additional Spark parameters ``` ### Key Methods in SparkStepOperator - **_resource_configuration**: Configures Spark resources. 
- **_backend_configuration**: Configures cluster manager specifics. - **_io_configuration**: Configures input/output sources. - **_additional_configuration**: Appends user-defined parameters. - **_launch_spark_job**: Executes a Spark job using `spark-submit`. ### KubernetesSparkStepOperator This operator extends `SparkStepOperator` and includes: ```python class KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig): namespace: Optional[str] = None # Kubernetes namespace service_account: Optional[str] = None # Service account for Spark components ``` The `_backend_configuration` method is tailored for Kubernetes, building and pushing Docker images. ### Usage Guidelines **When to Use**: - For large datasets. - When distributed computing can enhance performance. **Deployment Requirements**: - **Remote ZenML server**: Follow the deployment guide. - **Kubernetes cluster**: Can be deployed via various cloud providers. ### Spark EKS Setup Guide 1. Create an Amazon EKS cluster role and node role. 2. Attach `AmazonRDSFullAccess` and `AmazonS3FullAccess` policies. 3. Create the EKS cluster and note the cluster name and API server endpoint. **Docker Image Setup**: - Use Spark’s Docker images or build your own with the `docker-image-tool`. **RBAC Configuration**: Create a `rbac.yaml` file for Kubernetes access: ```yaml apiVersion: v1 kind: Namespace metadata: name: spark-namespace --- apiVersion: v1 kind: ServiceAccount metadata: name: spark-service-account namespace: spark-namespace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: spark-role namespace: spark-namespace subjects: - kind: ServiceAccount name: spark-service-account namespace: spark-namespace roleRef: kind: ClusterRole name: edit apiGroup: rbac.authorization.k8s.io ``` Execute the following commands to apply RBAC: ```bash aws eks --region=$REGION update-kubeconfig --name=$EKS_CLUSTER_NAME kubectl create -f rbac.yaml ``` ### Registering and Using the Step Operator To use the `KubernetesSparkStepOperator`: 1. Install the ZenML Spark integration: ```bash zenml integration install spark ``` 2. Register the step operator: ```bash zenml step-operator register spark_step_operator \ --flavor=spark-kubernetes \ --master=k8s://$EKS_API_SERVER_ENDPOINT \ --namespace= \ --service_account= ``` 3. Register the stack: ```bash zenml stack register spark_stack \ -o default \ -s spark_step_operator \ -a spark_artifact_store \ -c spark_container_registry \ -i local_builder \ --set ``` 4. Define a step using the operator: ```python @step(step_operator=) def step_on_spark(...) -> ...: ... ``` ### Additional Configuration For more configuration options, refer to `SparkStepOperatorSettings` and the ZenML SDK documentation. ================================================== === File: docs/book/component-guide/step-operators/azureml.md === ### AzureML Step Operator in ZenML **Overview**: AzureML provides compute instances for training jobs and a UI for model management. The ZenML AzureML step operator allows submission of pipeline steps to AzureML compute instances. #### When to Use - If pipeline steps require additional computing resources (CPU, GPU, memory) not available in your orchestrator. - If you have access to AzureML. #### Deployment Steps 1. **Create AzureML Workspace**: Set up a workspace including an Azure container registry and storage account. 2. **(Optional) Create Compute Instance/Cluster**: Use Azure Machine Learning Studio to create a compute instance or cluster. 
If not created, the step operator will use a serverless compute target or provision a new one. 3. **(Optional) Create Service Principal**: For authentication if using a service connector. #### Prerequisites - Install ZenML Azure integration: ```shell zenml integration install azure ``` - Ensure Docker is installed and running. - Set up an Azure container registry and artifact store. #### Authentication Methods 1. **Service Connector** (Recommended): - Register a service connector with permissions to manage AzureML jobs. ```shell zenml service-connector register --type azure -i zenml step-operator register \ --flavor=azureml \ --subscription_id= \ --resource_group= \ --workspace_name= zenml step-operator connect --connector zenml stack register -s ... --set ``` 2. **Implicit Authentication**: - For local orchestrators, ZenML uses Azure CLI configuration. - For remote orchestrators, ensure they can authenticate to Azure. ```shell zenml step-operator register \ --flavor=azureml \ --subscription_id= \ --resource_group= \ --workspace_name= zenml stack register -s ... --set ``` #### Using the Step Operator To execute a step in AzureML, specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML builds a Docker image `/zenml:` for running steps. #### Additional Configuration Use `AzureMLStepOperatorSettings` to configure compute resources: 1. **Serverless Compute** (default): ```python azureml_settings = AzureMLStepOperatorSettings(mode="serverless") ``` 2. **Compute Instance**: ```python azureml_settings = AzureMLStepOperatorSettings( mode="compute-instance", compute_name="MyComputeInstance", compute_size="Standard_NC6s_v3", ) ``` 3. **Compute Cluster**: ```python azureml_settings = AzureMLStepOperatorSettings( mode="compute-cluster", compute_name="MyComputeCluster", ) ``` #### GPU Support To enable CUDA for GPU usage, follow specific instructions to ensure proper configuration. For more details, refer to the [AzureMLStepOperatorSettings SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.flavors.azureml_step_operator_flavor.AzureMLStepOperatorSettings). ================================================== === File: docs/book/component-guide/step-operators/step-operators.md === # Step Operators The step operator allows executing individual pipeline steps in specialized environments optimized for specific workloads, such as those requiring GPUs or distributed processing frameworks like [Spark](https://spark.apache.org/). ### Comparison to Orchestrators - **Orchestrator**: A mandatory component that executes all pipeline steps in order and provides scheduling features. - **Step Operator**: Used for executing individual steps in separate environments when the orchestrator's environment lacks necessary resources. ### When to Use Use a step operator when pipeline steps require resources not available in the orchestrator's runtime. For example, training a computer vision model that needs a GPU on a Kubernetes cluster without GPU nodes would necessitate a step operator like [SageMaker](sagemaker.md), [Vertex](vertex.md), or [AzureML](azureml.md). 
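To make the split concrete, here is a minimal sketch in which only the resource-hungry training step is handed to a step operator (hypothetically registered as `sagemaker_gpu`), while the rest of the pipeline stays on the orchestrator:

```python
from zenml import pipeline, step

@step
def load_data() -> list:
    # Lightweight step: runs in the orchestrator's own environment.
    return [0.1, 0.2, 0.3]

@step(step_operator="sagemaker_gpu")  # hypothetical step operator name
def train_model(data: list) -> float:
    # Resource-heavy step: executed remotely by the step operator.
    return sum(data) / len(data)

@pipeline
def training_pipeline():
    train_model(load_data())
```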
### Step Operator Flavors ZenML provides the following step operators for executing steps on major cloud providers: | Step Operator | Flavor | Integration | Notes | |---------------|-------------|-------------|------------------------------------| | [AzureML](azureml.md) | `azureml` | `azure` | Executes steps using AzureML | | [Kubernetes](kubernetes.md) | `kubernetes` | `kubernetes` | Executes steps using Kubernetes Pods | | [Modal](modal.md) | `modal` | `modal` | Executes steps using Modal | | [SageMaker](sagemaker.md) | `sagemaker` | `aws` | Executes steps using SageMaker | | [Spark](spark-kubernetes.md) | `spark` | `spark` | Executes steps in a distributed manner using Spark | | [Vertex](vertex.md) | `vertex` | `gcp` | Executes steps using Vertex AI | | [Custom Implementation](custom.md) | _custom_ | | Extend the step operator abstraction | To view available flavors, run: ```shell zenml step-operator flavor list ``` ### How to Use You don't need to interact directly with ZenML step operators in your code. Simply specify the desired step operator in the `@step` decorator of your step. ```python from zenml import step @step(step_operator=) def my_step(...) -> ...: ... ``` #### Specifying Per-Step Resources For additional hardware resources, specify them in your steps as detailed [here](../../how-to/pipeline-development/training-with-gpus/README.md). #### Enabling CUDA for GPU Hardware To run steps on a GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full GPU acceleration. ================================================== === File: docs/book/component-guide/step-operators/vertex.md === ### Google Cloud Vertex AI with ZenML **Overview**: Vertex AI provides specialized compute instances for training jobs and offers a UI for model management. ZenML's Vertex AI step operator enables execution of individual pipeline steps on Vertex AI compute instances. ### When to Use - Use the Vertex step operator if: - Your pipeline steps require computing resources beyond your orchestrator's capabilities. - You have access to Vertex AI. ### Deployment Steps 1. **Enable Vertex AI** from the Google Cloud Console. 2. **Create a Service Account** with permissions: - `roles/aiplatform.admin` for Vertex AI jobs. - `roles/storage.admin` for container registry access. ### Usage Requirements - Install ZenML GCP integration: ```shell zenml integration install gcp ``` - Ensure Docker is installed and running. - Enable Vertex AI and obtain a service account file. - Set up a GCR container registry. - (Optional) Specify a machine type (default: `n1-standard-4`). - Configure a remote artifact store for read/write access. ### Authentication Methods 1. **Using `gcloud` CLI**: ```shell gcloud auth login zenml step-operator register --flavor=vertex --project= --region= ``` 2. **Service Account Key File**: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account_path= ``` 3. **GCP Service Connector** (recommended): ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@ zenml step-operator register --flavor=vertex --region= zenml step-operator connect --connector ``` ### Registering the Step Operator Add the step operator to the active stack: ```shell zenml stack update -s ``` ### Defining Steps Use the registered step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) 
-> ...: """Train a model.""" ``` ZenML builds a Docker image for execution in Vertex AI. ### Additional Configuration Specify service account, network, and reserved IP ranges: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account= --network= --reserved_ip_ranges= ``` ### Custom Settings Use `VertexStepOperatorSettings` for additional configurations: ```python from zenml import step from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings @step(step_operator=, settings={"step_operator": VertexStepOperatorSettings( accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1, machine_type="n1-standard-2", disk_type="pd-ssd", disk_size_gb=100, )}) def trainer(...) -> ...: """Train a model.""" ``` ### GPU Configuration To enable CUDA for GPU usage, follow specific instructions for GPU settings. ### Using Persistent Resources To speed up development: 1. Create a persistent resource via GCP UI. 2. Ensure the step operator is configured with the appropriate service account. 3. Use the persistent resource in your code: ```python @step(step_operator=, settings={"step_operator": VertexStepOperatorSettings( persistent_resource_id="my-persistent-resource", machine_type="n1-standard-4", accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1, )}) def trainer(...) -> ...: """Train a model.""" ``` **Note**: Persistent resources incur costs while running, even when idle. Monitor usage accordingly. ================================================== === File: docs/book/component-guide/step-operators/custom.md === ### Developing a Custom Step Operator in ZenML #### Overview To create a custom step operator in ZenML, familiarize yourself with the general guide on writing custom component flavors. #### Base Abstraction The `BaseStepOperator` is an abstract class for executing pipeline steps in a separate environment. It provides a basic interface with the following key components: ```python from abc import ABC, abstractmethod from typing import List, Type from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig, Flavor from zenml.config.step_run_info import StepRunInfo class BaseStepOperatorConfig(StackComponentConfig): """Base config for step operators.""" class BaseStepOperator(StackComponent, ABC): """Base class for ZenML step operators.""" @abstractmethod def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None: """Executes a step synchronously.""" ``` #### Creating a Custom Step Operator To build your custom flavor: 1. **Subclass `BaseStepOperator`**: Implement the `launch` method to set up the execution environment and run the entrypoint command. - Ensure the environment has the necessary `pip` dependencies and source code. - Use `info.pipeline.docker_settings` for Docker requirements. 2. **Handle Resources**: If your operator allows per-step resources, manage them using `info.config.resource_settings`. 3. **Configuration Class**: Create a class inheriting from `BaseStepOperatorConfig` for any custom parameters. 4. **Flavor Class**: Inherit from `BaseStepOperatorFlavor`, implementing the `name` property and pointing to your implementation class. 5. 
**Register the Flavor**: Use the CLI to register your flavor: ```shell zenml step-operator flavor register ``` For example: ```shell zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor ``` #### Important Considerations - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - After registration, list available flavors with: ```shell zenml step-operator flavor list ``` #### Workflow Integration - The `CustomStepOperatorFlavor` is used during flavor creation. - The `CustomStepOperatorConfig` validates user inputs during registration. - The `CustomStepOperator` is utilized when the component is in action, allowing separation of configuration and implementation. #### GPU Support For GPU-backed operations, follow the instructions to enable CUDA for optimal performance. This summary encapsulates the essential steps and considerations for developing a custom step operator in ZenML, ensuring clarity and conciseness while retaining critical details. ================================================== === File: docs/book/component-guide/alerters/slack.md === ### Slack Alerter Documentation Summary **Overview**: The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. #### Setup Instructions 1. **Create a Slack App**: - Set up a Slack workspace and create a Slack App with a bot. - Grant the following permissions in the `OAuth & Permissions` tab: - `chat:write` - `channels:read` - `channels:history` - Invite the app to your channel using `/invite` or through channel settings. 2. **Registering Slack Alerter in ZenML**: - Install the Slack integration: ```shell zenml integration install slack -y ``` - Create a secret and register the alerter: ```shell zenml secret create slack_token --oauth_token= zenml alerter register slack_alerter \ --flavor=slack \ --slack_token={{slack_token.oauth_token}} \ --slack_channel_id= ``` - Add the alerter to your stack: ```shell zenml stack register ... -al slack_alerter --set ``` #### Usage 1. **Direct Methods**: - Use `post()` and `ask()` methods: ```python from zenml import pipeline, step from zenml.client import Client @step def post_statement() -> None: Client().active_stack.alerter.post("Step finished!") @step def ask_question() -> bool: return Client().active_stack.alerter.ask("Should I continue?") @pipeline(enable_cache=False) def my_pipeline(): post_statement() ask_question() if __name__ == "__main__": my_pipeline() ``` 2. **Custom Settings**: - Use custom channel IDs: ```python @step(settings={"alerter": {"slack_channel_id": }}) def post_statement() -> None: Client().active_stack.alerter.post("Posting to another channel!") ``` 3. **Advanced Message Formatting**: - Utilize `SlackAlerterParameters` and `SlackAlerterPayload`: ```python from zenml import pipeline, step, get_step_context from zenml.client import Client from zenml.integrations.slack.alerters.slack_alerter import ( SlackAlerterParameters, SlackAlerterPayload ) @step def post_statement() -> None: params = SlackAlerterParameters( payload=SlackAlerterPayload( pipeline_name=get_step_context().pipeline.name, step_name=get_step_context().step_run.name, stack_name=Client().active_stack.name, ), ) Client().active_stack.alerter.post( message="This is a message with additional information.", params=params ) ``` 4. 
**Predefined Steps**: - Use built-in steps for simplicity: ```python from zenml import pipeline from zenml.integrations.slack.steps import ( slack_alerter_post_step, slack_alerter_ask_step ) @pipeline(enable_cache=False) def my_pipeline(): slack_alerter_post_step("Posting a statement.") slack_alerter_ask_step("Asking a question. Should I continue?") if __name__ == "__main__": my_pipeline() ``` For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). ================================================== === File: docs/book/component-guide/alerters/alerters.md === ### Alerters Overview **Alerters** enable sending messages to chat services (e.g., Slack, Discord, Mattermost) from pipelines, facilitating immediate notifications for failures, monitoring, and human-in-the-loop ML. #### Available Alerter Integrations | Alerter | Flavor | Integration | Notes | |---------|----------|-------------|---------------------------------| | Slack | `slack` | `slack` | Interacts with a Slack channel | | Discord | `discord`| `discord` | Interacts with a Discord channel | | Custom | _custom_ | | Extend the alerter abstraction | To view available alerter flavors, use: ```shell zenml alerter flavor list ``` #### Using Alerters with ZenML 1. **Register an Alerter**: ```shell zenml alerter register ... ``` 2. **Add to Stack**: ```shell zenml stack register ... -al ``` 3. **Import and Use**: Import standard steps from the integration to use in your pipelines. ================================================== === File: docs/book/component-guide/alerters/discord.md === ### Discord Alerter Overview The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. It includes two main steps: 1. **`discord_alerter_post_step`**: Posts a message to a Discord channel and returns success status. 2. **`discord_alerter_ask_step`**: Posts a message and waits for user feedback, returning `True` only if the user approves the action. #### Use Cases - Immediate notifications for failures (e.g., model performance issues). - Human-in-the-loop integration for critical steps (e.g., model deployment). ### Requirements To use the `DiscordAlerter`, install the Discord integration: ```shell zenml integration install discord -y ``` ### Setting Up a Discord Bot 1. Create a Discord workspace and channel. 2. Create a Discord App with a bot. Obtain the bot token (reset if needed) and ensure it has permissions to send/receive messages. ### Registering a Discord Alerter Register the `discord` alerter with the following command, replacing placeholders: ```shell zenml alerter register discord_alerter \ --flavor=discord \ --discord_token= \ --default_discord_channel_id= ``` Add the alerter to your stack: ```shell zenml stack register ... -al discord_alerter ``` #### Obtaining IDs - **DISCORD_CHANNEL_ID**: Right-click the channel and select 'Copy Channel ID' (enable Developer Mode if not visible). - **DISCORD_TOKEN**: Follow instructions to set up the bot and find its token. #### Permissions Required for the Bot - Read Messages/View Channels - Send Messages - Send Messages in Threads ### Using the Discord Alerter Import the steps in your pipeline and format messages as needed. 
Example usage: ```python from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step from zenml import step, pipeline @step def my_formatter_step(artifact_to_be_communicated) -> str: return f"Here is my artifact {artifact_to_be_communicated}!" @pipeline def my_pipeline(...): ... artifact_to_be_communicated = ... message = my_formatter_step(artifact_to_be_communicated) approved = discord_alerter_ask_step(message) ... # Subsequent steps may vary based on `approved` if __name__ == "__main__": my_pipeline() ``` For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). ================================================== === File: docs/book/component-guide/alerters/custom.md === ### Develop a Custom Alerter #### Overview To create a custom alerter in ZenML, it's essential to understand the general guide on writing custom component flavors. #### Base Abstraction The base class for alerters, `BaseAlerter`, defines two abstract methods: - `post(message: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message to a chat service, returning `True` if successful. - `ask(question: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message and waits for approval, returning `True` only if approved. ```python class BaseAlerter(StackComponent, ABC): def post(self, message: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True def ask(self, question: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True ``` #### Steps to Create a Custom Alerter 1. **Implement the Alerter Class**: Inherit from `BaseAlerter` and implement the `post()` and `ask()` methods. ```python from typing import Optional from zenml.alerter import BaseAlerter, BaseAlerterStepParameters class MyAlerter(BaseAlerter): def post(self, message: str, config: Optional[BaseAlerterStepParameters]) -> bool: ... return "Hey, I implemented an alerter." def ask(self, question: str, config: Optional[BaseAlerterStepParameters]) -> bool: ... return True ``` 2. **Create a Configuration Class** (if needed): ```python from zenml.alerter.base_alerter import BaseAlerterConfig class MyAlerterConfig(BaseAlerterConfig): my_param: str ``` 3. **Define a Flavor Class**: ```python from typing import Type, TYPE_CHECKING from zenml.alerter import BaseAlerterFlavor if TYPE_CHECKING: from zenml.stack import StackComponent, StackComponentConfig class MyAlerterFlavor(BaseAlerterFlavor): @property def name(self) -> str: return "my_alerter" @property def config_class(self) -> Type[StackComponentConfig]: from my_alerter_config import MyAlerterConfig return MyAlerterConfig @property def implementation_class(self) -> Type[StackComponent]: from my_alerter import MyAlerter return MyAlerter ``` #### Register the Custom Alerter Use the CLI to register your new flavor: ```shell zenml alerter flavor register ``` For example: ```shell zenml alerter flavor register flavors.my_flavor.MyAlerterFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - After registration, list available alerter flavors with: ```shell zenml alerter flavor list ``` #### Workflow Integration - The `MyAlerterFlavor` is used when creating the flavor via CLI. - The `MyAlerterConfig` is utilized during stack component registration for validation. 
- The `MyAlerter` is invoked when the component is in use, allowing separation of configuration and implementation. This structure supports registering flavors and components independently of their implementation dependencies. ================================================== === File: docs/book/component-guide/artifact-stores/azure.md === ### Azure Blob Storage Artifact Store in ZenML The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store artifacts. It is suitable for scenarios where local storage is inadequate, such as when sharing results, handling remote components, or managing large-scale MLOps. #### When to Use Azure Artifact Store - **Team Collaboration**: Share pipeline results with team members or stakeholders. - **Remote Components**: Integrate with remote orchestrators (e.g., Kubeflow). - **Storage Limitations**: Overcome local storage constraints. - **Production Needs**: Support production-grade MLOps. #### Deployment Steps 1. **Install Azure Integration**: ```shell zenml integration install azure -y ``` 2. **Register Azure Artifact Store**: - The root path URI must point to an Azure Blob Storage container in the format `az://container-name` or `abfs://container-name`. ```shell zenml artifact-store register az_store -f azure --path=az://container-name zenml stack register custom_stack -a az_store ... --set ``` #### Authentication Methods - **Implicit Authentication**: Quick local setup without explicit credentials. Set environment variables for Azure account key, connection string, or service principal credentials. - **Azure Service Connector**: Recommended for better security and integration with multiple Azure resources. ```shell zenml service-connector register --type azure -i ``` Non-interactive example: ```shell zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type blob-container --resource-id ``` #### Connecting Azure Artifact Store After setting up an Azure Service Connector: ```shell zenml artifact-store register -f azure --path='az://your-container' zenml artifact-store connect -i ``` Non-interactive connection: ```shell zenml artifact-store connect --connector ``` #### Using ZenML Secrets for Credentials You can create a ZenML Secret to store Azure credentials: ```shell zenml secret create az_secret --account_name='' --account_key='' ``` Then register the artifact store with the secret: ```shell zenml artifact-store register az_store -f azure --path='az://your-container' --authentication_secret=az_secret zenml stack register custom_stack -a az_store ... --set ``` #### Conclusion Using the Azure Artifact Store in ZenML is similar to other artifact stores, with the added benefit of leveraging Azure Blob Storage for scalability and collaboration. For detailed implementation and configuration, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). ================================================== === File: docs/book/component-guide/artifact-stores/s3.md === ### Summary: Storing Artifacts in an AWS S3 Bucket with ZenML **Overview**: The S3 Artifact Store is a ZenML integration that utilizes AWS S3 or compatible services (e.g., MinIO, Ceph RGW) to store artifacts. It is suitable for projects requiring shared access, remote components, or scalable storage solutions. #### When to Use S3 Artifact Store - Share pipeline results with team members or stakeholders. 
- Integrate with remote components (e.g., Kubeflow, Kubernetes). - Overcome local storage limitations. - Handle production-grade MLOps at scale. #### Deployment Steps 1. **Install S3 Integration**: ```shell zenml integration install s3 -y ``` 2. **Register S3 Artifact Store**: - Required: S3 bucket URI (format: `s3://bucket-name`). ```shell zenml artifact-store register s3_store -f s3 --path=s3://bucket-name zenml stack register custom_stack -a s3_store ... --set ``` 3. **Authentication**: - **Implicit Authentication**: Quick local setup using AWS CLI credentials. - **AWS Service Connector (Recommended)**: For better security and access control. ```shell zenml service-connector register --type aws -i zenml service-connector register --type aws --resource-type s3-bucket --resource-name --auto-configure ``` 4. **Connect Artifact Store**: ```shell zenml artifact-store connect -i ``` 5. **Using ZenML Secret**: - Store AWS access key in a ZenML secret for authentication. ```shell zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret ``` #### Advanced Configuration - Customize connection using `client_kwargs`, `config_kwargs`, and `s3_additional_kwargs` for parameters like `endpoint_url` and `ServerSideEncryption`. ```shell zenml artifact-store register minio_store -f s3 --path='s3://minio_bucket' --authentication_secret=s3_secret --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}' ``` #### Usage Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. For detailed information, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/local.md === ### Local Artifact Store The Local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) that stores artifacts in a local filesystem folder. #### Use Cases - Ideal for beginners or experimental phases with ZenML. - No need for additional resources or managed object-store services (e.g., Amazon S3, Google Cloud Storage). - Not suitable for production due to limitations in sharing, accessibility, and lack of features like high availability and scalability. #### Limitations - Only compatible with local Orchestrators (e.g., local, local Kubeflow, local Kubernetes) and local Model Deployers (e.g., MLflow). - Step Operators cannot be used with a local Artifact Store as they require remote environments. #### Deployment The default ZenML stack includes a local Artifact Store: ```shell $ zenml stack list $ zenml artifact-store describe ``` Artifacts are stored at the path specified in the output (e.g., `/home/stefan/.config/zenml/local_stores/...`). You can create additional local Artifact Store instances: ```shell # Register a local artifact store zenml artifact-store register custom_local --flavor local # Register and set a stack with the new artifact store zenml stack register custom_stack -o default -a custom_local --set ``` **Note:** The local Artifact Store accepts a `path` parameter during registration, but using the default path is recommended to avoid issues with local stack component dependencies. 
For further details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). #### Usage Using the local Artifact Store is similar to using any other Artifact Store flavor. ================================================== === File: docs/book/component-guide/artifact-stores/gcp.md === ### Google Cloud Storage (GCS) Artifact Store The GCS Artifact Store is a component of ZenML that utilizes Google Cloud Storage (GCS) for storing ZenML artifacts. It is suitable for scenarios where local storage is inadequate, such as when sharing results, integrating with remote components, or scaling MLOps pipelines. #### When to Use GCS Artifact Store - Sharing pipeline results with team members or stakeholders. - Integrating with remote components (e.g., Kubeflow, Kubernetes). - Overcoming local storage limitations. - Handling production-grade MLOps at scale. #### Deployment Steps 1. **Install GCP Integration**: ```shell zenml integration install gcp -y ``` 2. **Register GCS Artifact Store**: The mandatory configuration is the GCS bucket URI in the format `gs://bucket-name`. ```shell zenml artifact-store register gs_store -f gcp --path=gs://bucket-name zenml stack register custom_stack -a gs_store ... --set ``` #### Authentication Methods - **Implicit Authentication**: Quick setup using local Google Cloud CLI credentials. Requires the CLI to be installed. Limited functionality with remote components. - **GCP Service Connector (Recommended)**: Provides better security and configuration management. Register a service connector: ```shell zenml service-connector register --type gcp -i ``` For a specific GCS bucket: ```shell zenml service-connector register --type gcp --resource-type gcs-bucket --resource-name --auto-configure ``` #### Connecting GCS Artifact Store After setting up the service connector, register and connect the GCS Artifact Store: ```shell zenml artifact-store register -f gcp --path='gs://your-bucket' zenml artifact-store connect -i ``` For non-interactive connection: ```shell zenml artifact-store connect --connector ``` #### Using GCP Credentials You can use a GCP Service Account Key stored in a ZenML secret for authentication: 1. Create a GCP Service Account and key. 2. Store the key in a ZenML secret: ```shell zenml secret create gcp_secret --token=@path/to/service_account_key.json ``` 3. Register the GCS Artifact Store with the secret: ```shell zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret ``` #### Usage Using the GCS Artifact Store is similar to other Artifact Store flavors, with artifacts stored in GCP Cloud Storage. For detailed information, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/artifact-stores.md === # Artifact Stores Overview ## Description The Artifact Store is a critical component in an MLOps stack, serving as the data persistence layer for artifacts generated or ingested by machine learning pipelines. ZenML automatically serializes and saves various artifacts (datasets, models, validation reports) in the Artifact Store, enabling features like caching, lineage tracking, and reproducibility. ### Key Points - **Materializers**: Determine how artifacts are serialized, deserialized, and stored. 
Most default Materializers use the active Stack's Artifact Store. - **Custom Storage**: Users can create custom Materializers for specific artifact types or extend the Artifact Store abstraction for different storage backends. - **Integration**: The Artifact Store can also support specialized components, like the Great Expectations Data Validator. ## Usage The Artifact Store must be configured in all ZenML stacks to store artifacts from pipeline runs. ### Artifact Store Flavors ZenML provides several built-in Artifact Stores: | Store Type | Flavor | Integration | URI Schema | Notes | |------------|--------|-------------|-------------|-------| | Local | `local`| Built-in | None | Default, for local filesystem use only. | | Amazon S3 | `s3` | `s3` | `s3://` | Uses AWS S3 for object storage. | | Google Cloud Storage | `gcp` | `gcp` | `gs://` | Uses Google Cloud Storage. | | Azure | `azure`| `azure` | `abfs://`, `az://` | Uses Azure Blob Storage. | | Custom | _custom_| | _custom_ | Extend the Artifact Store abstraction. | To list available flavors: ```shell zenml artifact-store flavor list ``` ### Registering an Artifact Store Each Artifact Store requires a `path` attribute: ```shell zenml artifact-store register s3_store -f s3 --path s3://my_bucket ``` ## Interacting with the Artifact Store Typically, users interact with higher-level APIs to store and retrieve artifacts. However, direct interaction with the low-level Artifact Store API is necessary for custom Materializers or storing custom objects. ### Artifact Store API All ZenML Artifact Stores implement a standard IO API, resembling a file system. Access is facilitated through: - `zenml.io.fileio`: Low-level utilities for manipulating Artifact Store objects. - `zenml.utils.io_utils`: Higher-level utilities for transferring objects between the Artifact Store and local storage. ### Example Code 1. **Writing to the Artifact Store**: ```python import os from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") fileio.makedirs(os.path.dirname(artifact_uri)) with fileio.open(artifact_uri, "w") as f: f.write("example artifact") ``` 2. **Reading from the Artifact Store**: ```python from zenml.client import Client from zenml.utils import io_utils root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") artifact_contents = io_utils.read_file_contents_as_string(artifact_uri) ``` 3. **Using Temporary Files**: ```python import tempfile import os from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json") with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=True) as f: # Save to temporary file and copy to artifact store fileio.copy(f.name, artifact_uri) ``` This summary captures the essential details of setting up and using the Artifact Store in ZenML, including its purpose, configuration, and interaction methods. ================================================== === File: docs/book/component-guide/artifact-stores/custom.md === ### Summary: Developing a Custom Artifact Store in ZenML ZenML provides built-in Artifact Store implementations for local and cloud storage (AWS, GCP, Azure). To create a custom Artifact Store, follow these steps: #### Base Abstraction The `BaseArtifactStore` class is central to ZenML's stack architecture. 
Key components include: 1. **Configuration Parameter**: The `path` parameter indicates the root path of the artifact store. 2. **Supported Schemes**: The `SUPPORTED_SCHEMES` variable must be defined in subclasses to specify supported file path schemes (e.g., `{"abfs://", "az://"}` for Azure). 3. **Abstract Methods**: Implement the following abstract methods in your Artifact Store flavor: - `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`. #### Example Implementation ```python from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig from typing import Any, List, Set, Tuple, Type, Union PathType = Union[bytes, str] class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: Set[str] class BaseArtifactStore(StackComponent): @abstractmethod def open(self, name: PathType, mode: str = "r") -> Any: ... @abstractmethod def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ... @abstractmethod def exists(self, path: PathType) -> bool: ... @abstractmethod def glob(self, pattern: PathType) -> List[PathType]: ... @abstractmethod def isdir(self, path: PathType) -> bool: ... @abstractmethod def listdir(self, path: PathType) -> List[PathType]: ... @abstractmethod def makedirs(self, path: PathType) -> None: ... @abstractmethod def mkdir(self, path: PathType) -> None: ... @abstractmethod def remove(self, path: PathType) -> None: ... @abstractmethod def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ... @abstractmethod def rmtree(self, path: PathType) -> None: ... @abstractmethod def stat(self, path: PathType) -> Any: ... @abstractmethod def walk(self, top: PathType, topdown: bool = True, onerror: Optional[Callable[..., None]] = None) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]: ... class BaseArtifactStoreFlavor(Flavor): @property @abstractmethod def name(self) -> Type["BaseArtifactStore"]: ... @property def type(self) -> StackComponentType: return StackComponentType.ARTIFACT_STORE @property def config_class(self) -> Type[StackComponentConfig]: return BaseArtifactStoreConfig @property @abstractmethod def implementation_class(self) -> Type["BaseArtifactStore"]: ... ``` #### Registering Your Custom Artifact Store 1. Create a class inheriting from `BaseArtifactStore` and implement the abstract methods. 2. Create a class inheriting from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`. 3. Inherit from `BaseArtifactStoreFlavor` to combine both classes. Register your flavor using: ```shell zenml artifact-store flavor register ``` Example: ```shell zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor ``` #### Important Considerations - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - The custom artifact store must support authentication without relying on local environment variables, especially for deployed instances. - Install necessary dependencies in the deployment environment. #### Visualizations ZenML saves visualizations alongside artifacts. Ensure your custom store can authenticate and access these artifacts to display visualizations in the ZenML dashboard. For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). 
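As a rough illustration of steps 1-3, the sketch below wires the three classes together for a hypothetical `my://` scheme that simply maps onto the local filesystem. The class names and the top-level import path are assumptions for illustration; a real flavor would target an actual storage backend:

```python
import glob
import os
import shutil
from typing import Any, Callable, ClassVar, Iterable, List, Optional, Set, Tuple, Type, Union

from zenml.artifact_stores import (  # assumed import path, see the SDK docs
    BaseArtifactStore,
    BaseArtifactStoreConfig,
    BaseArtifactStoreFlavor,
)

PathType = Union[bytes, str]


class MyArtifactStoreConfig(BaseArtifactStoreConfig):
    """Config declaring which path schemes this store handles."""
    SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"my://"}


class MyArtifactStore(BaseArtifactStore):
    """Illustrative store that maps 'my://' paths onto the local filesystem."""

    def _local(self, path: PathType) -> str:
        # Translate the custom scheme into a plain local path.
        return str(path).replace("my://", "/", 1)

    def open(self, name: PathType, mode: str = "r") -> Any:
        return open(self._local(name), mode)

    def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None:
        shutil.copyfile(self._local(src), self._local(dst))

    def exists(self, path: PathType) -> bool:
        return os.path.exists(self._local(path))

    def glob(self, pattern: PathType) -> List[PathType]:
        return glob.glob(self._local(pattern))

    def isdir(self, path: PathType) -> bool:
        return os.path.isdir(self._local(path))

    def listdir(self, path: PathType) -> List[PathType]:
        return os.listdir(self._local(path))

    def makedirs(self, path: PathType) -> None:
        os.makedirs(self._local(path), exist_ok=True)

    def mkdir(self, path: PathType) -> None:
        os.mkdir(self._local(path))

    def remove(self, path: PathType) -> None:
        os.remove(self._local(path))

    def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None:
        os.replace(self._local(src), self._local(dst))

    def rmtree(self, path: PathType) -> None:
        shutil.rmtree(self._local(path))

    def stat(self, path: PathType) -> Any:
        return os.stat(self._local(path))

    def walk(
        self,
        top: PathType,
        topdown: bool = True,
        onerror: Optional[Callable[..., None]] = None,
    ) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]:
        yield from os.walk(self._local(top), topdown=topdown, onerror=onerror)


class MyArtifactStoreFlavor(BaseArtifactStoreFlavor):
    """Ties the config and implementation together under a flavor name."""

    @property
    def name(self) -> str:
        return "my_artifact_store"

    @property
    def config_class(self) -> Type[MyArtifactStoreConfig]:
        return MyArtifactStoreConfig

    @property
    def implementation_class(self) -> Type[MyArtifactStore]:
        return MyArtifactStore
```

With these classes in your repository, the flavor can then be registered via `zenml artifact-store flavor register` as shown above.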
================================================== === File: docs/book/component-guide/feature-stores/feature-stores.md === # Feature Stores Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between both. They provide a centralized registry for features and feature schemas, addressing the challenge of train-serve skew, where training and inference data diverge. ### When to Use It Feature stores are optional components in the ZenML Stack, suitable for: - Productionalizing new features - Reusing existing features across pipelines and models - Ensuring consistency between training and serving data - Providing a central registry for features and schemas ### Available Feature Stores ZenML integrates with various feature stores, notably: | Feature Store | Flavor | Integration | Notes | |------------------------------|---------|-------------|-----------------------------------------------| | [FeastFeatureStore](feast.md)| `feast` | `feast` | Connects ZenML with existing Feast | | [Custom Implementation](custom.md) | _custom_ | | Allows for custom feature store implementations| To view available feature store flavors, use: ```shell zenml feature-store flavor list ``` ### How to Use It The feature store implementation in ZenML is based on the Feast integration. For usage details, refer to the [Feast documentation](feast.md#how-do-you-use-it). ================================================== === File: docs/book/component-guide/feature-stores/feast.md === ### Summary of Feast Feature Store Documentation **Feast Overview** Feast (Feature Store) is an operational data system designed for managing and serving machine learning features to production models. It supports both low-latency online stores for real-time predictions and offline stores for batch scoring or model training. **Use Cases** Feast enables: - Access to offline/batch data for model training. - Access to online data during inference. **Deployment** To deploy Feast with ZenML: 1. Ensure you have a Feast feature store. 2. Install the Feast integration in ZenML: ```shell zenml integration install feast ``` 3. Register your feature store: ```shell zenml feature-store register feast_store --flavor=feast --feast_repo="" zenml stack register ... 
-f feast_store ``` **Usage** To retrieve features from a registered feature store, create a step that interfaces with Feast: ```python from datetime import datetime import pandas as pd from zenml import pipeline, step from zenml.client import Client from zenml.exceptions import DoesNotExistException @step def get_historical_features(entity_dict, features, full_feature_names=False) -> pd.DataFrame: feature_store = Client().active_stack.feature_store if not feature_store: raise DoesNotExistException("Feast feature store component is not available.") entity_dict["event_timestamp"] = [datetime.fromisoformat(val) for val in entity_dict["event_timestamp"]] entity_df = pd.DataFrame.from_dict(entity_dict) return feature_store.get_historical_features(entity_df=entity_df, features=features, full_feature_names=full_feature_names) entity_dict = { "driver_id": [1001, 1002, 1003], "label_driver_reported_satisfaction": [1, 5, 3], "event_timestamp": [ datetime(2021, 4, 12, 10, 59, 42).isoformat(), datetime(2021, 4, 12, 8, 12, 10).isoformat(), datetime(2021, 4, 12, 16, 40, 26).isoformat(), ], } features = [ "driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", "driver_hourly_stats:avg_daily_trips", ] @pipeline def my_pipeline(): my_features = get_historical_features(entity_dict, features) ... ``` **Limitations** - Online data retrieval is supported locally but not in deployed models. - Pydantic limitations restrict the use of complex data types like `DataFrame` and `datetime`. For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). ================================================== === File: docs/book/component-guide/feature-stores/custom.md === ### Develop a Custom Feature Store **Overview**: Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between the two. They also provide a centralized registry for features and feature schemas for team or organizational use. **Important Note**: The base abstraction for feature stores is currently in development, and extensions are not possible at this time. Users are encouraged to refer to the list of available feature stores for immediate use. **Additional Resource**: Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts. ================================================== === File: docs/book/component-guide/annotators/annotators.md === ### Summary of Annotators in ZenML **Overview**: Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They support iterative annotation processes, integrating labeling into the ML lifecycle. **Key Use Cases**: 1. **Initial Data Labeling**: Start with unlabeled data to bootstrap models; label data, train models, and iteratively improve labels. 2. **Ongoing Labeling**: Regularly annotate new incoming data to maintain model accuracy and adapt to changes. 3. **Inference Samples**: Store and label predictions on real-world data for comparison and potential retraining. 4. **Ad Hoc Interventions**: Identify and correct bad labels or address class imbalances through targeted annotation. **Core Features**: - Seamless integration of labels in training steps. - Versioning of annotation data. - Conversion of annotation data to/from custom formats.
- Generation of UI config files for web annotation tools. **Available Annotators**: ZenML supports various annotators through integrations: | Annotator | Flavor | Integration | Notes | |-----------|--------|-------------|-------| | ArgillaAnnotator | `argilla` | `argilla` | Connects ZenML with Argilla | | LabelStudioAnnotator | `label_studio` | `label_studio` | Connects ZenML with Label Studio | | PigeonAnnotator | `pigeon` | `pigeon` | For image/text classification in notebooks only | | ProdigyAnnotator | `prodigy` | `prodigy` | Connects ZenML with Prodigy | | Custom Implementation | _custom_ | | Extend the annotator abstraction | To view available annotator flavors: ```shell zenml annotator flavor list ``` **Usage**: The annotator implementation is based on the Label Studio integration. For detailed usage, refer to the [Label Studio documentation](label-studio.md#how-do-you-use-it). Note that Pigeon has limited functionality. **Terminology**: - ZenML uses "Dataset" to refer to a collection of annotations/tasks, aligning with common usage but differing from Label Studio's "Project". - The term "tasks" is used for the combination of an annotation and its source data, consistent across ZenML and Label Studio. This concise overview encapsulates the essential information about the annotators in ZenML, enabling effective understanding and application in ML workflows. ================================================== === File: docs/book/component-guide/annotators/prodigy.md === ### Prodigy Overview Prodigy is a paid annotation tool for creating training and evaluation data for machine learning models. It allows for data inspection, cleaning, error analysis, and the development of rule-based systems. The Prodigy Python library offers pre-built workflows and command-line commands, enabling customization of data loading, annotation questions, and front-end behavior. ### Usage Context Prodigy is useful when labeling data as part of a machine learning workflow. It can be integrated into a ZenML stack as an optional annotator component. ### Deployment Steps 1. **Install Prodigy**: Requires a license and the `urllib3<2` dependency. Refer to the [Prodigy installation guide](https://prodi.gy/docs/install). 2. **Register Prodigy with ZenML**: ```shell zenml integration export-requirements --output-file prodigy-requirements.txt prodigy zenml annotator register prodigy --flavor prodigy ``` Optionally, use `--custom_config_path` to specify a custom configuration file. 3. **Update ZenML Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -an prodigy zenml stack set annotation ``` ### Using Prodigy Prodigy does not require pre-starting the annotator. Use it according to the [Prodigy documentation](https://prodi.gy) and access labeled data via ZenML's Python methods or CLI commands. 
- **List Datasets**: ```shell zenml annotator dataset list ``` - **Annotate Dataset**: ```shell zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" ``` ### Importing Annotations in ZenML In a ZenML step, you can import annotations as follows: ```python from typing import List, Dict, Any from zenml import step from zenml.client import Client @step def import_annotations() -> List[Dict[str, Any]]: zenml_client = Client() annotations = zenml_client.active_stack.annotator.get_labeled_data(dataset_name="my_dataset") return annotations ``` ### Prodigy Annotator Component The Prodigy annotator component inherits from `BaseAnnotator`, requiring core methods for dataset registration and annotation export. It supports additional Prodigy-specific functionalities for dataset management and annotation retrieval. ================================================== === File: docs/book/component-guide/annotators/label-studio.md === ### Label Studio Integration with ZenML **Overview**: Label Studio is an open-source annotation platform for various data types, including computer vision, audio, text, and time series. It is used to create datasets for machine learning (ML) workflows. **Supported Annotation Types**: - **Computer Vision**: Image classification, object detection, semantic segmentation - **Audio & Speech**: Classification, speaker diarization, emotion recognition, transcription - **Text/NLP**: Classification, Named Entity Recognition (NER), question answering, sentiment analysis - **Time Series**: Classification, segmentation, event recognition - **Multi-Modal**: Dialogue processing, OCR, time series with reference **Deployment Requirements**: - Label Studio requires a cloud artifact store (AWS S3, GCP/GCS, Azure Blob Storage). Local stacks are not supported for annotation. **Installation**: To install the Label Studio integration: ```shell zenml integration install label_studio ``` **Setting Up Label Studio**: 1. Clone the repository and start a local instance: ```shell git clone https://github.com/HumanSignal/label-studio.git cd label-studio docker-compose up -d ``` 2. Access the web interface at [http://localhost:8080/](http://localhost:8080/) to obtain your API key from the user account settings. 3. Register the API key: ```shell zenml secret create label_studio_secrets --api_key="" ``` 4. Register the annotator: ```shell zenml annotator register label_studio --flavor label_studio --authentication_secret="label_studio_secrets" --port=8080 ``` **Stack Configuration**: To create and set the annotation stack: ```shell zenml stack copy default annotation zenml stack update annotation -a zenml stack update annotation -an zenml stack set annotation ``` **Usage**: - List datasets: ```shell zenml annotator dataset list ``` - Annotate a dataset: ```shell zenml annotator dataset annotate ``` **Key Components**: - **Label Studio Annotator**: Inherits from `BaseAnnotator` and includes methods for dataset registration, annotation export, and starting the annotator daemon. - **Standard Steps**: - `LabelStudioDatasetRegistrationConfig`: For registering datasets. - `LabelStudioDatasetSyncConfig`: For syncing datasets. - `get_or_create_dataset`: Registers or retrieves a dataset. - `get_labeled_data`: Retrieves labeled data in Label Studio format. - `sync_new_data_to_label_studio`: Ensures annotations are synced with the cloud artifact store. 
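If you prefer to pull annotations directly inside a step rather than through the standard steps above, the client pattern used for the other annotator flavors in this guide should carry over; the sketch below is an assumption along those lines, and the dataset name is purely illustrative:

```python
from typing import Any, List

from zenml import step
from zenml.client import Client

@step
def import_label_studio_annotations() -> List[Any]:
    # Fetch the Label Studio annotator from the active stack and export
    # the labeled tasks for an (illustrative) dataset name.
    annotator = Client().active_stack.annotator
    return annotator.get_labeled_data(dataset_name="my_image_dataset")
```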
**Helper Functions**: ZenML provides functions to generate label config strings for object detection, image classification, and OCR. These are essential for defining the annotation interface. For more details, refer to the [Label Studio documentation](https://labelstud.io/guide/tasks.html) and the [ZenML GitHub repository](https://github.com/zenml-io/zenml). ================================================== === File: docs/book/component-guide/annotators/argilla.md === ### Argilla Overview Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets, enhancing the MLOps cycle from data labeling to model monitoring. It emphasizes human-in-the-loop approaches, distinguishing itself from competitors. ### Use Cases Argilla is useful for labeling textual data within ML workflows. It supports annotation at various stages and can be deployed locally or on platforms like Hugging Face Spaces. ### Deployment To deploy Argilla with ZenML, install the integration: ```shell zenml integration install argilla ``` Register the annotator, preferably using a secret for security: ```shell zenml secret create argilla_secrets --api_key="" zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --port=6900 ``` For a deployed instance, specify the instance URL and headers if using a private Hugging Face Space: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --headers='{"Authorization": "Bearer {[your_hugging_face_token]}"}' ``` Add components to a stack and set it as active: ```shell zenml stack copy default annotation zenml stack update annotation -an zenml stack set annotation ``` Verify with: ```shell zenml annotator dataset list ``` ### Usage Access data and annotations via the CLI: - List datasets: `zenml annotator dataset list` - Annotate a dataset: `zenml annotator dataset annotate ` ### Argilla Annotator Component The Argilla annotator inherits from `BaseAnnotator`, requiring core methods for dataset registration and state management. It supports dataset registration, annotation export, and starting the annotator daemon. ### Argilla Annotator SDK To use the SDK in Python: ```python from zenml.client import Client client = Client() annotator = client.active_stack.annotator # List dataset names dataset_names = annotator.get_dataset_names() # Get a specific dataset dataset = annotator.get_dataset("dataset_name") # Get annotations for a dataset annotations = annotator.get_labeled_data(dataset_name="dataset_name") ``` For more details, refer to the [Argilla documentation](https://docs.argilla.io/en/latest/). ================================================== === File: docs/book/component-guide/annotators/pigeon.md === ### Pigeon Annotation Tool Overview **Pigeon** is a lightweight, open-source annotation tool for labeling data within Jupyter notebooks, supporting: - Text Classification - Image Classification - Text Captioning #### Use Cases Pigeon is ideal for: - Small to medium-sized datasets - Quick labeling tasks - Iterative labeling during ML project exploration - Collaborative labeling in Jupyter notebooks #### Deployment Steps 1. **Install Pigeon Integration**: ```shell zenml integration install pigeon ``` 2. **Register the Annotator**: ```shell zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir" ``` 3. 
**Update Your Stack**: ```shell zenml stack update --annotator pigeon ``` #### Usage After registration, access the annotator in your Jupyter notebook: - **Text Classification**: ```python from zenml.client import Client annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['I love this movie', 'I was really disappointed by the book'], options=['positive', 'negative'] ) ``` - **Image Classification**: ```python from zenml.client import Client from IPython.display import display, Image annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['/path/to/image1.png', '/path/to/image2.png'], options=['cat', 'dog'], display_fn=lambda filename: display(Image(filename)) ) ``` #### Dataset Management Commands - `zenml annotator dataset list` - List datasets - `zenml annotator dataset delete ` - Delete a dataset - `zenml annotator dataset stats ` - Get dataset statistics #### Output Annotations are saved as JSON files in the specified output directory, with each file named after its dataset. #### Acknowledgements Pigeon was developed by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License and has been updated for compatibility with recent `ipywidgets` versions. ================================================== === File: docs/book/component-guide/annotators/custom.md === ### Develop a Custom Annotator Before developing a custom annotator, refer to the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts. **Overview:** Annotators are stack components in ZenML that facilitate data annotation within your pipelines. You can use the CLI to launch annotation, configure datasets, and retrieve statistics on labeled tasks. **Important Note:** - **Base Abstraction in Progress:** The base abstraction for annotators is under development and not yet available for extension. For immediate use, check the list of existing feature stores. This documentation provides essential insights for implementing custom annotators in ZenML while highlighting current limitations. ================================================== === File: docs/book/component-guide/model-deployers/vllm.md === ### vLLM: Deploying Your LLM Locally **Overview:** [vLLM](https://docs.vllm.ai/en/latest/) is a library designed for efficient LLM inference and serving, providing features like continuous batching, quantization (GPTQ, AWQ, INT4, INT8, FP8), and advanced techniques such as PagedAttention and speculative decoding. **When to Use vLLM:** - Deploy large language models with high throughput. - Create an OpenAI-compatible API server. **Deployment Steps:** 1. **Install vLLM Integration:** ```bash zenml integration install vllm -y ``` 2. **Register the Model Deployer:** ```bash zenml model-deployer register vllm_deployer --flavor=vllm ``` This sets up a local vLLM server running as a daemon. 
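The deployer also needs to be part of the stack your pipelines run on. A minimal example following the same CLI pattern used for other model deployers in this guide (the stack name is illustrative):

```bash
# Add the vLLM model deployer to an existing stack (stack name is illustrative).
zenml stack update my_local_stack --model-deployer=vllm_deployer
```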
**Usage Example:** To deploy an LLM, you can use the `vllm_model_deployer_step` in your pipeline: ```python from zenml import pipeline from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "GPT2"]: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` **Configuration Options for `VLLMDeploymentService`:** - `model`: Name or path of the Hugging Face model. - `tokenizer`: Name or path of the Hugging Face tokenizer (defaults to model name). - `served_model_name`: API model name (defaults to model argument). - `trust_remote_code`: Trust remote code from Hugging Face. - `tokenizer_mode`: Options: ['auto', 'slow', 'mistral']. - `dtype`: Data type for model weights (options: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']). - `revision`: Specific model version (branch name, tag, or commit id; defaults to latest). For a practical example, refer to the [deployment pipeline](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25) using a GPT-2 model. ================================================== === File: docs/book/component-guide/model-deployers/huggingface.md === ### Summary: Deploying Models to Hugging Face Inference Endpoints **Overview:** Hugging Face Inference Endpoints allow for secure and efficient deployment of `transformers`, `sentence-transformers`, and `diffusers` models on managed, autoscaling infrastructure. This service simplifies the deployment process, eliminating the need for container and GPU management. **Use Cases:** - Deploy models on dedicated infrastructure. - Utilize a fully-managed production solution for inference. - Create production-ready APIs with minimal MLOps involvement. - Ensure cost-effectiveness by paying only for used resources. - Maintain enterprise security with offline endpoints connected to VPCs. **Installation:** To deploy models, install the Hugging Face ZenML integration: ```bash zenml integration install huggingface -y ``` **Registering the Model Deployer:** Register the Hugging Face model deployer with ZenML: ```bash zenml model-deployer register --flavor=huggingface --token= --namespace= ``` - `token`: Hugging Face authentication token. - `namespace`: Username or organization name for endpoint creation. **Updating the Stack:** Update your ZenML stack to include the model deployer: ```bash zenml stack update --model-deployer= ``` **Deployment Mechanisms:** 1. Use the pre-built `huggingface_model_deployer_step` for model deployment. 2. Perform batch inference using `HuggingFaceDeploymentService`. 
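As a note on the registration step above: rather than passing the Hugging Face token in plain text, it can be stored in a ZenML secret and referenced at registration time, following the same secret-reference pattern used by other components in this guide (secret and component names are illustrative):

```bash
# Store the credentials in a ZenML secret and reference them when registering the deployer.
zenml secret create huggingface_creds --token=<YOUR_HF_TOKEN> --namespace=<YOUR_HF_NAMESPACE>
zenml model-deployer register hf_endpoint_deployer --flavor=huggingface \
    --token={{huggingface_creds.token}} --namespace={{huggingface_creds.namespace}}
```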
**Example Deployment Pipeline:** ```python from zenml import pipeline from zenml.config import DockerSettings from zenml.integrations.huggingface.services import HuggingFaceServiceConfig from zenml.integrations.huggingface.steps import huggingface_model_deployer_step docker_settings = DockerSettings(required_integrations=["huggingface"]) @pipeline(enable_cache=True, settings={"docker": docker_settings}) def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): service_config = HuggingFaceServiceConfig(model_name=model_name) huggingface_model_deployer_step(service_config=service_config, timeout=timeout) ``` **Configurable Attributes in `HuggingFaceServiceConfig`:** - `model_name`: Name of the model. - `endpoint_name`: Inference endpoint name (prefixed with `zenml-`). - `repository`: Repository name in Hugging Face. - `framework`: ML framework (e.g., `"pytorch"`). - `accelerator`: Hardware for inference (e.g., `"cpu"`). - `instance_size` and `type`: Size and type of instance for hosting. - `region`: Cloud region for the endpoint. - `vendor`: Cloud provider (e.g., `"aws"`). - `token`: Hugging Face authentication token. - `min_replica` and `max_replica`: Scaling configurations. - `task`: Supported ML task (e.g., `"text-classification"`). - `endpoint_type`: Type of endpoint (`"protected"`, `"public"`, or `"private"`). **Running Inference on a Deployed Endpoint:** Example of running inference: ```python from zenml import step, pipeline from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer from zenml.integrations.huggingface.services import HuggingFaceDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> HuggingFaceDeploymentService: model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name, pipeline_step_name, model_name, running) if not existing_services: raise RuntimeError("No running inference endpoint found.") return existing_services[0] @step def predictor(service: HuggingFaceDeploymentService, data: str) -> str: return service.predict(data) @pipeline def huggingface_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "huggingface_model_deployer_step"): inference_data = ... model_deployment_service = prediction_service_loader(pipeline_name, pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957). ================================================== === File: docs/book/component-guide/model-deployers/databricks.md === ### Summary: Deploying Models to Databricks Inference Endpoints **Overview**: Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs, with managed and autoscaling infrastructure. It allows seamless switching between MLflow and Databricks Model Deployers without altering pipeline code. **When to Use**: Use Databricks Model Deployer if: - You are using Databricks for data and ML workloads. - You want to deploy models without managing containers or GPUs. - You require enterprise security with offline endpoints connected to your VPC. 
- You aim to create production-ready APIs with minimal MLOps involvement. **Deployment Steps**: 1. **Install Databricks Integration**: ```bash zenml integration install databricks -y ``` 2. **Register Model Deployer**: ```bash zenml model-deployer register --flavor=databricks --host= --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` *Note*: Create a Databricks service account for permissions and generate `client_id` and `client_secret` for authentication. 3. **Update Stack**: ```bash zenml stack update --model-deployer= ``` **Configuration Options** (`DatabricksServiceConfig`): - `model_name`: Name of the model in the Databricks Model Registry. - `model_version`: Version of the model. - `workload_size`: Size of the workload (`Small`, `Medium`, `Large`). - `scale_to_zero_enabled`: Enable/disable scale to zero feature. - `env_vars`: Environment variables for the model serving container. - `workload_type`: Type of workload (`CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, `MULTIGPU_MEDIUM`). - `endpoint_secret_name`: Secret for endpoint security. **Inference Example**: To run inference on a provisioned endpoint: ```python from zenml import step, pipeline from zenml.integrations.databricks.model_deployers import DatabricksModelDeployer from zenml.integrations.databricks.services import DatabricksDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> DatabricksDeploymentService: model_deployer = DatabricksModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running) if not existing_services: raise RuntimeError(f"No running endpoint found for model '{model_name}'.") return existing_services[0] @step def predictor(service: DatabricksDeploymentService, data: str) -> str: return service.predict(data) @pipeline def databricks_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "databricks_model_deployer_step"): inference_data = ... model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers). ================================================== === File: docs/book/component-guide/model-deployers/model-deployers.md === ### Model Deployers Overview **Model Deployment** is the process of making machine learning models available for predictions on real-world data. There are two main types of predictions: - **Batch Prediction**: Generates predictions for large datasets at once. - **Real-Time Prediction**: Generates predictions for single data points. **Model Deployers** are stack components responsible for serving models either in real-time or batch mode. They enable online serving through managed web services and API endpoints (HTTP or GRPC) for low-latency responses. Batch inference is useful for generating predictions for large datasets, typically stored in files or databases. ### Use Cases Model deployers are optional in the ZenML stack, mainly used for real-time inference in development or production environments (Kubernetes or cloud). 
They facilitate continuous training and deployment pipelines. ### Model Deployer Flavors ZenML offers various model deployers: - **MLflow**: Local deployment. - **BentoML**: Local or production deployment. - **Seldon Core**: Production deployment on Kubernetes. - **Hugging Face**: Deployment on Hugging Face Inference Endpoints. - **Databricks**: Deployment to Databricks Inference Endpoints. - **vLLM**: Local deployment of LLMs. - **Custom Implementation**: Extend the base model deployer abstraction for custom deployments. **Configuration Example**: ```shell # MLflow model deployer configuration zenml model-deployer register mlflow --flavor=mlflow # Seldon Core model deployer configuration zenml model-deployer register seldon --flavor=seldon \ --kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \ --base_url=http:// ``` ### Role in ZenML Stack - **Seamless Deployment**: Deploys models across various environments efficiently. - **Lifecycle Management**: Manages model server lifecycles (start, stop, delete, update). **Core Methods**: - `deploy_model`: Deploys a model and returns a Service object. - `find_model_server`: Retrieves deployed model servers. - `stop_model_server`, `start_model_server`, `delete_model_server`: Manage model server states. **Service Object**: Represents a deployed model server with `config` (deployment attributes) and `status` (operational status). ### Interaction with Model Deployer After deployment, interact via CLI: ```shell # List models zenml model-deployer models list # Describe a model zenml model-deployer models describe # Get prediction URL zenml model-deployer models get-url # Delete a model zenml model-deployer models delete ``` **Python Example**: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") deployer_step = pipeline_run.steps[""] deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value ``` ### Continuous Deployment ZenML integrations provide standard pipeline steps for continuous model deployment, managing deployment aspects and saving configurations in the Artifact Store for future use. ================================================== === File: docs/book/component-guide/model-deployers/bentoml.md === ### Summary: Deploying Models Locally with BentoML **BentoML Overview** BentoML is an open-source framework for serving machine learning models, allowing deployment locally, in the cloud, or on Kubernetes. The BentoML Model Deployer is a component that facilitates the deployment and management of BentoML models on a local HTTP server. **Deployment Options** - **Local Development & Production**: Use the BentoML Model Deployer for straightforward model deployment. - **Kubernetes & Cloud**: Tools like Yatai and the deprecated `bentoctl` assist in deploying models to cloud platforms such as AWS and Google Cloud. **Installation** To get started, install the necessary Python packages: ```bash zenml integration install bentoml -y ``` Register the BentoML model deployer: ```bash zenml model-deployer register bentoml_deployer --flavor=bentoml ``` **Usage Flow** 1. **Create a BentoML Service**: Define how your model will be served. 2. **Build a Bento**: Use `bento_builder_step` or build manually. 3. **Deploy the Bundle**: Use `bentoml_model_deployer_step`.
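Before wiring these three steps into ZenML, it can help to sanity-check the service with the plain BentoML CLI; this is an optional, illustrative aside assuming a service class like the `MNISTService` defined below:

```bash
# Confirm the model is present in BentoML's local model store.
bentoml models list
# Smoke-test the service locally before deploying it through ZenML.
bentoml serve service:MNISTService
```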
**Creating a BentoML Service** Example of a basic BentoML service for a PyTorch model: ```python import bentoml import numpy as np import torch @bentoml.service(name=SERVICE_NAME) class MNISTService: def __init__(self): self.model = bentoml.pytorch.load_model(MODEL_NAME) self.model.eval() @bentoml.api() async def predict_ndarray(self, inp: np.ndarray) -> np.ndarray: inp = np.expand_dims(inp, (0, 1)) return await self.model(torch.tensor(inp)) ``` **Building a Bento** You can build a Bento manually: ```python context = get_step_context() labels = {"model_uri": model.uri} model = load_artifact_from_response(model) bentoml.pytorch.save_model(model_name, model) bento = bentos.build(service=service, models=[model_name]) ``` **ZenML Pipeline Examples** Define a pipeline for training, building, and deploying: ```python @pipeline def bentoml_pipeline(importer, trainer, evaluator, bento_builder, deployer): model = trainer(importer()) bento = bento_builder(model=model) deployer(bento=bento) ``` **Local Deployment** Deploy the bento bundle to a local HTTP server: ```python @pipeline def bento_deployer_pipeline(): deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001) ``` **Containerized Deployment** Deploy the bento bundle as a container: ```python @pipeline def bento_deployer_pipeline(): deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", deployment_type="container", image="my-custom-image") ``` **Predicting with Deployed Model** Use the BentoML client to send requests: ```python @step def predictor(inference_data, service): service.start() for img, data in inference_data.items(): prediction = service.predict("predict_ndarray", np.array(data)) ``` **From Local to Cloud with `bentoctl`** The `bentoctl` CLI (deprecated) allows deployment to cloud services like AWS Lambda and Google Cloud Run. For more details, refer to the [BentoML documentation](https://docs.bentoml.org/en/latest/guides/model-store.html#manage-models) and the [ZenML SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-bentoml/#zenml.integrations.bentoml.model_deployers.bentoml_model_deployer). ================================================== === File: docs/book/component-guide/model-deployers/custom.md === ### Custom Model Deployer in ZenML ZenML provides a `Model Deployer` stack component for deploying and managing machine-learning models. This component interacts with various deployment tools and can serve as a model registry, allowing users to manage models for online inference and perform actions like suspending, resuming, or deleting model servers. #### Key Features of Model Deployer 1. **Efficient Deployment**: It manages model deployment according to the serving infrastructure's requirements, holding necessary configuration attributes. 2. **Continuous Deployment Logic**: It updates existing model servers instead of creating new ones for each model version, utilizing the `deploy_model` method. 3. **BaseService Registry**: It acts as a registry for remote model servers, allowing re-creation of BaseService instances from external configurations. 
#### Interface Overview The model deployer interface includes several abstract methods for deployment and lifecycle management: ```python from abc import ABC, abstractmethod from zenml.services import BaseService, ServiceConfig from zenml.stack import StackComponent, StackComponentConfig, Flavor class BaseModelDeployerConfig(StackComponentConfig): """Base class for model deployer configurations.""" class BaseModelDeployer(StackComponent, ABC): @abstractmethod def perform_deploy_model(self, id: UUID, config: ServiceConfig, timeout: int = 300) -> BaseService: """Deploy a model.""" @abstractmethod def perform_stop_model(self, service: BaseService, timeout: int = 300, force: bool = False) -> BaseService: """Stop a model server.""" @abstractmethod def perform_start_model(self, service: BaseService, timeout: int = 300) -> BaseService: """Start a model server.""" @abstractmethod def perform_delete_model(self, service: BaseService, timeout: int = 300, force: bool = False) -> None: """Delete a model server.""" class BaseModelDeployerFlavor(Flavor): @property @abstractmethod def name(self): """Returns the name of the flavor.""" @property def type(self) -> StackComponentType: return StackComponentType.MODEL_DEPLOYER @property def config_class(self) -> Type[BaseModelDeployerConfig]: return BaseModelDeployerConfig @property @abstractmethod def implementation_class(self) -> Type[BaseModelDeployer]: """The class that implements the model deployer.""" ``` #### Creating a Custom Model Deployer To create a custom model deployer: 1. Inherit from `BaseModelDeployer` and implement the abstract methods. 2. Create a configuration class inheriting from `BaseModelDeployerConfig`. 3. Combine both by inheriting from `BaseModelDeployerFlavor` and implement the `name` property. 4. Implement a service class inheriting from `BaseService`. Register the custom flavor using the CLI: ```shell zenml model-deployer flavor register ``` For example: ```shell zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - After registration, list available flavors with: ```shell zenml model-deployer flavor list ``` This modular design allows for the separation of flavor configuration from implementation, enabling registration even when dependencies are not installed locally. ================================================== === File: docs/book/component-guide/model-deployers/seldon.md === ### Summary of Deploying Models to Kubernetes with Seldon Core **Overview**: Seldon Core is a production-grade, source-available model serving platform designed for deploying machine learning models as REST/GRPC microservices. It includes features like monitoring, logging, model explainers, outlier detectors, and advanced deployment strategies (A/B testing, canary deployments). It simplifies real-time inference by supporting standard ML model packaging formats. **Key Points**: - **Platform Limitations**: Seldon Core model deployer integration is not supported on **MacOS**. - **Use Cases**: - Deploying models on Kubernetes. - Managing model lifecycle with zero downtime. - Advanced API endpoints (REST/GRPC). - Customizable deployment processes with advanced inference graphs. **Deployment Prerequisites**: 1. Access to a Kubernetes cluster (recommended to use a Service Connector). 2. Seldon Core must be preinstalled and running in the cluster. 3. 
Models need to be stored in persistent shared storage accessible from the Kubernetes cluster (e.g., AWS S3, GCS). **Installation Steps**: 1. **Install Seldon Core** on EKS: ```bash aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh - cd istio-1.5.0/ bin/istioctl manifest apply --set profile=demo curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f - helm install seldon-core seldon-core-operator --repo https://storage.googleapis.com/seldon-charts --set usageMetrics.enabled=true --set istio.enabled=true --namespace seldon-system kubectl apply -f iris.yaml ``` Example `iris.yaml`: ```yaml apiVersion: machinelearning.seldon.io/v1 kind: SeldonDeployment metadata: name: iris-model namespace: default spec: name: iris predictors: - graph: implementation: SKLEARN_SERVER modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris name: classifier name: default replicas: 1 ``` 2. **Get Ingress Host**: ```bash export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') ``` 3. **Test Prediction API**: ```bash curl -X POST http://$INGRESS_HOST/seldon/default/iris-model/api/v1.0/predictions \ -H 'Content-Type: application/json' \ -d '{ "data": { "ndarray": [[1,2,3,4]] } }' ``` **Service Connector**: Use Service Connectors for authentication to Kubernetes clusters, supporting AWS, GCP, Azure, and generic Kubernetes. Register a Service Connector with: ```bash zenml service-connector register --type aws --resource-type kubernetes-cluster --resource-name --auto-configure ``` **Model Deployer Registration**: ```bash zenml model-deployer register --flavor=seldon \ --kubernetes_namespace= \ --base_url=http://$INGRESS_HOST ``` **Configuration**: - `model_name`: Name of the model. - `replicas`: Number of replicas. - `implementation`: Type of Seldon inference server (e.g., `SKLEARN_SERVER`). - `parameters`: Optional parameters for deployment. - `resources`: Resource allocation (CPU/memory). - `serviceAccount`: Name of the Service Account. **Custom Code Deployment**: Define a custom predict function to deploy pre- and post-processing code alongside the model: ```python def custom_predict(model: Any, request: Array_Like) -> Array_Like: # Custom prediction logic ``` Use `seldon_custom_model_deployer_step` to deploy: ```python seldon_custom_model_deployer_step( model=model, predict_function="", service_config=SeldonDeploymentConfig( model_name="", replicas=1, implementation="custom", resources=SeldonResourceRequirements(limits={"cpu": "200m", "memory": "250Mi"}), serviceAccountName="kubernetes-service-account", ), ) ``` **Authentication Management**: Use ZenML secrets for managing credentials for persistent storage services. Configure secrets for AWS S3, GCS, or Azure Blob Storage as needed. For additional details, refer to the [Seldon Core documentation](https://github.com/SeldonIO/seldon-core). ================================================== === File: docs/book/component-guide/model-deployers/mlflow.md === ### Summary: Deploying Models Locally with MLflow **MLflow Model Deployer Overview** - The MLflow Model Deployer is part of the ZenML stack for deploying and managing MLflow models on a local MLflow server. - Currently, it is intended for local development and not for production use. **Use Cases** - Ideal for local model deployment and real-time predictions. 
- Suitable for users needing simple deployment without complex infrastructure like Kubernetes. **Installation and Setup** 1. Install the MLflow integration: ```bash zenml integration install mlflow -y ``` 2. Register the MLflow model deployer: ```bash zenml model-deployer register mlflow_deployer --flavor=mlflow ``` **Deployment Process** - **Deploy a Logged Model**: - Ensure the model is logged in MLflow. - Use the model URI from the artifact path or model registry. Example Code for Deploying a Model: ```python from typing import Optional from zenml import step, get_step_context from zenml.client import Client from zenml.integrations.mlflow.services import MLFlowDeploymentConfig, MLFlowDeploymentService @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", description="An example of deploying a model using the MLflow Model Deployer", model_uri="runs:/<run-id>/model" or "models:/<model-name>/<model-version>", workers=1, mlserver=False, timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` - **Deploying Without Known Model URI**: - Retrieve the current run ID and model URI using the MLflow client. Example Code: ```python from typing import Optional from zenml import step, get_step_context from zenml.client import Client from zenml.integrations.mlflow.services import MLFlowDeploymentConfig, MLFlowDeploymentService from mlflow.tracking import MlflowClient, artifact_utils @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer experiment_tracker = zenml_client.active_stack.experiment_tracker mlflow_run_id = experiment_tracker.get_run_id(...) client = MlflowClient() model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path="model") mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", model_uri=model_uri, workers=1, timeout=300, ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` **Configuration Options** - Key attributes in `MLFlowDeploymentConfig`: - `name`, `description`, `pipeline_name`, `pipeline_step_name`, `model_name`, `model_uri`, `workers`, `mlserver`, `timeout`. **Running Inference** 1. **Load a Deployed Service**: - Fetch the prediction service and make an inference request. Example Code: ```python import json import requests from zenml import step from zenml.integrations.mlflow.model_deployers import MLFlowModelDeployer from zenml.integrations.mlflow.services import MLFlowDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str) -> dict: model_deployer = MLFlowModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name, pipeline_step_name) service = existing_services[0] payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}}) response = requests.post(url=service.get_prediction_url(), data=payload, headers={"Content-Type": "application/json"}) return response.json() ``` 2. **Use the Service in the Same Pipeline**: - Directly call the `predict` method on the service. Example Code: ```python from typing_extensions import Annotated import numpy as np from zenml import step from zenml.integrations.mlflow.services import MLFlowDeploymentService @step def predictor(service: MLFlowDeploymentService, data: np.ndarray) -> Annotated[np.ndarray, "predictions"]: return service.predict(data).argmax(axis=-1) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_deployers).
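When you are done experimenting, the locally served model can be inspected and torn down with the generic model-deployer CLI shown earlier in this guide (the UUID placeholder comes from the list command):

```shell
# List the running MLflow model servers and remove one by its UUID.
zenml model-deployer models list
zenml model-deployer models delete <SERVICE_UUID>
```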
================================================== === File: docs/book/component-guide/container-registries/azure.md === ### Azure Container Registry Overview The Azure Container Registry (ACR) is a built-in container registry option in ZenML that utilizes Azure's services for storing container images. #### When to Use ACR - If your stack components require pulling or pushing container images. - If you have access to Azure. #### Deployment Steps 1. Go to the [Azure portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry). 2. Select a subscription, resource group, location, and registry name. 3. Click `Review + Create` to create the registry. #### Registry URI Format The URI format for the Azure container registry is: ```shell .azurecr.io ``` Example: `zenmlregistry.azurecr.io` To find your registry URI: - Search for `container registries` in the Azure portal. - Use the registry name to construct the URI. #### Using ACR Prerequisites: - Docker installed and running. - Registry URI obtained from the previous section. Register the container registry: ```shell zenml container-registry register --flavor=azure --uri= zenml stack update -c ``` #### Authentication Methods Authentication is necessary for using ACR in pipelines. Options include: **1. Local Authentication (Quick Start)** - Uses local Docker client authentication. - Requires Azure CLI installed. - Log in to the registry: ```shell az acr login --name= ``` **Note:** Local authentication is not portable across environments. **2. Azure Service Connector (Recommended)** - Provides auto-configuration and best security practices. - Register a service connector: ```sh zenml service-connector register --type azure -i ``` - Non-interactive registration using Service Principal: ```sh zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type docker-registry --resource-id ``` #### Connecting ACR to ZenML After setting up the Azure Service Connector, register and connect the ACR: ```sh zenml container-registry register -f azure --uri= zenml container-registry connect -i ``` For non-interactive connection: ```sh zenml container-registry connect --connector ``` #### Using ACR in a ZenML Stack Register and set a stack with the new container registry: ```sh zenml stack register -c ... --set ``` #### Local Login for Docker CLI To temporarily authenticate your local Docker client: ```sh zenml service-connector login --resource-type docker-registry --resource-id ``` #### Additional Information For more details on configurable attributes of the Azure container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/github.md === ### GitHub Container Registry Overview The GitHub Container Registry is a built-in feature of ZenML that utilizes the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) for storing container images. #### When to Use - Use it if components of your stack need to pull or push container images. - Ideal for projects hosted on GitHub. For other options, refer to [container registry flavors](./container-registries.md#container-registry-flavors). 
#### Deployment - The GitHub container registry is enabled by default upon creating a GitHub account. #### Finding the Registry URI The URI format is: ```shell ghcr.io/ ``` Examples: - `ghcr.io/zenml` - `ghcr.io/my-username` - `ghcr.io/my-organization` #### Usage Requirements To use the GitHub container registry: - Install and run [Docker](https://www.docker.com). - Obtain the registry URI as described above. - Configure your Docker client for authentication. Follow [this guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry) to create a personal access token. #### Registering the Container Registry To register and use the container registry in your active stack: ```shell zenml container-registry register \ --flavor=github \ --uri= zenml stack update -c ``` For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/gcp.md === ### Google Cloud Container Registry Overview The Google Cloud Container Registry is integrated with ZenML and utilizes the Google Artifact Registry. **Important Notice**: Google Container Registry is being replaced by Artifact Registry. Users should transition to Artifact Registry, as Container Registry will be shut down after March 18, 2025. ### When to Use Use the GCP container registry if: - Your stack components require pulling or pushing container images. - You have access to GCP. ### Deployment Steps 1. Enable Google Artifact Registry [here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com). 2. Create a Docker repository [here](https://console.cloud.google.com/artifacts). ### Registry URI Format The GCP container registry URI follows this format: ```shell -docker.pkg.dev// ``` ### Usage To use the GCP container registry, ensure: - Docker is installed and running. - You have the registry URI. Register the container registry: ```shell zenml container-registry register --flavor=gcp --uri= zenml stack update -c ``` ### Authentication Methods Authentication is required to use the GCP Container Registry: #### Local Authentication Quick setup for local environments. Requires GCP CLI installation: ```shell gcloud auth configure-docker -docker.pkg.dev ``` **Note**: Local authentication is not portable across environments. For portability, use a GCP Service Connector. #### GCP Service Connector (Recommended) Set up a GCP Service Connector for better security and convenience: ```sh zenml service-connector register --type gcp -i ``` To auto-configure: ```sh zenml service-connector register --type gcp --resource-type docker-registry --auto-configure ``` ### Connecting GCP Container Registry After setting up the Service Connector, register the GCP Container Registry: ```sh zenml container-registry register -f gcp --uri= zenml container-registry connect -i ``` For non-interactive connection: ```sh zenml container-registry connect --connector ``` ### Final Steps To use the GCP Container Registry in a ZenML Stack: ```sh zenml stack register -c ... --set ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry). 
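As a concluding illustration, the GCP registry steps can be put together with concrete values (region, project, repository, and connector names below are placeholders, not defaults):

```shell
# Register the registry with a fully spelled-out Artifact Registry URI,
# connect it to a previously registered GCP Service Connector,
# and add it to the active stack.
zenml container-registry register gcp_registry --flavor=gcp \
    --uri=europe-west1-docker.pkg.dev/my-gcp-project/my-docker-repo
zenml container-registry connect gcp_registry --connector gcp_connector
zenml stack update -c gcp_registry
```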
================================================== === File: docs/book/component-guide/container-registries/dockerhub.md === ### DockerHub Container Registry in ZenML **Overview**: DockerHub is a built-in container registry in ZenML for storing container images. **When to Use**: - If stack components need to pull/push container images. - If you have a DockerHub account. **Deployment**: 1. Create a DockerHub account. 2. Images will be published in a **public** repository by default. For a **private** repository, create one on DockerHub before running the pipeline. The repository name depends on the orchestrator or step operator used. **Registry URI Format**: - The URI can be in one of these formats: ``` docker.io/ ``` - Examples: ``` zenml my-username docker.io/zenml docker.io/my-username ``` **Usage**: 1. Ensure Docker is installed and running. 2. Obtain the registry URI as described above. 3. Register the container registry and update the active stack: ```shell zenml container-registry register \ --flavor=dockerhub \ --uri= zenml stack update -c ``` 4. Log in to DockerHub for image operations: ```shell docker login ``` **Additional Information**: For a complete list of configurable attributes, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/container-registries.md === ### Container Registries Container registries are crucial for storing Docker images used in remote MLOps stacks, enabling the containerization of machine learning pipeline code for isolated execution. #### When to Use A container registry is necessary when components of your stack need to push or pull container images, particularly for ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation for specific components to determine if a container registry is required. #### Container Registry Flavors ZenML supports several container registry flavors: - **Default Flavor**: Accepts any URI without validation; suitable for local or unsupported remote registries. - **Specific Flavors**: Validates URIs and ensures push capability. **Recommendation**: Use specific container registry flavors for better URI validation. | Container Registry | Flavor | Integration | URI Example | |--------------------|--------|-------------|-------------| | [DefaultContainerRegistry](default.md) | `default` | _built-in_ | - | | [DockerHubContainerRegistry](dockerhub.md) | `dockerhub` | _built-in_ | docker.io/zenml | | [GCPContainerRegistry](gcp.md) | `gcp` | _built-in_ | gcr.io/zenml | | [AzureContainerRegistry](azure.md) | `azure` | _built-in_ | zenml.azurecr.io | | [GitHubContainerRegistry](github.md) | `github` | _built-in_ | ghcr.io/zenml | | [AWSContainerRegistry](aws.md) | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com | To view available container registry flavors, use the command: ```shell zenml container-registry flavor list ``` ================================================== === File: docs/book/component-guide/container-registries/aws.md === ### Amazon Elastic Container Registry (ECR) Overview Amazon ECR is a container registry provided through the ZenML `aws` integration for storing container images. Use it when components of your stack need to pull or push images and you have access to AWS ECR. ### Deployment Steps 1. 
**Create an AWS Account**: The ECR registry is activated automatically. 2. **Create a Repository**: - Visit the [ECR website](https://console.aws.amazon.com/ecr). - Select the correct region. - Click on `Create repository` and create a private repository. ### URI Format The ECR URI format is: ``` .dkr.ecr..amazonaws.com ``` - Example URIs: - `123456789.dkr.ecr.eu-west-2.amazonaws.com` - `987654321.dkr.ecr.ap-south-1.amazonaws.com` ### Usage Requirements To use AWS ECR: - Install the ZenML `aws` integration: ```shell zenml integration install aws ``` - Ensure Docker is installed and running. - Obtain the registry URI. ### Registering the Container Registry Register the container registry and update the active stack: ```shell zenml container-registry register --flavor=aws --uri= zenml stack update -c ``` ### Authentication Methods 1. **Local Authentication** (quick setup): - Requires AWS CLI installed and configured. - Log in using: ```shell aws ecr get-login-password --region | docker login --username AWS --password-stdin ``` - Note: Not portable across environments. 2. **AWS Service Connector** (recommended): - Register using: ```shell zenml service-connector register --type aws -i ``` - Auto-configure: ```shell zenml service-connector register --type aws --resource-type docker-registry --auto-configure ``` ### Connecting the Container Registry After setting up the Service Connector, connect the AWS Container Registry: ```shell zenml container-registry connect -i ``` Or non-interactively: ```shell zenml container-registry connect --connector ``` ### Using the AWS Container Registry in a ZenML Stack Register and set a stack with the new container registry: ```shell zenml stack register -c ... --set ``` ### Local Login for Docker CLI To temporarily authenticate your local Docker client: ```shell zenml service-connector login --resource-type docker-registry ``` ### Additional Information For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/custom.md === ### Developing a Custom Container Registry in ZenML #### Overview To create a custom container registry in ZenML, familiarize yourself with the [general guide on custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction ZenML's container registries have a basic abstraction consisting of a `uri` in the configuration and a non-abstract `prepare_image_push` method for validation. **Key Classes:** - **BaseContainerRegistryConfig**: Holds the configuration with a `uri`. - **BaseContainerRegistry**: Implements methods for image preparation and pushing. - **BaseContainerRegistryFlavor**: Defines the flavor structure. 
**Code Snippet:** ```python from abc import abstractmethod from typing import Type from zenml.enums import StackComponentType from zenml.stack import Flavor from zenml.stack.authentication_mixin import AuthenticationConfigMixin, AuthenticationMixin from zenml.utils import docker_utils class BaseContainerRegistryConfig(AuthenticationConfigMixin): uri: str class BaseContainerRegistry(AuthenticationMixin): def prepare_image_push(self, image_name: str) -> None: pass def push_image(self, image_name: str) -> str: if not image_name.startswith(self.config.uri): raise ValueError(f"Docker image `{image_name}` does not belong to container registry `{self.config.uri}`.") self.prepare_image_push(image_name) return docker_utils.push_image(image_name) class BaseContainerRegistryFlavor(Flavor): @property @abstractmethod def name(self) -> str: pass @property def type(self) -> StackComponentType: return StackComponentType.CONTAINER_REGISTRY @property def config_class(self) -> Type[BaseContainerRegistryConfig]: return BaseContainerRegistryConfig @property def implementation_class(self) -> Type[BaseContainerRegistry]: return BaseContainerRegistry ``` #### Steps to Build a Custom Container Registry 1. **Create a Custom Class**: Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any pre-push validations. 2. **Define Configuration**: Inherit from `BaseContainerRegistryConfig` for additional settings. 3. **Combine Implementation and Configuration**: Inherit from `BaseContainerRegistryFlavor`. **Registering the Flavor:** ```shell zenml container-registry flavor register ``` Example: ```shell zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - After registration, list available flavors: ```shell zenml container-registry flavor list ``` #### Workflow Integration - **CustomContainerRegistryFlavor**: Used during flavor creation. - **CustomContainerRegistryConfig**: Validates user input during registration. - **CustomContainerRegistry**: Engaged when the component is in use, allowing separation of configuration from implementation. This design enables flavor registration without needing all dependencies installed locally, as long as the flavor and config are implemented separately. ================================================== === File: docs/book/component-guide/container-registries/default.md === ### Default Container Registry in ZenML **Overview**: The Default Container Registry is a built-in feature of ZenML that supports local and various remote container registries. #### When to Use - Use it for a **local** container registry or when a remote registry is not covered by other flavors. #### Local Registry URI Format To specify a local container registry, use: ```shell localhost: # Examples: localhost:5000 localhost:8000 localhost:9999 ``` #### Usage Steps 1. Ensure **Docker** is installed and running. 2. Register the container registry: ```shell zenml container-registry register --flavor=default --uri= # Update the active stack zenml stack update -c ``` #### Authentication Methods - For private registries, configure authentication. - **Local Authentication**: Quick setup using local Docker client credentials. ```shell docker login --username --password-stdin ``` *Note*: This method is not portable across environments. - **Docker Service Connector** (Recommended): Use this for better management and reusability of credentials. 
- Register a connector: ```sh zenml service-connector register --type docker -i ``` - Non-interactive registration: ```sh zenml service-connector register --type docker --username= --password= ``` #### Connecting to a Container Registry After setting up a Docker Service Connector: 1. Register the container registry: ```sh zenml container-registry register -f default --uri= ``` 2. Connect it: ```sh zenml container-registry connect -i # Non-interactive version zenml container-registry connect --connector ``` #### Using the Registry in a ZenML Stack To use the Default Container Registry in a stack: ```sh zenml stack register -c ... --set ``` #### Local Login for Docker CLI If you need to interact with the remote registry manually: ```sh zenml service-connector login ``` For further details and attributes of the Default Container Registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.default_container_registry.DefaultContainerRegistry). ================================================== === File: docs/book/component-guide/image-builders/local.md === ### Local Image Builder The Local Image Builder is a built-in feature of ZenML that utilizes the local Docker installation on your machine to build container images. It employs the official Docker Python library, which accesses authentication credentials from the default location: `$HOME/.docker/config.json`. To use a different directory for Docker configuration, set the `DOCKER_CONFIG` environment variable: ```shell export DOCKER_CONFIG=/path/to/config_dir ``` Ensure the specified directory contains a `config.json` file. #### When to Use Use the Local Image Builder if: - You can install and run Docker on your client machine. - You want to use remote components requiring containerization without additional infrastructure setup. #### Deployment The Local Image Builder is included with ZenML and requires no extra setup. #### Usage Prerequisites: - Docker installed and running. - Docker client authenticated to push to the desired container registry. To register the image builder and create a new stack: ```shell zenml image-builder register --flavor=local zenml stack register -i ... --set ``` For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). ================================================== === File: docs/book/component-guide/image-builders/gcp.md === ### Google Cloud Image Builder The Google Cloud Image Builder is part of the ZenML `gcp` integration, utilizing [Google Cloud Build](https://cloud.google.com/build) to create container images. #### When to Use Use the Google Cloud Image Builder if: - You cannot install or use [Docker](https://www.docker.com) locally. - You are already using Google Cloud Platform (GCP). - Your stack relies on other GCP components like the [GCS Artifact Store](../artifact-stores/gcp.md) or [Vertex Orchestrator](../orchestrators/vertex.md). #### Deployment Requirements To deploy the Google Cloud Image Builder: 1. Enable relevant Google Cloud Build APIs in your GCP project. 2. Install the ZenML `gcp` integration: ```shell zenml integration install gcp ``` 3. Set up a GCP Artifact Store and a GCP container registry. 4. Optionally specify the GCP project ID and service account credentials. 
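For the first requirement, the Cloud Build API can also be enabled from the command line; a small illustrative alternative to the console flow (the project ID is a placeholder):

```shell
# Enable the Cloud Build API for your project (project ID is illustrative).
gcloud services enable cloudbuild.googleapis.com --project=my-gcp-project
```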
#### Configuration Options You can customize: - The Docker image used for building (`gcr.io/cloud-builders/docker` by default). - The network for the build container. - The build timeout. #### Registering the Image Builder Register the image builder and use it in your active stack: ```shell zenml image-builder register \ --flavor=gcp \ --cloud_builder_image= \ --network= \ --build_timeout= zenml stack register -i ... --set ``` #### Authentication Methods Authentication is essential for using the GCP Image Builder: - **Local Authentication**: Quick setup using local GCP CLI credentials. Not portable across environments. - **GCP Service Connector**: Recommended for better security and reusability across stack components. Register with: ```shell zenml service-connector register --type gcp -i ``` #### Connecting the Image Builder After setting up authentication, connect the image builder: ```shell zenml image-builder connect -i ``` For a non-interactive connection: ```shell zenml image-builder connect --connector ``` #### Using GCP Credentials You can also use a GCP Service Account Key for authentication: ```shell zenml image-builder register \ --flavor=gcp \ --project= \ --service_account_path= \ --cloud_builder_image= \ --network= \ --build_timeout= zenml stack register -i ... --set ``` #### Caveats Google Cloud Build uses a default network (`cloudbuild`) that provides Application Default Credentials (ADC). This allows access to other GCP services during builds. If using private dependencies from a GCP Artifact Registry, customize your Dockerfile: ```dockerfile FROM zenmldocker/zenml:latest RUN pip install keyrings.google-artifactregistry-auth ``` **Note**: Specify the ZenML version in the base image tag for stability. ================================================== === File: docs/book/component-guide/image-builders/kaniko.md === ### Kaniko Image Builder Overview The Kaniko image builder is part of the ZenML `kaniko` integration, utilizing [Kaniko](https://github.com/GoogleContainerTools/kaniko) for building container images. #### Use Cases Use the Kaniko image builder when: - Docker installation is not possible on your client machine. - You are familiar with Kubernetes. #### Deployment Requirements 1. A deployed Kubernetes cluster. 2. ZenML `kaniko` integration installed: ```shell zenml integration install kaniko ``` 3. `kubectl` installed. 4. A remote container registry as part of your stack. 5. Optionally, a remote artifact store if you want to store the build context there. #### Registering the Image Builder To register the Kaniko image builder: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= [ --pod_running_timeout= ] # Register and activate a stack zenml stack register -i ... --set ``` For detailed attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kaniko/#zenml.integrations.kaniko.image_builders.kaniko_image_builder.KanikoImageBuilder). #### Authentication The Kaniko build pod requires authentication to: - Push to the container registry. - Pull from a private parent image registry. - Read from the artifact store if configured. **AWS Configuration:** - Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to your EKS node IAM role. 
- Register the image builder with necessary environment variables: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]' ``` **GCP Configuration:** - Enable workload identity and set up service accounts. - Register the image builder with namespace and service account: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --kubernetes_namespace= \ --service_account_name= ``` **Azure Configuration:** - Create a Kubernetes `configmap` for Docker config: ```shell kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }' ``` - Register the image builder with the mounted configmap: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \ --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]' ``` #### Additional Parameters You can pass additional parameters via the `executor_args` attribute: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --executor_args='["--label", "key=value"]' ``` Common flags include: - `--cache`: Disable caching (`false`). - `--cache-dir`: Directory for cached layers. - `--cleanup`: Disable cleanup of the working directory (`false`). For a complete list of flags, refer to the [Kaniko additional flags](https://github.com/GoogleContainerTools/kaniko#additional-flags). ================================================== === File: docs/book/component-guide/image-builders/aws.md === ### AWS Image Builder with ZenML **Overview**: The AWS Image Builder is a feature of the ZenML `aws` integration that utilizes [AWS CodeBuild](https://aws.amazon.com/codebuild) to create container images. #### When to Use - If you cannot install or use [Docker](https://www.docker.com) locally. - If you are already using AWS. - If your stack includes AWS components like the [S3 Artifact Store](../artifact-stores/s3.md) or the [SageMaker Orchestrator](../orchestrators/sagemaker.md). #### Deployment For quick deployment, use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). #### Usage Requirements 1. Install the ZenML `aws` integration: ```shell zenml integration install aws ``` 2. Set up an [S3 Artifact Store](../artifact-stores/s3.md) for build context. 3. Optionally, configure an [AWS container registry](../container-registries/aws.md) for image storage. 4. Create an [AWS CodeBuild project](https://aws.amazon.com/codebuild) in the desired region, ideally matching the ECR region. **Example CodeBuild Configuration**: - **Source Type**: `Amazon S3` - **Bucket**: Same as the S3 Artifact Store. 
- **Environment Type**: `Linux Container` - **Environment Image**: `bentolor/docker-dind-awscli` - **Privileged Mode**: `false` **Service Role Permissions**: Ensure the CodeBuild service role has permissions for S3 and ECR: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::/*" }, { "Effect": "Allow", "Action": ["ecr:*"], "Resource": "arn:aws:ecr:::repository/" }, { "Effect": "Allow", "Action": ["ecr:GetAuthorizationToken"], "Resource": "*" } ] } ``` #### Authentication Methods - **Local Authentication**: Quick setup using local AWS CLI credentials. Not portable across environments. - **AWS Service Connector**: Recommended for better security and multi-component access. **Registering AWS Service Connector**: ```sh zenml service-connector register --type aws -i ``` For auto-configuration: ```sh zenml service-connector register --type aws --resource-type aws-generic --auto-configure ``` **Granting Permissions for CodeBuild**: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["codebuild:StartBuild", "codebuild:BatchGetBuilds"], "Resource": "arn:aws:codebuild:::project/" } ] } ``` **Registering the Image Builder**: ```sh zenml image-builder register \ --flavor=aws \ --code_build_project= \ --connector ``` **Connecting Image Builder to Service Connector**: ```sh zenml image-builder connect --connector ``` #### Customizing AWS CodeBuild Builds You can customize the image builder with: - `build_image`: Default is `bentolor/docker-dind-awscli`. - `compute_type`: Default is `BUILD_GENERAL1_SMALL`. - `custom_env_vars`: Custom environment variables. - `implicit_container_registry_auth`: Use implicit (default) or explicit authentication for container registry access. For further details on authentication methods and configuration, refer to the respective sections in the documentation. ================================================== === File: docs/book/component-guide/image-builders/custom.md === ### Develop a Custom Image Builder To create a custom image builder in ZenML, start by understanding the concepts from the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction The `BaseImageBuilder` is the abstract class for building Docker images. It provides a basic interface: ```python from abc import ABC, abstractmethod from typing import Any, Dict, Optional, Type from zenml.container_registries import BaseContainerRegistry from zenml.image_builders import BuildContext from zenml.stack import StackComponent class BaseImageBuilder(StackComponent, ABC): @property def build_context_class(self) -> Type["BuildContext"]: return BuildContext @abstractmethod def build( self, image_name: str, build_context: "BuildContext", docker_build_options: Dict[str, Any], container_registry: Optional["BaseContainerRegistry"] = None, ) -> str: """Builds a Docker image and optionally pushes it to a registry.""" ``` #### Steps to Create a Custom Image Builder 1. **Subclass `BaseImageBuilder`**: Implement the `build` method to create a Docker image using the provided context. Handle optional registry pushing. 2. **Configuration Class**: If needed, subclass `BaseImageBuilderConfig` to add configuration parameters. 3. **Flavor Class**: Inherit from `BaseImageBuilderFlavor`, providing a name for your flavor through its abstract property. 4. 
**Register the Flavor**: Use the CLI to register your flavor: ```shell zenml image-builder flavor register ``` For example: ```shell zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor ``` > **Note**: Ensure ZenML is initialized at the root of your repository to avoid resolution issues. 5. **List Available Flavors**: ```shell zenml image-builder flavor list ``` #### Important Considerations - **Flavor and Config Usage**: The `CustomImageBuilderFlavor` is used during flavor creation, while `CustomImageBuilderConfig` validates user input during registration. Custom validators can be added since `Config` objects are `pydantic` objects. - **Implementation Usage**: The `CustomImageBuilder` is utilized when the component is in use, allowing separation of configuration and implementation. #### Custom Build Context If a different build context is required, subclass `BuildContext` and override the `build_context_class` property in your image builder implementation to specify your custom context. ================================================== === File: docs/book/component-guide/image-builders/image-builders.md === ### Image Builders Overview The image builder is crucial for remote MLOps stacks, enabling the creation of container images for executing machine-learning pipelines in remote environments. #### When to Use An image builder is necessary when components of the stack need to build container images, particularly for ZenML's remote orchestrators, step operators, and some model deployers, which require Docker images. #### Image Builder Flavors ZenML provides a `local` image builder by default, which builds Docker images on the client machine. Additional flavors include: | Image Builder | Flavor | Integration | Notes | |-----------------------|----------|-------------|-------------------------------------| | [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. | | [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. | | [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build. | | [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS Code Build. | | [Custom Implementation](custom.md) | _custom_ | | Allows for custom image builder implementations. | To view available image builder flavors, use the command: ```shell zenml image-builder flavor list ``` #### How to Use You do not need to interact directly with the image builder in your code. As long as the desired image builder is part of your active ZenML stack, it will be automatically utilized by any component requiring container image builds. ================================================== === File: docs/book/component-guide/experiment-trackers/wandb.md === ### Weights & Biases Experiment Tracker Overview The Weights & Biases (W&B) Experiment Tracker is integrated with ZenML to log and visualize information from pipeline steps, such as models, parameters, and metrics. It is particularly useful during the iterative ML experimentation phase and can also be utilized for automated pipeline runs. #### Use Cases - Continuation of W&B usage in MLOps workflows. - Interactive navigation of results from ZenML pipeline runs. - Sharing artifacts and metrics with teams or stakeholders. If unfamiliar with W&B, consider using another experiment tracking tool. #### Deployment To deploy the W&B Experiment Tracker, install the integration: ```shell zenml integration install wandb -y ``` **Authentication Methods:** 1. 
**Basic Authentication** (not recommended for production): ```shell zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb --entity= --project_name= --api_key= zenml stack register custom_stack -e wandb_experiment_tracker ... --set ``` 2. **ZenML Secret (Recommended)**: Create a secret to store credentials securely: ```shell zenml secret create wandb_secret --entity= --project_name= --api_key= ``` Register the tracker: ```shell zenml experiment-tracker register wandb_tracker --flavor=wandb --entity={{wandb_secret.entity}} --project_name={{wandb_secret.project_name}} --api_key={{wandb_secret.api_key}} ``` #### Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator: ```python import wandb from wandb.integration.keras import WandbCallback @step(experiment_tracker="") def tf_trainer(...): model.fit(..., callbacks=[WandbCallback(log_evaluation=True)]) wandb.log({"": metric}) ``` You can dynamically reference the experiment tracker: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def tf_trainer(...): ... ``` #### W&B UI Each ZenML step using W&B creates a separate experiment run, accessible via the W&B UI. The tracking URL can be found in the step's metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` #### Additional Configuration You can customize the W&B experiment tracker with `WandbExperimentTrackerSettings`: ```python from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings wandb_settings = WandbExperimentTrackerSettings(tags=["some_tag"]) @step(experiment_tracker="", settings={"experiment_tracker": wandb_settings}) def my_step(...): ... ``` #### Full Code Example An end-to-end example demonstrating the integration: ```python from zenml import pipeline, step from zenml.client import Client from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments from datasets import load_dataset import wandb experiment_tracker = Client().active_stack.experiment_tracker @step def prepare_data(): dataset = load_dataset("imdb") ... return train_dataset, eval_dataset @step(experiment_tracker=experiment_tracker.name) def train_model(train_dataset, eval_dataset): model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2) training_args = TrainingArguments(...) trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train() wandb.log({"final_evaluation": eval_results}) @pipeline(enable_cache=False) def fine_tuning_pipeline(): train_dataset, eval_dataset = prepare_data() model = train_model(train_dataset, eval_dataset) if __name__ == "__main__": wandb_settings = WandbExperimentTrackerSettings(tags=["distilbert", "imdb"]) fine_tuning_pipeline.with_options(settings={"experiment_tracker": wandb_settings})() ``` For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor.WandbExperimentTrackerSettings). 
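The `trainer_step` object used in the W&B UI snippet above is assumed to come from a completed pipeline run; a minimal sketch of fetching it through the ZenML client (run and step names are placeholders):

```python
from zenml.client import Client

# Fetch a finished pipeline run and pick the step that used the W&B tracker.
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
trainer_step = pipeline_run.steps["train_model"]  # placeholder step name

# The experiment tracker URL is exposed via the step's run metadata.
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)
```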
================================================== === File: docs/book/component-guide/experiment-trackers/vertexai.md === ### Vertex AI Experiment Tracker Overview The Vertex AI Experiment Tracker, part of the Vertex AI ZenML integration, utilizes the Vertex AI tracking service to log and visualize pipeline step information such as models, parameters, and metrics. #### When to Use - Ideal for iterative ML experimentation and transitioning to production workflows. - Recommended if already using Vertex AI or seeking a visually interactive experiment tracking solution within the Google Cloud ecosystem. - Consider other Experiment Tracker flavors if unfamiliar with Vertex AI or using other cloud providers. #### Configuration To use the Vertex AI Experiment Tracker, install the GCP integration: ```shell zenml integration install gcp -y ``` **Configuration Options:** - `project`: GCP project name (optional). - `location`: GCP location for experiments (defaults to us-central1). - `staging_bucket`: GCS bucket for staging artifacts (optional). - `service_account_path`: Path to service account JSON file (optional). **Registering the Tracker:** ```shell zenml experiment-tracker register vertex_experiment_tracker \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// zenml stack register custom_stack -e vertex_experiment_tracker ... --set ``` #### Authentication Methods 1. **Implicit Authentication**: Quick local setup using `gcloud` CLI. 2. **GCP Service Connector (recommended)**: For better security and configuration management. To register a GCP Service Connector: ```sh zenml service-connector register --type gcp -i ``` After setting up the connector, register the tracker: ```shell zenml experiment-tracker register \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// zenml experiment-tracker connect --connector ``` 3. **GCP Credentials**: Use a service account key stored in a ZenML Secret for authentication. #### Using the Tracker To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator. Use Vertex AI's logging capabilities: **Example 1: Logging Metrics** ```python from google.cloud import aiplatform class VertexAICallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): metrics = {key: value for key, value in (logs or {}).items() if isinstance(value, (int, float))} aiplatform.log_time_series_metrics(metrics=metrics, step=epoch) @step(experiment_tracker="") def train_model(...): aiplatform.autolog() model.fit(..., callbacks=[VertexAICallback()]) aiplatform.log_metrics(...) aiplatform.log_params(...) ``` **Example 2: Uploading TensorBoard Logs** ```python @step(experiment_tracker="") def train_model(...): aiplatform.start_upload_tb_log(...) model.fit(...) aiplatform.end_upload_tb_log() aiplatform.log_metrics(...) aiplatform.log_params(...) 
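    # Note (assumption, not from the original docs): continuous TensorBoard log
    # upload via `start_upload_tb_log`/`end_upload_tb_log` presumes a Vertex AI
    # TensorBoard instance is associated with the experiment (see the
    # `experiment_tensorboard` field of `VertexExperimentTrackerSettings` below);
    # the metrics and params logged here are attached to the same experiment run.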
``` #### Experiment Tracker UI Access the Vertex AI experiment linked to a ZenML run via the step metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` #### Additional Configuration For further customization, use `VertexExperimentTrackerSettings` to specify an experiment name or TensorBoard instance: ```python from zenml.integrations.gcp.flavors.vertex_experiment_tracker_flavor import VertexExperimentTrackerSettings vertexai_settings = VertexExperimentTrackerSettings( experiment="", experiment_tensorboard="TENSORBOARD_RESOURCE_NAME" ) @step(experiment_tracker="", settings={"experiment_tracker": vertexai_settings}) def step_one(...): ... ``` For more details on settings, refer to the ZenML documentation. ================================================== === File: docs/book/component-guide/experiment-trackers/experiment-trackers.md === ### Experiment Trackers in ZenML **Overview**: Experiment trackers in ZenML allow for the logging, visualization, and comparison of ML experiments, linking pipeline runs to experiments through stack components. They provide a user-friendly interface for browsing and visualizing logged information. **Key Concepts**: - **Experiment Tracker**: An optional stack component that must be registered in your ZenML stack. It enhances the usability of the mandatory Artifact Store, which tracks pipeline artifacts programmatically. - **Usage**: Add an Experiment Tracker when you want visual features for better usability in tracking experiments. **Architecture**: Experiment trackers integrate into the ZenML stack, enhancing the overall experiment tracking capabilities. **Available Experiment Tracker Flavors**: | Tracker | Flavor | Integration | Notes | |---------|--------|-------------|-------| | Comet | `comet`| `comet` | Adds Comet tracking capabilities | | MLflow | `mlflow`| `mlflow` | Adds MLflow tracking capabilities | | Neptune | `neptune`| `neptune` | Adds Neptune tracking capabilities | | Weights & Biases | `wandb` | `wandb` | Adds Weights & Biases tracking capabilities | | Custom Implementation | _custom_ | | Custom options available | **Command to List Flavors**: ```shell zenml experiment-tracker flavor list ``` **Usage Steps**: 1. Configure and add an Experiment Tracker to your ZenML stack. 2. Enable the Experiment Tracker for specific pipeline steps using a decorator. 3. Log information (models, metrics, data) explicitly in your steps. 4. Access the Experiment Tracker UI for visualization. **Code Snippet to Access Tracker URL**: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") step = pipeline_run.steps[""] experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value ``` **Note**: Experiment trackers automatically mark runs as failed if the corresponding pipeline step fails. For detailed usage, refer to the specific documentation for the chosen Experiment Tracker flavor. ================================================== === File: docs/book/component-guide/experiment-trackers/neptune.md === ### Neptune Experiment Tracker Overview The Neptune Experiment Tracker, integrated with ZenML, allows logging and visualization of ML experiment data using [neptune.ai](https://neptune.ai/product/experiment-tracking). It is particularly useful during the iterative ML experimentation phase and for model registry in production. #### Use Cases - Continuation of existing neptune.ai usage while adopting MLOps practices via ZenML. 
- Enhanced visualization of ZenML pipeline results. - Sharing logged artifacts and metrics with teams or stakeholders. #### Deployment To deploy the Neptune Experiment Tracker, install the integration: ```shell zenml integration install neptune -y ``` ##### Authentication Configure the following credentials: - `api_token`: Your Neptune account API token. If left blank, it will look for environment variables. - `project`: Project name in the format "workspace-name/project-name". **ZenML Secret (Recommended)**: Store credentials securely using a ZenML secret: ```shell zenml secret create neptune_secret --api_token= ``` Register the tracker: ```shell zenml experiment-tracker register neptune_experiment_tracker \ --flavor=neptune \ --project= \ --api_token={{neptune_secret.api_token}} ``` **Basic Authentication (Not Recommended)**: Directly configure credentials (not secure): ```shell zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \ --project= --api_token= ``` #### Usage To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and fetch the Neptune run object: ```python from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run from zenml import step from sklearn.svm import SVC from sklearn.datasets import load_iris from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def train_model() -> SVC: iris = load_iris() model = SVC(kernel="rbf", C=1.0) model.fit(iris.data, iris.target) neptune_run = get_neptune_run() neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0} return model ``` #### Logging Metadata Log ZenML pipeline and step metadata: ```python from neptune.utils import stringify_unsupported from zenml import step, get_step_context from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run @step(experiment_tracker="neptune_experiment_tracker") def my_step(): neptune_run = get_neptune_run() context = get_step_context() neptune_run["pipeline_metadata"] = stringify_unsupported( context.pipeline_run.get_metadata().dict() ) ``` #### Adding Tags Use `NeptuneExperimentTrackerSettings` to add tags: ```python from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"}) ``` #### Neptune UI Access the Neptune UI to inspect tracked experiments and metadata linked to ZenML runs. Each pipeline run is logged as a separate experiment in Neptune. ### Full Code Example Here’s a complete example of using the Neptune integration with ZenML: ```python from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run from zenml import step, pipeline from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.svm import SVC from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def train_model() -> SVC: iris = load_iris() model = SVC(kernel="rbf", C=1.0) model.fit(iris.data, iris.target) neptune_run = get_neptune_run() neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0} return model @step(experiment_tracker=experiment_tracker.name) def evaluate_model(model: SVC): # Evaluation logic here... pass @pipeline def ml_pipeline(): model = train_model() evaluate_model(model) if __name__ == "__main__": ml_pipeline() ``` ### Further Reading Refer to [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/) for additional details on using this integration.
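Building on the Adding Tags subsection above, the settings object only takes effect once it is passed to a step (or pipeline); a minimal sketch, following the `settings={"experiment_tracker": ...}` pattern used for the other trackers in this guide:

```python
from zenml import step
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run
from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings

neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"})

@step(
    experiment_tracker="neptune_experiment_tracker",  # the tracker name registered above
    settings={"experiment_tracker": neptune_settings},
)
def my_tagged_step() -> None:
    # The Neptune run created for this step carries the tags configured above.
    neptune_run = get_neptune_run()
    neptune_run["notes"] = "tagged run"
```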
================================================== === File: docs/book/component-guide/experiment-trackers/custom.md === ### Custom Experiment Tracker Development #### Overview To develop a custom experiment tracker in ZenML, it's recommended to first review the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Currently, the base abstraction for Experiment Trackers is under development, and extending them is not advised until its release. #### Steps to Build a Custom Experiment Tracker 1. **Create a Tracker Class**: Inherit from `BaseExperimentTracker` and implement the required abstract methods. 2. **Configuration Class**: If needed, create a class inheriting from `BaseExperimentTrackerConfig` to define configuration parameters. 3. **Combine Implementation and Configuration**: Inherit from `BaseExperimentTrackerFlavor` to integrate both classes. #### Registration Register your custom flavor via the CLI using dot notation: ```shell zenml experiment-tracker flavor register ``` For example, if your flavor class is in `flavors/my_flavor.py`: ```shell zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor ``` **Note**: Ensure ZenML is initialized at the root of your repository to avoid resolution issues. #### Listing Available Flavors After registration, confirm your flavor is available: ```shell zenml experiment-tracker flavor list ``` #### Important Considerations - The **CustomExperimentTrackerFlavor** is used during flavor creation. - The **CustomExperimentTrackerConfig** is utilized for stack component registration and validation, supporting custom validators via `pydantic`. - The **CustomExperimentTracker** is engaged when the component is in use, allowing separation of configuration and implementation. This structure enables registration of flavors and components without needing all dependencies installed locally, as long as the flavor and config classes are in a separate module. ================================================== === File: docs/book/component-guide/experiment-trackers/mlflow.md === # MLflow Experiment Tracker Summary ## Overview The MLflow Experiment Tracker, part of the MLflow ZenML integration, utilizes the MLflow tracking service to log and visualize pipeline step information, such as models, parameters, and metrics. ## Use Cases Use the MLflow Experiment Tracker if: - You are already using MLflow for experiment tracking and want to integrate it with ZenML. - You seek an interactive way to navigate results from ZenML pipeline runs. - Your team has a shared MLflow Tracking service for logging artifacts and metrics. Consider other Experiment Tracker flavors if you are unfamiliar with MLflow. ## Configuration To configure the MLflow Experiment Tracker, install the integration: ```shell zenml integration install mlflow -y ``` ### Deployment Scenarios 1. **Localhost**: Requires a local Artifact Store. Not suitable for collaborative settings. ```shell zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` 2. **Remote Tracking Server**: Requires a deployed MLflow Tracking Server with authentication parameters. - Use MLflow version 2.2.1 or higher due to a critical vulnerability. 3. **Databricks**: Uses the managed MLflow Tracking server with specific authentication parameters. 
### Authentication Methods Configure credentials for a remote MLflow tracking server: - `tracking_uri`: URL of the server (use "databricks" for Databricks). - `tracking_username`/`tracking_password` or `tracking_token`: For authentication. - `tracking_insecure_tls`: Optional, to skip SSL verification. - `databricks_host`: Required if using Databricks. **Basic Authentication Example**: ```shell zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \ --tracking_uri= --tracking_token= ``` **ZenML Secret (Recommended)**: ```shell zenml secret create mlflow_secret --username= --password= zenml experiment-tracker register mlflow --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} ... ``` ## Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and utilize MLflow's logging capabilities: ```python import mlflow @step(experiment_tracker="") def tf_trainer(x_train: np.ndarray, y_train: np.ndarray) -> tf.keras.Model: mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) return model ``` ### MLflow UI Access the MLflow UI to view tracked experiments. Retrieve the experiment URL from the step metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` For local MLflow, start the UI with: ```bash mlflow ui --backend-store-uri ``` ## Additional Configuration To create nested runs or add tags, use `MLFlowExperimentTrackerSettings`: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step(experiment_tracker="", settings={"experiment_tracker": mlflow_settings}) def step_one(data: np.ndarray) -> np.ndarray: ... ``` For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.experiment_trackers.mlflow_experiment_tracker). ================================================== === File: docs/book/component-guide/experiment-trackers/comet.md === ### Comet Experiment Tracker with ZenML The Comet Experiment Tracker integrates with the Comet platform to log and visualize data from ZenML pipeline steps (models, parameters, metrics). It is beneficial during iterative ML experimentation and can also be used for automated pipeline runs. #### Use Cases - Continuation of Comet usage for existing projects transitioning to MLOps with ZenML. - Enhanced visualization of ZenML pipeline results. - Sharing artifacts and metrics with teams or stakeholders. #### Deployment To deploy the Comet Experiment Tracker, install the integration: ```bash zenml integration install comet -y ``` **Authentication Methods:** 1. **ZenML Secret (Recommended)**: Store credentials securely. ```bash zenml secret create comet_secret \ --workspace= \ --project_name= \ --api_key= ``` Configure the tracker: ```bash zenml experiment-tracker register comet_tracker \ --flavor=comet \ --workspace={{comet_secret.workspace}} \ --project_name={{comet_secret.project_name}} \ --api_key={{comet_secret.api_key}} ``` 2. **Basic Authentication**: Directly configure credentials (not recommended for production). 
```bash zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ --workspace= --project_name= --api_key= ``` #### Usage To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def my_step(): experiment_tracker.log_metrics({"my_metric": 42}) experiment_tracker.experiment.log_model(...) ``` #### Comet UI Each ZenML step using Comet creates a separate experiment, viewable in the Comet UI. Access the experiment URL via step metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` #### Full Code Example Here’s a concise example of using the Comet Experiment Tracker in a ZenML pipeline: ```python from comet_ml.integration.sklearn import log_model from zenml import pipeline, step from zenml.client import Client from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.svm import SVC from sklearn.metrics import accuracy_score experiment_tracker = Client().active_stack.experiment_tracker @step def load_data(): return load_iris(return_X_y=True) @step def preprocess_data(X, y): return train_test_split(X, y, test_size=0.2, random_state=42) @step(experiment_tracker=experiment_tracker.name) def train_model(X_train, y_train): model = SVC().fit(X_train, y_train) log_model(experiment=experiment_tracker.experiment, model_name="SVC", model=model) return model @step(experiment_tracker=experiment_tracker.name) def evaluate_model(model, X_test, y_test): accuracy = accuracy_score(y_test, model.predict(X_test)) experiment_tracker.log_metrics({"accuracy": accuracy}) return accuracy @pipeline def iris_classification_pipeline(): X, y = load_data() X_train, X_test, y_train, y_test = preprocess_data(X, y) model = train_model(X_train, y_train) evaluate_model(model, X_test, y_test) if __name__ == "__main__": iris_classification_pipeline() ``` #### Additional Configuration You can pass `CometExperimentTrackerSettings` for additional tags: ```python comet_settings = CometExperimentTrackerSettings(tags=["some_tag"]) @step(experiment_tracker="", settings={"experiment_tracker": comet_settings}) def my_step(): ... ``` For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings). ================================================== === File: docs/book/component-guide/model-registries/model-registries.md === # Model Registries Model registries are centralized solutions for managing and tracking machine learning models throughout their development and deployment stages. They store metadata such as version, configuration, and metrics, facilitating reproducibility and streamlined management of trained models. In ZenML, model registries are Stack Components that enable easy retrieval, loading, and deployment of models, along with information on the training pipeline. ### Key Concepts - **RegisteredModel**: A logical grouping for tracking different model versions, containing the model's name, description, and tags. - **RegistryModelVersion**: A specific model version identified by a unique version number, including details like name, description, tags, metrics, and references to the pipeline name and run ID. 
- **ModelVersionStage**: Indicates the lifecycle state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`. ### When to Use Model registries provide a visual interface for managing model metadata, making them ideal for users who need to interact with multiple models in a pipeline. They are particularly useful for centralized state management and easy retrieval of models. ### Integration with ZenML Model registries are optional components in the ZenML stack and require an experiment tracker. If an experiment tracker is not used, models can still be stored, but retrieval must be done manually from the artifact store. #### Model Registry Flavors ZenML supports various model registry integrations, including: - **MLflow**: Use MLflow as a model registry. - **Custom Implementation**: Allows for custom model registry solutions. To view available flavors, use: ```shell zenml model-registry flavor list ``` ### Usage To utilize model registries: 1. Register a model registry in your stack that matches your experiment tracker flavor. 2. Register models using one of the following methods: - Built-in pipeline step. - ZenML CLI. - Model registry UI. 3. Retrieve and load models for deployment or experimentation. For more details on fetching runs, refer to the [documentation on fetching pipelines](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md). ================================================== === File: docs/book/component-guide/model-registries/custom.md === ### Summary: Developing a Custom Model Registry in ZenML #### Overview This documentation provides guidance on creating a custom model registry in ZenML, emphasizing the need to understand ZenML's component flavor concepts before implementation. #### Important Notes - The `BaseModelRegistry` is an abstract class that must be subclassed to create a custom model registry. - The API is still evolving, and users are encouraged to provide feedback on its flexibility. #### Base Abstraction The `BaseModelRegistry` class defines a basic interface for model registration and versioning, including the following key methods: ```python from abc import ABC, abstractmethod from typing import Any, Dict, List, Optional class BaseModelRegistry(ABC): @abstractmethod def register_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: pass @abstractmethod def delete_model(self, name: str) -> None: pass @abstractmethod def update_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: pass @abstractmethod def get_model(self, name: str) -> RegisteredModel: pass @abstractmethod def list_models(self, name: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> List[RegisteredModel]: pass @abstractmethod def register_model_version(self, name: str, version: Optional[str] = None, **kwargs: Any) -> RegistryModelVersion: pass @abstractmethod def delete_model_version(self, name: str, version: str) -> None: pass @abstractmethod def update_model_version(self, name: str, version: str, **kwargs: Any) -> RegistryModelVersion: pass @abstractmethod def list_model_versions(self, name: Optional[str] = None, **kwargs: Any) -> List[RegistryModelVersion]: pass @abstractmethod def get_model_version(self, name: str, version: str) -> RegistryModelVersion: pass @abstractmethod def load_model_version(self, name: str, version: str, **kwargs: Any) -> Any: pass ``` #### Steps to Create a Custom Model Registry 1. 
Understand core concepts of model registries. 2. Subclass `BaseModelRegistry` and implement the abstract methods. 3. Create a `ModelRegistryConfig` class inheriting from `BaseModelRegistryConfig` for additional parameters. 4. Combine the implementation and configuration by inheriting from `BaseModelRegistryFlavor`. To register the custom model registry, use the CLI command: ```shell zenml model-registry flavor register ``` #### Workflow Integration - `CustomModelRegistryFlavor` is used during flavor creation. - `CustomModelRegistryConfig` is utilized for validating configurations during registration. - `CustomModelRegistry` is engaged when the component is in use, allowing separation of configuration from implementation. For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/model-registries/mlflow.md === ### MLflow Model Registry Overview MLflow is a tool for tracking experiments, managing models, and deploying them across environments. ZenML integrates with MLflow, providing an Experiment Tracker and Model Deployer. The MLflow model registry allows for managing and tracking ML models and artifacts through a user interface. #### Use Cases - Track different model versions during development and deployment. - Deploy models across various environments while monitoring version usage. - Compare model performance over time for data-driven decisions. - Simplify model deployment processes. #### Deployment Steps 1. **Install MLflow Integration**: ```shell zenml integration install mlflow -y ``` 2. **Register MLflow Model Registry Component**: ```shell zenml model-registry register mlflow_model_registry --flavor=mlflow zenml stack register custom_stack -r mlflow_model_registry ... --set ``` **Note**: The MLflow model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version 2.2.1 or higher due to a critical vulnerability in older versions. #### Using the Model Registry - **Register Models in a Pipeline**: ```python from zenml import pipeline from zenml.integrations.mlflow.steps.mlflow_registry import mlflow_register_model_step @pipeline def mlflow_registry_training_pipeline(): model = ... mlflow_register_model_step(model=model, name="tensorflow-mnist-model") ``` **Parameters for `mlflow_register_model_step`**: - `name`: Required model name. - `version`: Model version. - `trained_model_name`: Artifact name in MLflow. - `model_source_uri`: Path to the model. - `description`: Model version description. - `metadata`: Associated metadata. 
- **Register Models via CLI**: ```shell zenml model-registry models register-version Tensorflow-model \ --description="New version with accuracy 98.88%" \ -v 1 \ --model-uri="file:///.../model" \ -m key1 value1 -m key2 value2 \ --zenml-pipeline-name="mlflow_training_pipeline" \ --zenml-step-name="trainer" ``` #### Interacting with Registered Models - **List Registered Models**: ```shell zenml model-registry models list ``` - **List Model Versions**: ```shell zenml model-registry models list-versions tensorflow-mnist-model ``` - **Get Specific Model Version Details**: ```shell zenml model-registry models get-version tensorflow-mnist-model -v 1 ``` - **Delete Registered Models or Versions**: ```shell zenml model-registry models delete REGISTERED_MODEL_NAME zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION ``` For further details, refer to the [ZenML MLflow SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/orchestrators/local-docker.md === ### Local Docker Orchestrator Overview The Local Docker Orchestrator is a built-in orchestrator in ZenML that allows you to run pipelines locally using Docker. #### When to Use - For running pipeline steps in isolated local environments. - For debugging pipeline issues without incurring costs from remote infrastructure. #### Deployment Requirements - Ensure Docker is installed and running. #### Usage Instructions To register and activate the local Docker orchestrator in your stack, use the following commands: ```shell zenml orchestrator register --flavor=local_docker zenml stack register -o ... --set ``` Run your ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### Additional Configuration You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes. Example of specifying CPU count for Windows: ```python from zenml import step, pipeline from zenml.orchestrators.local_docker.local_docker_orchestrator import LocalDockerOrchestratorSettings @step def return_one() -> int: return 1 settings = { "orchestrator": LocalDockerOrchestratorSettings(run_args={"cpu_count": 3}) } @pipeline(settings=settings) def simple_pipeline(): return_one() ``` #### Enabling CUDA for GPU To run steps on GPU, follow the instructions [here](../../how-to/pipeline-development/training-with-gpus/README.md) for CUDA configuration to utilize GPU acceleration effectively. ================================================== === File: docs/book/component-guide/orchestrators/lightning.md === # Lightning AI Orchestrator Summary ## Overview The Lightning AI Orchestrator, part of [Lightning AI Studio](https://lightning.ai/), integrates with ZenML to facilitate the development and deployment of AI applications on Lightning AI's infrastructure. It is designed for remote ZenML deployments and is not suitable for local setups. ## Use Cases - Quick execution of pipelines on GPU instances. - Integration with existing Lightning AI projects. - Simplified deployment and scaling of ML workflows. - Utilization of Lightning AI's optimized infrastructure for ML workloads. 
## Deployment Requirements - A Lightning AI account and credentials. - No additional infrastructure deployment is necessary. ## Functionality - Archives the current ZenML repository and uploads it to Lightning AI Studio. - Uses `lightning-sdk` to create a new studio and run commands via `studio.run()` for environment setup and dependency installation. - Supports both CPU and GPU machine types, specified in `LightningOrchestratorSettings`. - Allows asynchronous pipeline execution with status checks available in ZenML Dashboard or Lightning AI Studio. ## Credentials To use the orchestrator, the following credentials are required: - `LIGHTNING_USER_ID` - `LIGHTNING_API_KEY` - Optional: `LIGHTNING_USERNAME`, `LIGHTNING_TEAMSPACE`, `LIGHTNING_ORG` Credentials can be set as environment variables or during orchestrator registration. ## Installation and Registration Install the Lightning integration: ```shell zenml integration install lightning ``` Register the orchestrator: ```shell zenml orchestrator register lightning_orchestrator \ --flavor=lightning \ --user_id= \ --api_key= \ --username= \ # optional --teamspace= \ # optional --organization= # optional ``` Activate a stack with the orchestrator: ```bash zenml stack register lightning_stack -o lightning_orchestrator ... --set ``` ## Pipeline Configuration Configure the orchestrator at the pipeline level: ```python from zenml.integrations.lightning.flavors.lightning_orchestrator_flavor import LightningOrchestratorSettings lightning_settings = LightningOrchestratorSettings( main_studio_name="my_studio", machine_type="cpu", async_mode=True, custom_commands=["pip install -r requirements.txt"] ) @pipeline(settings={"orchestrator.lightning": lightning_settings}) def my_pipeline(): ... ``` ## Running Pipelines Execute a ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ## Monitoring Monitor running applications through the Lightning AI UI. Retrieve the UI URL for a specific run: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ## Additional Configuration Further customize the orchestrator using `LightningOrchestratorSettings` for various execution environment aspects. Specify machine types for GPU usage: ```python lightning_settings = LightningOrchestratorSettings( machine_type="gpu" # or specific types like `A10G` ) ``` Refer to [Lightning AI documentation](https://lightning.ai/docs/overview/studios/change-gpus) for GPU-enabled machine types and specifications. ================================================== === File: docs/book/component-guide/orchestrators/hyperai.md === ### HyperAI Orchestrator Overview The **HyperAI Orchestrator** is a component of the HyperAI cloud compute platform designed for deploying AI pipelines on HyperAI instances. It is specifically intended for use in remote ZenML deployment scenarios. #### When to Use - For managed pipeline execution. - If you are a HyperAI customer. #### Prerequisites 1. A running HyperAI instance accessible via the internet with SSH key-based access. 2. A recent version of Docker with Docker Compose. 3. The appropriate [NVIDIA Driver](https://www.nvidia.com/en-us/drivers/unix/) installed on the HyperAI instance. 4. The [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) installed and configured (optional for GPU use). 
#### Functionality - Utilizes Docker Compose to create and execute machine learning pipelines. - Each ZenML pipeline step is represented as a service in a generated Docker Compose file. - Ensures upstream steps are completed successfully before downstream steps run. - Can connect to the container registry for Docker image transfers. #### Scheduled Pipelines Supports two scheduling methods: - **Cron Expressions**: For recurring pipeline runs, added as crontab entries. - **Run Once**: For single scheduled runs at a specified time, added as `at` entries. #### Deployment Steps 1. Configure a **HyperAI Service Connector** in ZenML with supported authentication methods: ```shell zenml service-connector register --type=hyperai --auth-method=rsa-key --base64_ssh_key= --hostnames=, --username= ``` 2. Register the orchestrator and activate the stack: ```shell zenml orchestrator register --flavor=hyperai zenml stack register -o ... --set ``` 3. Run ZenML pipelines using: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### GPU Usage To run steps on GPU, follow the instructions for enabling CUDA, which requires additional settings for full acceleration. ================================================== === File: docs/book/component-guide/orchestrators/airflow.md === ### Airflow Orchestrator for ZenML Pipelines ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML features. Each ZenML step runs in a separate Docker container managed by Airflow. #### When to Use Airflow Orchestrator - Proven production-grade orchestrator. - Already using Airflow. - Local pipeline execution. - Willing to maintain Airflow. #### Deployment Options - **Local**: No additional setup required. - **Remote**: Requires a remote ZenML deployment. - Use ZenML GCP Terraform module or managed services like Google Cloud Composer, Amazon MWAA, or Astronomer. - Manual deployment is also an option (refer to Airflow docs). **Required Python Packages for Remote Deployment**: - `pydantic~=2.7.1` - `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes` #### Using the Airflow Orchestrator 1. Install ZenML Airflow integration: ```shell zenml integration install airflow ``` 2. Ensure Docker is installed and running. 3. Register the orchestrator: ```shell zenml orchestrator register --flavor=airflow --local=True zenml stack register -o ... --set ``` **Local Setup**: - Create a virtual environment: ```bash python -m venv airflow_server_environment source airflow_server_environment/bin/activate pip install "apache-airflow==2.4.0" "apache-airflow-providers-docker<3.8.0" "pydantic~=2.7.1" ``` - Set environment variables for Airflow configuration: - `AIRFLOW_HOME`: Default `~/airflow` - `AIRFLOW__CORE__DAGS_FOLDER`: Default `/dags` - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default 30 seconds. **Start Local Airflow Server**: ```bash airflow standalone ``` Access UI at [http://0.0.0.0:8080](http://0.0.0.0:8080). **Run ZenML Pipeline**: ```shell python file_that_runs_a_zenml_pipeline.py ``` Copy the generated `.zip` file to the Airflow DAGs directory. #### Remote Deployment Considerations - Requires a remote ZenML server, Airflow server, remote artifact store, and remote container registry. - Running a pipeline will create a `.zip` file instead of executing it directly.
#### Scheduling Pipelines Schedule runs in the past: ```python from datetime import datetime, timedelta from zenml.pipelines import Schedule scheduled_pipeline = fashion_mnist_pipeline.with_options( schedule=Schedule( start_time=datetime.now() - timedelta(hours=1), end_time=datetime.now() + timedelta(hours=1), interval_second=timedelta(minutes=15), catchup=False, ) ) scheduled_pipeline() ``` #### Airflow UI Access local Airflow UI at [http://localhost:8080](http://localhost:8080). Default credentials: username `admin`, password in `/standalone_admin_password.txt`. #### Additional Configuration Use `AirflowOrchestratorSettings` for further customization. For GPU support, follow specific instructions to enable CUDA. #### Using Different Airflow Operators - **DockerOperator**: Runs on the same machine. - **KubernetesPodOperator**: Runs in a Kubernetes cluster. Specify operator settings: ```python from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings airflow_settings = AirflowOrchestratorSettings( operator="docker", # or "kubernetes_pod" operator_args={} ) ``` #### Custom Operators and DAG Generator To use custom operators, specify the operator class path. For custom DAG generation, provide a module with required classes and constants. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/sagemaker.md === # AWS Sagemaker Orchestrator Summary ## Overview The **ZenML Sagemaker Orchestrator** integrates with [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) to facilitate serverless machine learning workflows on AWS. It enables quick, production-ready pipeline execution with minimal setup. ### When to Use - Already using AWS. - Need a production-grade orchestrator with UI tracking. - Require a managed, serverless solution for pipelines. ## Functionality The orchestrator creates a SageMaker `PipelineStep` for each ZenML pipeline step, which can involve Sagemaker Processing or Training jobs. ## Deployment Requirements 1. **Deploy ZenML to the cloud** (preferably in the same region as Sagemaker). 2. **Connect to the remote ZenML server**. 3. **Enable IAM permissions** for the role/user (must include `AmazonSageMakerFullAccess`). ### Installation Install required integrations: ```shell zenml integration install aws s3 ``` ## Authentication Methods 1. **Service Connector** (recommended): ```shell zenml service-connector register --type aws -i zenml orchestrator register --flavor=sagemaker --execution_role= zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Explicit Authentication**: ```shell zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... zenml stack register -o ... --set ``` 3. **Implicit Authentication** (via AWS config): ```shell zenml orchestrator register --flavor=sagemaker --execution_role= python run.py # Authenticates with `default` profile ``` ## Running Pipelines To execute a ZenML pipeline: ```shell python run.py ``` Output indicates the status of the pipeline run. ## Sagemaker UI Access the Sagemaker Pipelines UI via Sagemaker Studio for detailed logs and execution tracking. 
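The Sagemaker Pipelines UI link for a specific run can also be pulled from the run metadata, following the same pattern used for other orchestrators in this guide (the run name is a placeholder):

```python
from zenml.client import Client

# Look up a pipeline run and print the orchestrator UI URL recorded for it.
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
print(orchestrator_url)
```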
## Debugging If a pipeline fails before starting, check the Sagemaker UI for error messages and logs. Use [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) for detailed log messages. ## Configuration - Default settings apply unless overridden at the pipeline or step level using `SagemakerOrchestratorSettings`. - Example of custom settings: ```python from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import SagemakerOrchestratorSettings sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(instance_type="ml.m5.large", volume_size_in_gb=30) ``` ## S3 Data Access ### Importing Data from S3 ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(input_data_s3_mode="File", input_data_s3_uri="s3://bucket/folder") ``` ### Exporting Data to S3 ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(output_data_s3_mode="EndOfJob", output_data_s3_uri="s3://results-bucket/results") ``` ## Tagging Add tags to pipeline executions and jobs for better resource management: ```python pipeline_settings = SagemakerOrchestratorSettings(pipeline_tags={"project": "my-ml-project"}) ``` ## Scheduling Pipelines Configure schedules using cron expressions or fixed intervals: ```python @pipeline def my_scheduled_pipeline(): pass my_scheduled_pipeline.with_options(schedule=Schedule(cron_expression="0/5 * * * ? *"))() ``` ### IAM Permissions for Scheduling Ensure the IAM role has permissions for EventBridge Scheduler and SageMaker jobs. ## Warm Pools Enable Warm Pools to reduce startup time: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) ``` ## GPU Support Follow specific instructions to enable CUDA for GPU usage. ================================================== === File: docs/book/component-guide/orchestrators/local.md === ### Local Orchestrator Overview The local orchestrator is a built-in component of ZenML that allows you to run pipelines locally without additional setup. #### When to Use - Ideal for beginners starting with ZenML. - Suitable for writing and debugging new pipelines quickly. #### Deployment The local orchestrator is included with ZenML and requires no extra setup. #### Usage To register and use the local orchestrator in your active stack: ```shell zenml orchestrator register --flavor=local zenml stack register -o ... --set ``` Run any ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/kubernetes.md === ### Kubernetes Orchestrator Overview The ZenML `kubernetes` integration allows orchestration and scaling of ML pipelines on Kubernetes clusters without writing Kubernetes code. It operates similarly to Kubeflow by running each pipeline step in separate Kubernetes pods, but uses a master pod for orchestration, making it faster and simpler to set up. ### When to Use Use the Kubernetes orchestrator if: - You need a lightweight solution for running pipelines on Kubernetes. - You prefer not to maintain Kubeflow Pipelines.
- You want to avoid managed solutions like Vertex. ### Deployment Requirements To deploy the Kubernetes orchestrator, you need: - A Kubernetes cluster (check the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for deployment options). - A remote ZenML server connected to the cluster. - ZenML `kubernetes` integration installed: ```shell zenml integration install kubernetes ``` - Docker and `kubectl` installed. ### Usage Instructions 1. **With Service Connector**: If you have a Service Connector configured: ```shell zenml orchestrator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Without Service Connector**: If you don't have a Service Connector: ```shell zenml orchestrator register \ --flavor=kubernetes \ --kubernetes_context= zenml stack register -o ... --set ``` ### Running Pipelines To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` You can view logs and check pod status with: ```shell kubectl get pods -n zenml ``` ### Pod Interaction For debugging, you can interact with pods using labels: ```shell kubectl delete pod -n zenml -l pipeline= ``` ### Additional Configuration The orchestrator defaults to the `zenml` namespace and creates a service account. You can customize: - `kubernetes_namespace` - `service_account_name` - Pod settings (e.g., node selectors, tolerations, resources). Example configuration: ```python from zenml.integrations.kubernetes.flavors.kubernetes_orchestrator_flavor import KubernetesOrchestratorSettings kubernetes_settings = KubernetesOrchestratorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": { "requests": {"cpu": "2", "memory": "4Gi"}, "limits": {"cpu": "4", "memory": "8Gi"} } }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) @pipeline(settings={"orchestrator": kubernetes_settings}) def my_kubernetes_pipeline(): ... ``` ### Step-Level Settings You can define settings at the step level to override pipeline settings: ```python @step(settings={"orchestrator": k8s_settings}) def train_model(data: dict) -> None: ... ``` ### GPU Configuration To run steps on GPU, follow specific instructions to enable CUDA. For more details on configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/orchestrators.md === # Orchestrators in ZenML ## Overview The orchestrator is a critical component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that each step runs only when all necessary inputs are available. ### Key Features - **Artifact Storage**: Stores all artifacts produced during pipeline runs. - **Environment Setup**: Prepares the environment for executing pipeline steps. ### Orchestrator Flavors ZenML includes a default `local` orchestrator and supports various integrations: | Orchestrator | Flavor | Integration | Notes | |--------------------------------|-----------------|---------------|-----------------------------------------| | LocalOrchestrator | `local` | _built-in_ | Runs pipelines locally. | | LocalDockerOrchestrator | `local_docker` | _built-in_ | Runs pipelines locally using Docker. 
| | KubernetesOrchestrator | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes clusters. | | KubeflowOrchestrator | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | | VertexOrchestrator | `vertex` | `gcp` | Runs pipelines in Vertex AI. | | SagemakerOrchestrator | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | | AzureMLOrchestrator | `azureml` | `azure` | Runs pipelines in AzureML. | | TektonOrchestrator | `tekton` | `tekton` | Runs pipelines using Tekton. | | AirflowOrchestrator | `airflow` | `airflow` | Runs pipelines using Airflow. | | SkypilotAWSOrchestrator | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. | | SkypilotGCPOrchestrator | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. | | SkypilotAzureOrchestrator | `vm_azure` | `skypilot[azure]` | Runs pipelines in Azure VMs using SkyPilot. | | HyperAIOrchestrator | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. | | Custom Implementation | _custom_ | | Extend the orchestrator abstraction. | To view available orchestrator flavors, use: ```shell zenml orchestrator flavor list ``` ### Usage You do not need to interact directly with the orchestrator in your code. Simply ensure the orchestrator is part of your active ZenML stack and run your pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Inspecting Runs To get the URL for the orchestrator UI of a specific pipeline run: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Resource Specification Specify hardware requirements for pipeline steps as needed. Refer to the documentation for details on runtime configuration and step operators. ================================================== === File: docs/book/component-guide/orchestrators/databricks.md === ### Databricks Orchestrator Overview **Databricks** is a unified data analytics platform that integrates data warehouses and lakes, optimized for big data processing and machine learning. The **Databricks orchestrator** is part of the ZenML integration, enabling the execution of ML pipelines on Databricks, leveraging its distributed computing capabilities. #### When to Use Use the Databricks orchestrator if: - You are using Databricks for data and ML workloads. - You want to utilize its distributed computing for ML pipelines. - You seek a managed solution that integrates with Databricks services. - You want to benefit from its optimization for big data processing. #### Prerequisites To use the Databricks orchestrator: - An active Databricks workspace (AWS, Azure, GCP). - A Databricks account or service account with permissions to create and run jobs. #### How It Works 1. **Wheel Package Creation**: ZenML creates a Python wheel package containing your pipeline code and dependencies. 2. **Job Definition**: ZenML uses the Databricks SDK to define a job, specifying pipeline steps and cluster settings (e.g., Spark version, number of workers). 3. **Execution**: The job retrieves the wheel package and runs the pipeline, ensuring steps execute in the correct order. 4. **Monitoring**: ZenML retrieves logs and job status post-execution. #### Usage Steps 1. **Install Integration**: ```shell zenml integration install databricks ``` 2. 
**Register Orchestrator**: ```shell zenml orchestrator register databricks_orchestrator --flavor=databricks --host="https://xxxxx.x.azuredatabricks.net" --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` 3. **Add to Stack**: ```shell zenml stack register databricks_stack -o databricks_orchestrator ... --set ``` 4. **Run Pipeline**: ```shell python run.py ``` #### Databricks UI Access pipeline run details via the Databricks UI. Retrieve the UI URL with: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` #### Scheduling Pipelines To schedule a pipeline: ```python from zenml.config.schedule import Schedule pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` - Only `cron_expression` is supported, using Java Timezone IDs. #### Additional Configuration Customize the Databricks orchestrator with `DatabricksOrchestratorSettings`: ```python from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import DatabricksOrchestratorSettings databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-scala2.12", num_workers="3", node_type_id="Standard_D4s_v5", autoscale=(2, 3), schedule_timezone="America/Los_Angeles" ) ``` Apply settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": databricks_settings}) def my_pipeline(): ... ``` #### GPU Support Enable GPU support by using a GPU-enabled Spark version and node type: ```python databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-gpu-ml-scala2.12", node_type_id="Standard_NC24ads_A100_v4", autoscale=(1, 2), ) ``` For GPU acceleration, follow additional instructions to enable CUDA. #### References - For a comprehensive list of attributes, check the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.flavors.databricks_orchestrator_flavor.DatabricksOrchestratorSettings). - For more on configuration files, refer to [this page](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). ================================================== === File: docs/book/component-guide/orchestrators/skypilot-vm.md === ### SkyPilot VM Orchestrator Overview The **SkyPilot VM Orchestrator** is a ZenML integration for managing virtual machines (VMs) across cloud providers supported by the SkyPilot framework. It simplifies running machine learning workloads in the cloud, focusing on cost efficiency, GPU availability, and managed execution. This component is intended for remote ZenML deployments only. #### When to Use Use the SkyPilot VM Orchestrator if: - You want to leverage cost-effective spot VMs and automatically select the cheapest options. - High GPU availability across multiple cloud zones/regions is required. - You prefer not to maintain Kubernetes or pay for managed solutions. #### Functionality - **Provisioning and Scaling**: Automatically launches VMs for pipelines, supporting both on-demand and managed spot VMs. - **Cost Optimization**: Includes an optimizer that selects the cheapest VM options. - **Autostop Feature**: Cleans up idle clusters to avoid unnecessary costs. - **Docker Support**: Pipeline runs are executed in Docker containers on provisioned VMs. Enable GPU support with `docker_run_args=["--gpus=all"]`. #### Deployment No special steps are needed for deployment. 
Ensure you have permissions to provision VMs on your chosen cloud provider and configure the orchestrator using service connectors. Supported platforms include AWS, GCP, and Azure. #### Usage Steps 1. **Install SkyPilot Integration**: - **AWS**: ```shell pip install "zenml[connectors-aws]" zenml integration install aws skypilot_aws ``` - **GCP**: ```shell pip install "zenml[connectors-gcp]" zenml integration install gcp skypilot_gcp ``` - **Azure**: ```shell pip install "zenml[connectors-azure]" zenml integration install azure skypilot_azure ``` 2. **Configure Service Connector**: Register a service connector with appropriate credentials for your cloud provider. 3. **Register Orchestrator**: ```shell zenml orchestrator register --flavor zenml orchestrator connect --connector ``` #### Configuration Options You can configure various settings for the orchestrator, including: - `instance_type`, `cpus`, `memory`, `accelerators`, `region`, `zone`, `image_id`, `disk_size`, `disk_tier`, `cluster_name`, `idle_minutes_to_autostop`, and `docker_run_args`. **Example for AWS**: ```python from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region="us-west-1", cluster_name="my_cluster", idle_minutes_to_autostop=60, docker_run_args=["--gpus=all"] ) @pipeline(settings={"orchestrator": skypilot_settings}) def my_pipeline(): # Pipeline implementation pass ``` #### Step-Specific Resource Configuration You can configure resources for each pipeline step individually. If no specific settings are provided, the orchestrator defaults to the general settings. To disable step-based settings: ```shell zenml orchestrator update --disable_step_based_settings=True ``` **Example for Step-Specific Settings**: ```python @step(settings={"orchestrator": high_resource_settings}) def my_resource_intensive_step(): # Step implementation pass ``` ### Important Notes - The orchestrator does not support scheduling pipeline runs. - Some features may not be supported on specific cloud providers (e.g., job recovery, disk tier). - Ensure to manually tear down clusters if using Lambda Labs, as it does not support automatic teardown. For further details, refer to the [SkyPilot documentation](https://skypilot.readthedocs.io/en/latest/index.html) and ZenML SDK documentation. ================================================== === File: docs/book/component-guide/orchestrators/azureml.md === # AzureML Orchestrator Summary ## Overview AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire machine learning lifecycle, from data preparation to deployment and monitoring. ## Use Cases Use AzureML if you: - Are using Azure. - Need a production-grade orchestrator. - Want a UI to track pipeline runs. - Prefer a managed solution for running pipelines. ## Implementation The ZenML AzureML orchestrator uses the AzureML Python SDK v2 to build machine learning pipelines. Each ZenML step translates to an AzureML CommandComponent. ## Deployment To deploy the AzureML orchestrator: 1. Deploy ZenML to the cloud (preferably in the same region as AzureML). 2. Connect to the remote ZenML server. ### Quick Deployment Options - Use the in-browser stack deployment wizard or the ZenML Azure Terraform module for a streamlined setup. 
## Prerequisites To use the AzureML orchestrator: - Install the ZenML `azure` integration: ```shell zenml integration install azure ``` - Ensure Docker is installed and running. - Set up a remote artifact store and container registry. - Create an Azure resource group with an AzureML workspace. ### Authentication Methods 1. **Default Authentication**: Combines Azure hosting and local development credentials. 2. **Service Principal Authentication (recommended)**: Requires creating a service principal on Azure and registering a ZenML Azure Service Connector: ```bash zenml service-connector register --type azure -i zenml orchestrator connect -c ``` ## Docker Integration ZenML builds a Docker image for each pipeline run, named `/zenml:`. ## AzureML UI The Azure Machine Learning studio allows inspection, management, and debugging of pipelines. Double-clicking steps opens their configuration and execution logs. ## Configuration Use `AzureMLOrchestratorSettings` to configure compute resources for pipeline execution. Three modes are available: 1. **Serverless Compute (Default)**: ```python azureml_settings = AzureMLOrchestratorSettings(mode="serverless") ``` 2. **Compute Instance**: ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-instance", compute_name="my-gpu-instance", size="Standard_NC6s_v3", idle_time_before_shutdown_minutes=20, ) ``` 3. **Compute Cluster**: ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-cluster", compute_name="my-gpu-cluster", size="Standard_NC6s_v3", tier="Dedicated", min_instances=2, max_instances=10, idle_time_before_scaledown_down=60, ) ``` ## Scheduling Pipelines AzureML supports scheduling pipelines using JobSchedules with cron expressions or intervals: ```python @pipeline def my_pipeline(): ... my_pipeline = my_pipeline.with_options( schedule=Schedule(cron_expression="*/5 * * * *") ) my_pipeline() ``` After scheduling, manage the lifecycle of the schedule through the Azure UI. ================================================== === File: docs/book/component-guide/orchestrators/tekton.md === # Tekton Orchestrator Overview **Tekton** is an open-source framework for CI/CD systems, enabling developers to build, test, and deploy across various environments. It is designed for use within a **remote ZenML deployment** scenario. ## When to Use Tekton Use the Tekton orchestrator if: - You need a production-grade orchestrator. - You want a UI for tracking pipeline runs. - You are comfortable with Kubernetes setup and maintenance. - You can deploy and maintain Tekton Pipelines on your cluster. ## Deployment Steps 1. **Set up a Kubernetes Cluster**: Choose your cloud provider (AWS, GCP, Azure) and follow the respective setup instructions. 2. **Install Tekton Pipelines**: After configuring your Kubernetes cluster, install Tekton Pipelines. ### AWS Example - Ensure you have an EKS cluster and AWS CLI set up. - Configure `kubectl`: ```bash aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` - Install Tekton Pipelines. ### GCP Example - Ensure you have a GKE cluster and Google Cloud CLI set up. - Configure `kubectl`: ```bash gcloud container clusters get-credentials CLUSTER_NAME ``` - Install Tekton Pipelines. ### Azure Example - Ensure you have an AKS cluster and Azure CLI set up. - Configure `kubectl`: ```bash az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` - Install Tekton Pipelines. **Note**: Ensure Tekton Pipelines version is >=0.38.3. ## Usage Steps 1. 
Install the ZenML `tekton` integration: ```bash zenml integration install tekton -y ``` 2. Ensure Docker is installed and running. 3. Deploy Tekton pipelines on a remote cluster. 4. Obtain the Kubernetes context name: ```bash kubectl config get-contexts ``` ### Registering the Orchestrator - **With Service Connector**: ```bash zenml orchestrator register --flavor tekton zenml orchestrator connect --connector zenml stack register -o ... --set ``` - **Without Service Connector**: ```bash zenml orchestrator register --flavor=tekton --kubernetes_context= zenml stack register -o ... --set ``` ### Running a Pipeline Run any ZenML pipeline using: ```bash python file_that_runs_a_zenml_pipeline.py ``` ## Tekton UI Access the Tekton UI for pipeline run details: ```bash kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' ``` ## Additional Configuration Configure `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: ```python from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings tekton_settings = TektonOrchestratorSettings( pod_settings={ "affinity": {...}, "tolerations": [...] } ) ``` Specify resource settings for pipeline steps: ```python resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` Apply settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_pipeline(): ... @step(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_step(): ... ``` ## GPU Configuration For GPU usage, follow the instructions to enable CUDA for full acceleration. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/) for configurable attributes and settings. ================================================== === File: docs/book/component-guide/orchestrators/kubeflow.md === ### Kubeflow Orchestrator Overview The Kubeflow orchestrator is a ZenML integration that utilizes Kubeflow Pipelines to run pipelines. It is designed for remote ZenML deployments and is not recommended for local setups. ### When to Use Use the Kubeflow orchestrator if: - You need a production-grade orchestrator. - You want a UI to track pipeline runs. - You are comfortable with Kubernetes setup and maintenance. - You can deploy and maintain Kubeflow Pipelines. ### Deployment Steps To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. The process varies by cloud provider: #### AWS 1. Set up an EKS cluster. 2. Install AWS CLI and configure it: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. (Optional) Set up an AWS Service Connector for secure access. #### GCP 1. Set up a GKE cluster. 2. Install Google Cloud CLI and configure it: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. (Optional) Set up a GCP Service Connector. #### Azure 1. Set up an AKS cluster. 2. Install Azure CLI and configure it: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. **Note**: Adjust `containerRuntimeExecutor` in the workflow controller's ConfigMap for compatibility with `containerd`. #### Other Kubernetes 1. Set up a Kubernetes cluster. 2. Install `kubectl` and configure it. 3. Install Kubeflow Pipelines. 4. 
(Optional) Set up a Kubernetes Service Connector. ### Usage Requirements To use the Kubeflow orchestrator: - A Kubernetes cluster with Kubeflow Pipelines. - A remote ZenML server. - Install the ZenML `kubeflow` integration: ```shell zenml integration install kubeflow ``` - Docker installed (unless using a remote Image Builder). - (Optional) `kubectl` installed. ### Registering the Orchestrator You can register the orchestrator in two ways: 1. **With a Service Connector**: ```shell zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator register --flavor kubeflow --connector --resource-id zenml stack register -o -a -c ``` 2. **Without a Service Connector**: ```shell zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack register -o -a -c ``` ### Running Pipelines To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Kubeflow UI Access the Kubeflow UI for pipeline run details: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` ### Additional Configuration You can customize the Kubeflow orchestrator using `KubeflowOrchestratorSettings`: - `client_args`: Arguments for the KFP client. - `user_namespace`: Namespace for experiments and runs. - `pod_settings`: Node selectors, affinity, and tolerations. Example: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={ "affinity": {...}, "tolerations": [...] } ) ``` ### Multi-Tenancy Deployment For multi-tenancy, include the `kubeflow_hostname` parameter when registering: ```shell zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Use the following settings for authentication: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="{{kubeflow_secret.username}}", client_password="{{kubeflow_secret.password}}", user_namespace="namespace_name" ) ``` ### Using Secrets Create secrets for sensitive information: ```shell zenml secret create kubeflow_secret --username=admin --password=abc123 ``` ### Important Notes - Ensure the Kubernetes service is named `ml-pipeline`. - If deployments are not running, increase the number of nodes. - For GPU support, follow specific instructions for CUDA. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/vertex.md === # Google Cloud Vertex AI Orchestrator ## Overview Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP) designed for running production-ready, repeatable workflows with minimal setup. It is intended for use within a remote ZenML deployment. ## When to Use Use the Vertex orchestrator if: - You are using GCP. - You need a production-grade orchestrator with a UI for tracking pipeline runs. - You prefer a managed, serverless solution for your pipelines. ## Deployment To deploy the Vertex AI orchestrator: 1. Deploy ZenML to the cloud, ideally in the same GCP project as the Vertex infrastructure. 2. Ensure connection to the remote ZenML server. 3. Enable relevant Vertex APIs on the GCP project. 
## Requirements To use the Vertex orchestrator, you need: - ZenML `gcp` integration installed: ```shell zenml integration install gcp ``` - Docker installed and running. - A remote artifact store and container registry. - GCP credentials with appropriate permissions. - GCP project ID and location for running pipelines. ### GCP Credentials and Permissions You need a GCP user account or service accounts with the following options for authentication: 1. Use the `gcloud` CLI for local authentication. 2. Configure the orchestrator with a service account key file. 3. (Recommended) Use a GCP Service Connector for better security and management. ### Vertex AI Pipeline Components 1. **ZenML Client Environment**: Where ZenML code runs, requiring permissions to create jobs in Vertex Pipelines. 2. **Vertex AI Pipeline Environment**: Runs pipeline steps using a workload service account, which can be configured or default to the Compute Engine service account. ### Configuration Use-Cases 1. **Local `gcloud` CLI**: ```shell zenml orchestrator register <ORCHESTRATOR_NAME> \ --flavor=vertex \ --project=<PROJECT_ID> \ --location=<GCP_LOCATION> \ --synchronous=true ``` 2. **GCP Service Connector with Single Service Account**: ```shell zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic zenml orchestrator register <ORCHESTRATOR_NAME> \ --flavor=vertex \ --location=<GCP_LOCATION> \ --synchronous=true \ --workload_service_account=<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME> ``` 3. **GCP Service Connector with Different Service Accounts**: - Set up multiple service accounts with minimum required permissions. - Register the service connector and orchestrator similarly as above. ### Configuring the Stack To register and activate a stack with the orchestrator: ```shell zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` ### Running Pipelines Run any ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Vertex UI Access pipeline run details and logs via the Vertex UI. Get the URL programmatically: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` ### Scheduling Pipelines Schedule pipelines using: ```python from datetime import datetime, timedelta from zenml import pipeline from zenml.config.schedule import Schedule @pipeline def first_pipeline(): ... first_pipeline = first_pipeline.with_options( schedule=Schedule(cron_expression="*/5 * * * *") ) first_pipeline() @pipeline def second_pipeline(): ... second_pipeline = second_pipeline.with_options( schedule=Schedule( cron_expression="0 * * * *", start_time=datetime.now() + timedelta(days=1), end_time=datetime.now() + timedelta(days=3), ) ) second_pipeline() ``` **Note**: The orchestrator only supports `cron_expression`, `start_time`, and `end_time` parameters.
### Additional Configuration Configure labels and resource settings: ```python from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings from zenml.config import ResourceSettings vertex_settings = VertexOrchestratorSettings(labels={"key": "value"}) resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` For GPU usage: ```python vertex_settings = VertexOrchestratorSettings( pod_settings={"node_selectors": {"cloud.google.com/gke-accelerator": "NVIDIA_TESLA_A100"}} ) resource_settings = ResourceSettings(gpu_count=1) ``` ### Enabling CUDA for GPU Follow specific instructions to enable CUDA for GPU-backed hardware to ensure full acceleration. For more detailed configurations and attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.orchestrators.vertex_orchestrator.VertexOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/custom.md === ### Develop a Custom Orchestrator #### Overview To develop a custom orchestrator in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Implementation ZenML provides the `BaseOrchestrator` class, which abstracts ZenML-specific details and offers a simplified interface: ```python from abc import ABC, abstractmethod from typing import Any, Dict, Type from zenml.models import PipelineDeploymentResponseModel from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig, Stack, Flavor class BaseOrchestratorConfig(StackComponentConfig): """Base class for all ZenML orchestrator configurations.""" class BaseOrchestrator(StackComponent, ABC): @abstractmethod def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> Any: """Prepares and runs the pipeline or returns an intermediate representation.""" @abstractmethod def get_orchestrator_run_id(self) -> str: """Returns the unique run ID for the active orchestrator run.""" class BaseOrchestratorFlavor(Flavor): @property @abstractmethod def name(self): """Returns the name of the flavor.""" @property def type(self) -> StackComponentType: return StackComponentType.ORCHESTRATOR @property def config_class(self) -> Type[BaseOrchestratorConfig]: return BaseOrchestratorConfig @property @abstractmethod def implementation_class(self) -> Type["BaseOrchestrator"]: """Implementation class for this flavor.""" ``` #### Creating a Custom Orchestrator 1. Inherit from `BaseOrchestrator` and implement `prepare_or_run_pipeline(...)` and `get_orchestrator_run_id()`. 2. If needed, create a config class inheriting from `BaseOrchestratorConfig`. 3. Combine both by inheriting from `BaseOrchestratorFlavor`, providing a `name`. Register your orchestrator flavor via CLI: ```shell zenml orchestrator flavor register ``` Example registration: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository. - The `CustomOrchestratorFlavor` is used during flavor creation, while `CustomOrchestratorConfig` is used during registration. #### Implementation Guide 1. **Create your orchestrator class:** Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker. 2. 
**Implement `prepare_or_run_pipeline(...)`:** Convert the pipeline to a format understood by your orchestration tool and run it. 3. **Implement `get_orchestrator_run_id()`:** Return a unique ID for each pipeline run. #### Optional Features - **Scheduling:** Handle `deployment.schedule` if supported, otherwise log a warning or raise an exception. - **Resource Specification:** Manage resources like CPUs or GPUs using `step.config.resource_settings`. #### Code Sample ```python from typing import Dict from zenml.entrypoints import StepEntrypointConfiguration from zenml.models import PipelineDeploymentResponseModel from zenml.orchestrators import ContainerizedOrchestrator from zenml.stack import Stack class MyOrchestrator(ContainerizedOrchestrator): def get_orchestrator_run_id(self) -> str: ... def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> None: if deployment.schedule: ... for step_name, step in deployment.step_configurations.items(): image = self.get_image(deployment, step_name) command = StepEntrypointConfiguration.get_entrypoint_command() arguments = StepEntrypointConfiguration.get_entrypoint_arguments(step_name, deployment.id) ... ``` #### Enabling CUDA for GPU Hardware To run steps on a GPU, follow the [instructions for GPU training](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for acceleration. ================================================== === File: docs/book/how-to/debug-and-solve-issues.md === # Debugging ZenML Issues: A Quick Guide This guide provides best practices for debugging common issues with ZenML and obtaining help. ### When to Seek Help Before asking for assistance, check the following: - Search Slack, GitHub issues, and the ZenML documentation. - Review the [common errors](debug-and-solve-issues.md#most-common-errors) section. - Analyze [additional logs](debug-and-solve-issues.md#41-additional-logs) and [client/server logs](debug-and-solve-issues.md#client-and-server-logs). If you still need help, post your question on [Slack](https://zenml.io/slack). ### Posting on Slack When posting, include: 1. **System Information**: Run the command: ```shell zenml info -a -s ``` For specific packages, use: ```shell zenml info -p ``` 2. **What Happened**: Describe your goal, expectations, and actual results. 3. **Reproduction Steps**: Provide a step-by-step guide to reproduce the error. 4. **Relevant Log Output**: Attach relevant logs and error tracebacks. Include outputs from: ```shell zenml status zenml stack describe ``` ### Additional Logs If default logs are insufficient, increase verbosity by setting: ```shell export ZENML_LOGGING_VERBOSITY=DEBUG ``` Refer to documentation for setting environment variables on different OS. ### Client and Server Logs To view server logs, run: ```shell zenml logs ``` ### Common Errors 1. **Error initializing rest store**: ```bash RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237': ... ``` Solution: Re-run `zenml login --local` after each machine restart. 2. **Column 'step_configuration' cannot be null**: ```bash sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null") ``` Solution: Ensure step configurations do not exceed limits. 3. 
**'NoneType' object has no attribute 'name'**: ```shell AttributeError: 'NoneType' object has no attribute 'name' ``` Solution: Register an experiment tracker: ```shell zenml experiment-tracker register mlflow_tracker --flavor=mlflow zenml stack update -e mlflow_tracker ``` This guide aims to streamline the debugging process and enhance communication when seeking help. ================================================== === File: docs/book/how-to/project-setup-and-management/interact-with-secrets.md === # ZenML Secrets Documentation Summary ## Overview of ZenML Secrets ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. ## Creating Secrets ### CLI Method To create a secret named `<SECRET_NAME>` with key-value pairs: ```shell zenml secret create <SECRET_NAME> --<KEY_1>=<VALUE_1> --<KEY_2>=<VALUE_2> ``` Alternatively, use JSON or YAML format: ```shell zenml secret create <SECRET_NAME> --values='{"key1":"value1","key2":"value2"}' ``` For interactive creation: ```shell zenml secret create <SECRET_NAME> -i ``` For large values or special characters, read from a file: ```bash zenml secret create <SECRET_NAME> --key=@path/to/file.txt ``` You can also list, update, and delete secrets using the CLI. Refer to the full CLI guide [here](https://sdkdocs.zenml.io/latest/cli/#zenml.cli--secrets-management). ### Python SDK Method Using the ZenML client API: ```python from zenml.client import Client client = Client() client.create_secret( name="my_secret", values={"username": "admin", "password": "abc123"} ) ``` Other methods include `get_secret`, `update_secret`, `list_secrets`, and `delete_secret`. Full API reference [here](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/). ## Scoping Secrets Secrets can be scoped to a user, making them accessible only to that user. By default, secrets are scoped to the active user. To create a user-scoped secret: ```shell zenml secret create <SECRET_NAME> --scope user --<KEY>=<VALUE> ``` ## Accessing Secrets ### Secret References To configure stack components with sensitive information, use secret references in the format `{{<SECRET_NAME>.<SECRET_KEY>}}`. For example: ```shell zenml secret create mlflow_secret --username=admin --password=abc123 zenml experiment-tracker register mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ``` ZenML validates the existence of secrets and keys before running a pipeline. You can control validation with the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: - `NONE`: Disables validation. - `SECRET_EXISTS`: Validates only the existence of secrets. - `SECRET_AND_KEY_EXISTS`: Validates both (default). ### Fetching Secret Values in Steps To access secrets in steps using the ZenML `Client` API: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: secret = Client().get_secret("<SECRET_NAME>") authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` This summary provides essential information on creating, scoping, and accessing ZenML secrets, ensuring secure management of sensitive data in your workflows. ================================================== === File: docs/book/how-to/project-setup-and-management/README.md === # Project Setup and Management This section outlines the essential steps for setting up and managing ZenML projects. ## Key Steps: 1. **Installation**: - Install ZenML using pip: ```bash pip install zenml ``` 2. **Creating a Project**: - Initialize a new ZenML project: ```bash zenml init ``` 3.
**Configuration**: - Configure your project settings, including specifying the orchestrator, artifact store, and metadata store. 4. **Version Control**: - Use Git for version control to manage project changes and collaborate effectively. 5. **Pipeline Management**: - Create and manage pipelines using the ZenML pipeline decorator: ```python @pipeline def my_pipeline(): ... ``` 6. **Running Pipelines**: - Execute pipelines through the command line or programmatically. 7. **Monitoring and Logging**: - Implement logging for tracking pipeline execution and debugging. 8. **Documentation**: - Maintain comprehensive documentation for project structure, dependencies, and usage guidelines. By following these steps, users can effectively set up and manage their ZenML projects, ensuring a streamlined workflow and collaboration. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/access-management.md === # Access Management and Roles in ZenML This guide outlines the management of user roles and responsibilities in ZenML, emphasizing the importance of access management for security and efficiency. ## Typical Roles in an ML Project - **Data Scientists**: Develop and run pipelines. - **MLOps Platform Engineers**: Manage infrastructure and stack components. - **Project Owners**: Oversee ZenML deployment and user access. Roles may vary in your project, but responsibilities remain similar. ### Role Creation You can create roles in ZenML Pro with specific permissions and assign them to Users or Teams. [Sign up for a free trial](https://cloud.zenml.io/). ## Service Connectors Service connectors integrate cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors due to their infrastructure expertise. ### Example Permissions - **Data Scientist Role**: Can use connectors to create stack components and run pipelines but cannot create, update, or delete connectors or access their credentials. - **MLOps Platform Engineer Role**: Can create, update, delete connectors, and read their secret values. ### RBAC Features RBAC is available in ZenML Pro. Learn more about roles [here](../../../getting-started/zenml-pro/roles.md). ## Server Upgrade Responsibilities - **Decision**: Typically made by Project Owners after team consultations. - **Execution**: MLOps Platform Engineers are responsible for performing upgrades, ensuring data backup, and minimizing service disruption. Read best practices for upgrading in the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md). ## Pipeline Migration and Maintenance - **Ownership**: Data Scientists own pipeline code; MLOps Engineers ensure compatibility with new ZenML versions. - **Testing**: Conduct testing in a safe environment to avoid impacting existing workflows. Refer to the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md) for more details. ## Best Practices for Access Management - **Regular Audits**: Periodically review user access and permissions. - **RBAC**: Implement RBAC for streamlined permission management. - **Least Privilege**: Grant minimal necessary permissions. - **Documentation**: Maintain clear records of roles and access policies. RBAC and permission assignment are exclusive to ZenML Pro users. By adhering to these guidelines, you can maintain a secure and collaborative ZenML environment. 
================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md === # Shared Libraries and Logic for Teams ## Overview This guide outlines how teams can share code and libraries using ZenML to enhance collaboration, standardization, and robustness across projects. It covers what can be shared and how to distribute shared components. ## What Can Be Shared ZenML supports several custom components for sharing: ### Custom Flavors 1. Create a custom flavor in a shared repository. 2. Implement the component as per ZenML documentation. 3. Register the component using the ZenML CLI: ```bash zenml artifact-store flavor register ``` ### Custom Steps Custom steps can be created in a separate repository and referenced like standard Python modules. ### Custom Materializers 1. Create the materializer in a shared repository. 2. Implement it as described in ZenML documentation. 3. Team members can import and use it in their projects. ## How to Distribute Shared Components ### Shared Private Wheels This method packages Python code for internal distribution. #### Benefits - Easy installation via pip - Simplified version and dependency management - Privacy with internal PyPI hosting #### Setup Steps 1. Create a private PyPI server (e.g., AWS CodeArtifact). 2. Build code into wheel format. 3. Upload the wheel to the private PyPI server. 4. Configure pip to include the private server. 5. Install packages using pip. ### Using Shared Libraries with `DockerSettings` When using remote orchestrators, specify shared libraries in the `Dockerfile` using `DockerSettings`. #### Example: Installing Shared Libraries Using a list of requirements: ```python import os from zenml.config import DockerSettings from zenml import pipeline docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` Using a requirements file: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` The `requirements.txt` should include: ``` --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ my-simple-package==0.1.0 ``` ## Best Practices - **Version Control**: Use Git for shared code repositories to facilitate collaboration. - **Access Controls**: Implement security measures for private PyPI servers. - **Documentation**: Maintain clear and comprehensive documentation for shared components. - **Regular Updates**: Keep shared libraries updated and communicate changes to the team. - **Continuous Integration**: Set up CI for shared libraries to ensure quality and compatibility. By following these guidelines, teams can effectively share code and libraries, enhancing collaboration and accelerating development within the ZenML framework. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md === # Organizing Stacks, Pipelines, Models, and Artifacts in ZenML ## Overview ZenML's architecture consists of **Stacks**, **Pipelines**, **Models**, and **Artifacts**, which are essential for organizing your ML projects effectively. 
### Key Concepts: - **Stacks**: Configuration of tools and infrastructure for running pipelines, including components like orchestrators and artifact stores. Stacks enable consistent environments across local, staging, and production setups. - **Pipelines**: Sequences of tasks in the ML workflow, automating processes like data preparation and model training. Modular pipelines facilitate independent execution and easier management. - **Models**: Collections of related pipelines, artifacts, and metadata, serving as a workspace for specific projects. Models help transfer data between pipelines. - **Artifacts**: Outputs from pipeline steps that are tracked and reused, ensuring traceability and organization. ## Stacks - A stack is a reusable execution environment for multiple pipelines. - Benefits include reduced configuration overhead, consistent environments, and minimized error risks. - Learn more in the [Managing Stacks and Components](../../infrastructure-deployment/stack-deployment/README.md) guide. ## Organizing Pipelines, Models, and Artifacts ### Pipelines - Separate pipelines for different tasks (e.g., training vs. inference) enhance modularity. - Benefits include independent execution, easier code management, and better organization of runs. ### Models - Use Models to connect pipelines and facilitate data transfer. - The Model Control Plane manages model versions and stages. ### Artifacts - Artifacts should be named clearly for easy identification. - Each pipeline run creates a new version of an artifact, ensuring a clear history. ## Example Workflow 1. **Team Setup**: Bob and Alice create three pipelines: feature engineering, model training, and inference. 2. **Local Development**: They use a shared `default` stack for local testing. 3. **Artifact Management**: Bob’s training pipeline produces model artifacts that Alice’s inference pipeline uses. 4. **Model Control**: Bob tracks model versions with the Model Control Plane, promoting the best version for Alice's use. ## Rules of Thumb ### Models - One Model per use-case. - Group related pipelines and artifacts. - Use the Model Control Plane for version management. ### Stacks - Separate stacks for different environments. - Share production/staging stacks for consistency. - Keep local stacks simple. ### Naming and Organization - Consistent naming conventions. - Use tags for resource organization. - Document configurations and dependencies. - Maintain modular and reusable pipeline code. Following these guidelines will help maintain an organized and scalable MLOps workflow in ZenML. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md === It seems that the documentation text you intended to provide is missing. Please share the text you would like me to summarize, and I'll be happy to assist you! ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md === ### Creating Your Own ZenML Template Creating a ZenML template standardizes and shares ML workflows across projects. ZenML utilizes [Copier](https://copier.readthedocs.io/en/stable/) for template management, allowing project generation from templates. #### Steps to Create a ZenML Template: 1. **Create a Repository**: Store your template's code and configuration files in a new repository. 2. 
**Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a base to define your ML steps and pipelines. 3. **Create `copier.yml`**: This file specifies template parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. 4. **Test Your Template**: Use the following command to generate a new project from your template: ```bash copier copy https://github.com/your-username/your-template.git your-project ``` 5. **Use Your Template with ZenML**: Initialize your ZenML project with: ```bash zenml init --template https://github.com/your-username/your-template.git ``` For a specific version, use: ```bash zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0 ``` #### Additional Notes: - Keep your template updated with best practices. - For practical examples, install the `e2e_batch` template using: ```bash mkdir e2e_batch cd e2e_batch zenml init --template e2e_batch --template-with-defaults ``` This process enables you to quickly set up new ML projects using your custom ZenML template. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md === ### ZenML Project Templates Overview ZenML provides project templates to help users quickly understand the framework and build ML pipelines. These templates cover major use cases and include a simple CLI. #### Available Project Templates | Project Template [Short Name] | Tags | Description | |-------------------------------|------|-------------| | [Starter Template](https://github.com/zenml-io/template-starter) [**starter**] | `basic`, `scikit-learn` | Basic ML components for starting with ZenML: parameterized steps, model training pipeline, flexible configuration, and a simple CLI, using scikit-learn. | | [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [**e2e_batch**] | `etl`, `hp-tuning`, `model-promotion`, `drift-detection`, `batch-prediction`, `scikit-learn` | Two pipelines covering data loading, preprocessing, hyperparameter tuning, model training, evaluation, promotion to production, data drift detection, and batch inference. | | [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [**nlp**] | `nlp`, `hp-tuning`, `model-promotion`, `training`, `pytorch`, `gradio`, `huggingface` | Simple NLP pipeline for tokenization, training, hyperparameter tuning, evaluation, and deployment of BERT or GPT-2 models, with local testing using Gradio. | #### Collaboration Invitation ZenML invites users to share their projects for potential inclusion as templates. Interested users can join the [ZenML Slack](https://zenml.io/slack/) for collaboration. #### Using a Project Template To use the templates, install ZenML with the templates extras: ```bash pip install zenml[templates] ``` **Note:** These templates differ from 'Run Templates' used for triggering pipelines. More information on Run Templates can be found [here](https://docs.zenml.io/how-to/trigger-pipelines). 
To generate a project from a template, use: ```bash zenml init --template # Example: zenml init --template e2e_batch ``` For default values, add the `--template-with-defaults` flag: ```bash zenml init --template --template-with-defaults # Example: zenml init --template e2e_batch --template-with-defaults ``` ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/set-up-repository.md === ### Recommended Repository Structure and Best Practices for ZenML #### Project Structure A recommended structure for ZenML projects is as follows: ```markdown . ├── .dockerignore ├── Dockerfile ├── steps │ ├── loader_step │ │ ├── loader_step.py │ │ └── requirements.txt (optional) │ └── training_step ├── pipelines │ ├── training_pipeline │ │ ├── training_pipeline.py │ │ └── requirements.txt (optional) │ └── deployment_pipeline ├── notebooks │ └── *.ipynb ├── requirements.txt ├── .zen └── run.py ``` - **Steps**: Keep each step in separate Python files for better organization. - **Pipelines**: Similarly, keep pipelines in separate files and avoid naming conflicts with the term "pipeline" to prevent overwriting the decorator. #### Logging Use the `logging` module to record logs, which will be captured in the ZenML dashboard: ```python from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader(): logger.info("My logs") ``` #### .dockerignore Exclude unnecessary files (e.g., data, virtual environments) in the `.dockerignore` to optimize Docker image size and build time. #### Dockerfile (Optional) ZenML uses an official Docker image by default. You can provide your own `Dockerfile` to customize this behavior. #### Notebooks Organize all Jupyter notebooks in a dedicated folder. #### .zen Run `zenml init` at the project root to define the project scope. This is particularly important for Jupyter notebooks to ensure proper import paths. #### run.py Place pipeline runners in the root directory to ensure all imports resolve correctly. If no `.zen` is defined, this file also implicitly sets the source's root. ### Additional Notes - Registering your repository as a code repository helps ZenML track code versions and can speed up Docker image builds. - Ensure import paths are relative to the source's root. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md === ### Summary: Connecting Your Git Repository in ZenML #### Overview Connecting a code repository to ZenML allows for version tracking of your code and can optimize Docker image builds by avoiding unnecessary rebuilds when source files change. Supported platforms include GitHub and GitLab, with options for custom implementations. #### Registering a Code Repository 1. **Install Integration**: ```bash zenml integration install ``` 2. 
**Register Repository**: ```bash zenml code-repository register <NAME> --type=<TYPE> [--CODE_REPOSITORY_OPTIONS] ``` #### Available Implementations - **GitHub**: - Install integration: ```bash zenml integration install github ``` - Register repository: ```bash zenml code-repository register <NAME> --type=github \ --owner=<OWNER> --repository=<REPOSITORY> --token=<GITHUB_TOKEN> ``` - For self-hosted GitHub, include: ```bash --api_url=<API_URL> --host=<HOST> ``` - **GitLab**: - Install integration: ```bash zenml integration install gitlab ``` - Register repository: ```bash zenml code-repository register <NAME> --type=gitlab \ --group=<GROUP> --project=<PROJECT> --token=<GITLAB_TOKEN> ``` - For self-hosted GitLab, include: ```bash --instance_url=<INSTANCE_URL> --host=<HOST> ``` #### Token Management - Use ZenML's secret management to store tokens securely: ```bash zenml secret create <SECRET_NAME> --pa_token=<PERSONAL_ACCESS_TOKEN> zenml code-repository register ... --token={{<SECRET_NAME>.pa_token}} ``` #### Custom Code Repository To implement a custom repository: 1. Subclass `zenml.code_repositories.BaseCodeRepository` and implement required methods: ```python from abc import ABC, abstractmethod from typing import Optional class BaseCodeRepository(ABC): @abstractmethod def login(self) -> None: pass @abstractmethod def download_files(self, commit: str, directory: str, repo_sub_directory: Optional[str]) -> None: pass @abstractmethod def get_local_context(self, path: str) -> Optional["LocalRepositoryContext"]: pass ``` 2. Register the custom repository: ```bash zenml code-repository register <NAME> --type=custom --source=my_module.MyRepositoryClass [--CODE_REPOSITORY_OPTIONS] ``` This setup allows ZenML to track code versions and streamline pipeline execution by leveraging the capabilities of connected code repositories. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/README.md === # Setting up a Well-Architected ZenML Project ## Overview This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration, essential for successful machine learning operations (MLOps). ## Key Components ### Repository Structure - Organize folders for pipelines, steps, and configurations. - Maintain clear separation of concerns and consistent naming conventions. - Refer to the [Set up repository guide](./set-up-repository.md) for details. ### Version Control and Collaboration - Integrate with Git for efficient collaboration and code management. - Enables faster pipeline builds by reusing images and code. - Learn more in the [Set up a repository guide](./set-up-repository.md). ### Stacks, Pipelines, Models, and Artifacts - **Stacks**: Define infrastructure and tools. - **Models**: Represent ML models and metadata. - **Pipelines**: Encapsulate ML workflows. - **Artifacts**: Track data and model outputs. - See the [Organizing Stacks, Pipelines, Models, and Artifacts guide](../collaborate-with-team/stacks-pipelines-models.md) for organization strategies. ### Access Management and Roles - Define roles (e.g., data scientists, MLOps engineers). - Set up [service connectors](../../infrastructure-deployment/auth-management/README.md) for authorization. - Use [Teams in ZenML Pro](../../../getting-started/zenml-pro/teams.md) for role assignment. - Explore strategies in the [Access Management and Roles guide](../collaborate-with-team/access-management.md). ### Shared Components and Libraries - Promote code reuse with shared components like custom flavors and steps. - Use shared private wheels for internal distribution.
- Find details in the [Shared Libraries and Logic for Teams guide](../collaborate-with-team/shared-components-for-teams.md). ### Project Templates - Utilize pre-made or custom templates for consistency. - Learn about templates in the [Project Templates guide](../collaborate-with-team/project-templates/README.md). ### Migration and Maintenance - Develop strategies for migrating legacy code and upgrading ZenML servers. - Refer to the [Migration and Maintenance guide](../../advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md#upgrading-your-code) for best practices. ## Getting Started Explore the guides in this section for detailed information on project setup and management. Regularly review and refine your project to adapt to evolving needs, leveraging ZenML's features for a robust MLOps environment. ================================================== === File: docs/book/how-to/model-management-metrics/README.md === # Model Management and Metrics in ZenML This section outlines the processes for managing machine learning models and tracking their performance metrics within ZenML. ## Key Components: 1. **Model Management**: - ZenML provides tools to register, version, and deploy models. - Supports integration with various model registries for seamless management. 2. **Metrics Tracking**: - Metrics can be logged during training and evaluation phases. - ZenML allows for custom metrics to be defined and tracked. 3. **Versioning**: - Each model can be versioned to ensure reproducibility. - Versioning helps in tracking changes and comparing model performance over time. 4. **Integration**: - ZenML integrates with popular ML frameworks and tools, facilitating easy model management and metric tracking. - Supports cloud and on-premise deployments. 5. **Best Practices**: - Regularly update model versions and metrics. - Utilize automated pipelines for consistent tracking and management. By leveraging these features, users can efficiently manage their ML models and monitor their performance metrics throughout the lifecycle of their projects. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-an-artifact.md === ### Summary: Attaching Metadata to Artifacts in ZenML **Overview**: In ZenML, metadata enriches artifacts (outputs from pipeline steps) with context such as size and performance metrics, accessible via the ZenML dashboard for easier tracking and comparison. #### Logging Metadata for Artifacts To log metadata, use the `log_metadata` function with the artifact's name, version, or ID. The metadata can include any JSON-serializable value, including ZenML types like `Uri`, `Path`, `DType`, and `StorageSize`. **Example**: ```python import pandas as pd from zenml import step, log_metadata from zenml.metadata.metadata_types import StorageSize @step def process_data_step(dataframe: pd.DataFrame) -> pd.DataFrame: processed_dataframe = ... log_metadata( metadata={ "row_count": len(processed_dataframe), "columns": list(processed_dataframe.columns), "storage_size": StorageSize(processed_dataframe.memory_usage().sum()) }, infer_artifact=True, ) return processed_dataframe ``` #### Selecting the Artifact for Metadata Logging 1. **Using `infer_artifact`**: Automatically selects the output artifact of the step. 2. **Name and Version**: Specify both to target a specific artifact version. 3. **Artifact Version ID**: Directly fetches and attaches metadata to the specified version. 
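Building on the three selection options above, here is a minimal sketch of attaching metadata to an artifact from outside a step. The artifact name `iris_dataset`, the version string `raw_2023`, the metadata keys, and the `artifact_version_id` parameter name are illustrative assumptions rather than values taken from this guide.

```python
from uuid import UUID

from zenml import log_metadata

# Option 2: attach metadata to a specific artifact version by name and version,
# e.g. after a pipeline run has finished (names below are placeholders).
log_metadata(
    metadata={"num_rows": 150, "source": "sklearn.datasets.load_iris"},
    artifact_name="iris_dataset",
    artifact_version="raw_2023",
)

# Option 3 (parameter name assumed): attach metadata directly to an artifact
# version identified by its UUID.
log_metadata(
    metadata={"validated": True},
    artifact_version_id=UUID("12345678-1234-5678-1234-567812345678"),  # placeholder UUID
)
```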
#### Fetching Logged Metadata Retrieve logged metadata using the ZenML Client: ```python from zenml.client import Client client = Client() artifact = client.get_artifact_version("my_artifact", "my_version") print(artifact.run_metadata["metadata_key"]) ``` *Note: The fetched value reflects the latest entry for the specified key.* #### Grouping Metadata in the Dashboard To organize metadata into cards in the ZenML dashboard, pass a dictionary of dictionaries to the `metadata` parameter: **Example**: ```python log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="version", ) ``` In the dashboard, `model_metrics` and `data_details` will appear as separate cards. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-run.md === ### Attach Metadata to a Run in ZenML In ZenML, you can log metadata to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Run When logging metadata from within a pipeline step, the `log_metadata` function attaches the metadata to the current run, using the pattern `step_name::metadata_key`. This allows for consistent key usage across different steps. **Example: Logging Metadata in a Step** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ ClassifierMixin, ArtifactConfig(name="sklearn_classifier", is_model_artifact=True) ]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... # Log run-level metadata log_metadata( metadata={ "run_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } } ) return classifier ``` #### Manually Logging Metadata You can also log metadata to a specific pipeline run using the run ID, which is useful for post-execution metrics. **Example: Manual Metadata Logging** ```python from zenml import log_metadata log_metadata( metadata={"post_run_info": {"some_metric": 5.0}}, run_id_name_or_prefix="run_id_name_or_prefix" ) ``` #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: **Example: Fetching Metadata** ```python from zenml.client import Client client = Client() run = client.get_pipeline_run("run_id_name_or_prefix") print(run.run_metadata["metadata_key"]) ``` > **Note:** Fetching metadata with a specific key returns the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-model.md === ### Summary of Attaching Metadata to a Model in ZenML ZenML enables logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, or customer-specific details, aiding in the management and interpretation of model usage and performance across versions. 
#### Logging Metadata for Models To log metadata, use the `log_metadata` function, which attaches key-value pairs to a model, including metrics and JSON-serializable values like custom ZenML types (`Uri`, `Path`, `StorageSize`). **Example: Logging Metadata** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: """Train a model and log model metadata.""" classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... log_metadata( metadata={ "evaluation_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } }, infer_model=True, ) return classifier ``` In this example, metadata is linked to the model rather than the classifier artifact, useful for summarizing various pipeline steps. #### Selecting Models with `log_metadata` ZenML offers flexible options for attaching metadata to model versions: 1. **Using `infer_model`**: Infers the model from the step context. 2. **Model Name and Version**: Attaches metadata to a specific model version if provided. 3. **Model Version ID**: Directly attaches metadata to the specified model version. #### Fetching Logged Metadata To retrieve attached metadata, use the ZenML Client: ```python from zenml.client import Client client = Client() model = client.get_model_version("my_model", "my_version") print(model.run_metadata["metadata_key"]) ``` **Note**: When fetching metadata by key, the returned value reflects the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/grouping-metadata.md === ### Grouping Metadata in the Dashboard To organize metadata in the ZenML dashboard, you can pass a dictionary of dictionaries in the `metadata` parameter. This allows for logical grouping into cards, enhancing visualization and comprehension. #### Example Code: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="my_artifact_version", ) ``` In the ZenML dashboard, "model_metrics" and "data_details" will be displayed as separate cards, each with their respective key-value pairs. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/logging-metadata.md === ### Summary: Tracking Your Metadata with ZenML ZenML supports special metadata types for capturing specific information. Key types include: - **Uri**: Represents a dataset source URI. - **Path**: Specifies the filesystem path to a script. - **DType**: Describes data types of specific columns. - **StorageSize**: Indicates the size of processed data in bytes. 
#### Example Usage: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path log_metadata( metadata={ "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), "preprocessing_script": Path("/scripts/preprocess.py"), "column_types": { "age": DType("int"), "income": DType("float"), "score": DType("int") }, "processed_data_size": StorageSize(2500000) }, ) ``` This example demonstrates how to log metadata using these special types, ensuring consistency and interpretability in metadata formatting. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md === ### Fetch Metadata During Pipeline Composition #### Pipeline Configuration with `PipelineContext` To access pipeline configuration during composition, use the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. **Example Code:** ```python from zenml import get_pipeline_context, pipeline @pipeline( extra={ "complex_parameter": [ ("sklearn.tree", "DecisionTreeClassifier"), ("sklearn.ensemble", "RandomForestClassifier"), ] } ) def my_pipeline(): context = get_pipeline_context() after = [] search_steps_prefix = "hp_tuning_search_" for i, model_search_configuration in enumerate(context.extra["complex_parameter"]): step_name = f"{search_steps_prefix}{i}" cross_validation( model_package=model_search_configuration[0], model_class=model_search_configuration[1], id=step_name ) after.append(step_name) select_best_model(search_steps_prefix=search_steps_prefix, after=after) ``` For more details on the attributes and methods of `PipelineContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md === ### Accessing Meta Information in Real-Time Within Your Pipeline #### Fetching Metadata in Steps To access information about the currently running pipeline or step, use the `zenml.get_step_context()` function to retrieve the `StepContext`: ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() pipeline_name = step_context.pipeline.name run_name = step_context.pipeline_run.name step_name = step_context.step_run.name ``` You can also determine where the outputs of your step will be stored and identify the Materializer class used for saving them: ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() uri = step_context.get_output_artifact_uri() # Output storage URI materializer = step_context.get_output_materializer() # Output Materializer ``` For more details on the attributes and methods available in `StepContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-step.md === ### Summary: Attaching Metadata to a Step in ZenML In ZenML, you can log metadata for a specific step using the `log_metadata` function, which allows you to attach a dictionary of key-value pairs as metadata. This metadata can include any JSON-serializable value, such as custom classes like `Uri`, `Path`, `DType`, and `StorageSize`. 
#### Logging Metadata Within a Step When called within a step, `log_metadata` attaches the metadata to the currently executing step and its pipeline run, making it suitable for logging metrics available during execution. **Example:** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... log_metadata(metadata={"evaluation_metrics": {"accuracy": accuracy, "precision": precision, "recall": recall}}) return classifier ``` **Note:** In cached pipeline executions, metadata from the original step will be copied to the cached run, but manually generated metadata after execution will not be included. #### Manually Logging Metadata After Execution You can also log metadata for a specific step post-execution using identifiers for the pipeline, step, and run. **Example:** ```python from zenml import log_metadata log_metadata(metadata={"additional_info": {"a_number": 3}}, step_name="step_name", run_id_name_or_prefix="run_id_name_or_prefix") # or log_metadata(metadata={"additional_info": {"a_number": 3}}, step_id="step_id") ``` #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: **Example:** ```python from zenml.client import Client client = Client() step = client.get_pipeline_run("pipeline_id").steps["step_name"] print(step.run_metadata["metadata_key"]) ``` **Note:** Fetching metadata with a specific key will return the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md === # Tracking and Comparing Metrics and Metadata in ZenML ZenML provides a unified `log_metadata` function to log and manage metrics and metadata across various entities such as models, artifacts, steps, and runs. ## Logging Metadata ### Basic Use-Case You can log metadata within a step using the following code: ```python from zenml import step, log_metadata @step def my_step() -> ...: log_metadata(metadata={"accuracy": 0.91}) ``` This logs the `accuracy` for the step, its pipeline run, and the model version if provided. ### Real-World Example Here’s an example of logging operational and performance metrics in a machine learning pipeline: ```python from zenml import step, pipeline, log_metadata @step def process_engine_metrics() -> float: log_metadata({ "engine_temperature": 3650, # Kelvin "fuel_consumption_rate": 245, # kg/s "thrust_efficiency": 0.92, }) return 0.92 @step def analyze_flight_telemetry(efficiency: float) -> None: log_metadata({ "altitude": 220000, # meters "velocity": 7800, # m/s "fuel_remaining": 2150, # kg "mission_success_prob": 0.9985, }) @pipeline def telemetry_pipeline(): efficiency = process_engine_metrics() analyze_flight_telemetry(efficiency) ``` This data can be visualized in the ZenML Pro dashboard. ## Visualizing and Comparing Metadata (Pro) Once metadata is logged, you can analyze and compare metrics using the Experiment Comparison tool in ZenML Pro. ### Comparison Views 1. **Table View**: Compare metadata across runs with automatic change tracking. 2. **Parallel Coordinates Plot**: Visualize relationships between different metrics. The tool supports comparison of up to 20 pipeline runs and any numerical metadata (`float` or `int`). 
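Outside the Pro dashboard, the same logged values can also be pulled and compared programmatically with the ZenML client — a minimal sketch, assuming two finished runs of the telemetry pipeline above (the run names are placeholders) and the `step_name::metadata_key` pattern described in the run-metadata section:

```python
from zenml.client import Client

client = Client()

# Run names are hypothetical placeholders; take real ones from `zenml pipeline runs list`
for run_name in ["telemetry_run_baseline", "telemetry_run_candidate"]:
    run = client.get_pipeline_run(run_name)
    # Step-logged metadata is keyed as "<step_name>::<metadata_key>"
    print(run_name, run.run_metadata.get("process_engine_metrics::thrust_efficiency"))
```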
### Additional Use-Cases The `log_metadata` function allows logging to various entities with flexible parameters. For more details, refer to: - [Log metadata to a step](attach-metadata-to-a-step.md) - [Log metadata to a run](attach-metadata-to-a-run.md) - [Log metadata to an artifact](attach-metadata-to-an-artifact.md) - [Log metadata to a model](attach-metadata-to-a-model.md) **Note**: Older methods like `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata` are deprecated. Use `log_metadata` for future implementations. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/promote-a-model.md === # Promote a Model ## Stages and Promotion Model promotion stages represent the lifecycle progress of different model versions in ZenML. A model version can be promoted through the Dashboard, ZenML CLI, or Python SDK. Stages include: - **staging**: Ready for production. - **production**: Active in production. - **latest**: Represents the most recent version (not a promotion target). - **archived**: No longer relevant, moved from another stage. ### Promotion Methods #### CLI Promotion Use the following command to promote a model version via CLI: ```bash zenml model version update iris_logistic_regression --stage=... ``` #### Cloud Dashboard Promotion Promotion through the ZenML Pro dashboard is forthcoming. #### Python SDK Promotion The most common method for promoting models: ```python from zenml import Model from zenml.enums import ModelStages MODEL_NAME = "iris_logistic_regression" model = Model(name=MODEL_NAME, version="1.2.3") model.set_stage(stage=ModelStages.PRODUCTION) latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST) latest_model.set_stage(stage=ModelStages.STAGING) ``` In a pipeline context, retrieve the model from the step context: ```python from zenml import get_step_context, step, pipeline from zenml.enums import ModelStages @step def promote_to_staging(): model = get_step_context().model model.set_stage(ModelStages.STAGING, force=True) @pipeline def train_and_promote_model(): ... promote_to_staging(after=["train_and_evaluate"]) ``` ## Fetching Model Versions by Stage Load the appropriate model version by specifying the stage: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training logic here ``` ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/linking-model-binaries-data-to-models.md === # Linking Model Binaries/Data to Models in ZenML ZenML allows linking artifacts generated during pipeline runs to models for lineage tracking and transparency. Artifacts can be linked in several ways: ## 1. Configuring the Model at Pipeline Level You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorator: ```python from zenml import Model, pipeline model = Model(name="my_model", version="1.0.0") @pipeline(model=model) def my_pipeline(): ... ``` This automatically links all artifacts from the pipeline run to the specified model. ## 2. Saving Intermediate Artifacts To save intermediate work, use the `save_artifact` function. 
If the step has a Model context configured, it will automatically link to it: ```python from zenml import step, Model from zenml.artifacts.utils import save_artifact import pandas as pd from typing_extensions import Annotated from zenml.artifacts.artifact_config import ArtifactConfig @step(model=Model(name="MyModel", version="1.2.42")) def trainer(trn_dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig("trained_model")]: for epoch in epochs: checkpoint = model.train(epoch) save_artifact(data=checkpoint, name="training_checkpoint", version=f"1.2.42_{epoch}") return model ``` This saves checkpoints with distinct versions linked to `MyModel`. ## 3. Linking Artifacts Explicitly To link an artifact to a model outside the step context, use `link_artifact_to_model`: ```python from zenml import step, Model, link_artifact_to_model, save_artifact from zenml.client import Client @step def f_() -> None: new_artifact = save_artifact(data="Hello, World!", name="manual_artifact") link_artifact_to_model(artifact_version_id=new_artifact.id, model=Model(name="MyModel", version="0.0.42")) existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_artifact") link_artifact_to_model(artifact_version_id=existing_artifact.id, model=Model(name="MyModel", version="0.2.42")) ``` This allows linking of both new and existing artifacts to models. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/delete-a-model.md === ### Deleting Models in ZenML **Overview**: Deleting a model or its specific version removes all links to artifacts, pipeline runs, and associated metadata. #### Deleting All Versions of a Model - **CLI Command**: ```shell zenml model delete ``` - **Python SDK**: ```python from zenml.client import Client Client().delete_model() ``` #### Deleting a Specific Version of a Model - **CLI Command**: ```shell zenml model version delete ``` - **Python SDK**: ```python from zenml.client import Client Client().delete_model_version() ``` This documentation provides commands for deleting models and their versions using both the CLI and Python SDK. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/model-versions.md === # Model Versions Overview Model versions allow tracking of different iterations in the machine learning training process, providing functionality for the ML lifecycle. You can associate model versions with stages (e.g., production) and link them to non-technical artifacts like datasets. ## Explicitly Naming Model Versions To explicitly name a model version, use the `version` argument in the `Model` object. If omitted, ZenML auto-generates a version number. ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="1.0.5") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` If the model version exists, it automatically associates with the pipeline context. ## Templated Naming for Model Versions For continuous projects, use templated naming for unique and semantically searchable model versions. ```python from zenml import Model, step, pipeline model = Model( name="{team}_my_model", version="experiment_with_phi_3_{date}_{time}" ) @step(model=model) def llm_trainer(...) -> ...: ... 
@pipeline(model=model, substitutions={"team": "Team_A"}) def training_pipeline(...): # training happens here ``` This will generate a unique model version name at runtime, e.g., `experiment_with_phi_3_2024_08_30_12_42_53`. Substitutions can be set in decorators or functions for steps and pipelines. ## Fetching Model Versions by Stage Assign stages (e.g., `production`, `staging`) to model versions for semantic retrieval. Update the stage via CLI: ```shell zenml model version update MODEL_NAME --stage=STAGE ``` Fetch model versions by stage in the code: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` ## Autonumbering of Versions ZenML auto-numbers model versions. If no version is specified, it generates a new version number. ```python from zenml import Model, step model = Model(name="my_model", version="even_better_version") @step(model=model) def svc_trainer(...) -> ...: ... ``` ZenML tracks the iteration sequence: ```python from zenml import Model earlier_version = Model(name="my_model", version="really_good_version").number # == 5 updated_version = Model(name="my_model", version="even_better_version").number # == 6 ``` This ensures that each new version is sequentially numbered. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-artifacts-via-a-model.md === ### Summary: Structuring an MLOps Project This documentation outlines how to structure an MLOps project using ZenML, focusing on the integration of artifacts, models, and pipelines. #### Key Components: 1. **Pipelines**: An MLOps project typically includes multiple pipelines: - **Feature Engineering Pipeline**: Prepares raw data for training. - **Training Pipeline**: Trains models using processed data. - **Inference Pipeline**: Runs predictions on trained models. - **Deployment Pipeline**: Deploys models to production. The structure of these pipelines can vary based on project requirements, and information transfer (artifacts, models, metadata) between them is essential. #### Common Patterns for Artifact Exchange: 1. **Artifact Exchange via `Client`**: - Use the ZenML Client to fetch artifacts between pipelines. - Example code for fetching datasets from a feature engineering pipeline to a training pipeline: ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): train_data, test_data = prepare_data() @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(sklearn_classifier, test_data) ``` Note: Artifacts are references, not materialized in memory during pipeline compilation. 2. **Artifact Exchange via `Model`**: - Use a ZenML Model as a reference point for artifacts.
- Example code for fetching a model in an inference pipeline: ```python import pandas as pd from typing import Annotated from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` Alternatively, resolve the artifact at the pipeline level: ```python import pandas as pd from typing import Annotated from sklearn.base import ClassifierMixin from zenml import get_pipeline_context, pipeline, step, Model from zenml.enums import ModelStages @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model inference_data = load_data() predict(model=model.get_model_artifact("trained_model"), data=inference_data) if __name__ == "__main__": do_predictions() ``` Both approaches for artifact exchange are valid; the choice depends on user preference. For further details on repository structure, refer to the best practices section. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-a-model-in-code.md === # Summary of ZenML Model Loading Documentation ## Loading a Model in Code ### 1. Load the Active Model in a Pipeline To load the active model within a pipeline, use the following code: ```python from zenml import step, pipeline, get_step_context, Model @pipeline(model=Model(name="my_model")) def my_pipeline(): ... @step def my_step(): mv = get_step_context().model # Get model from active step context # Access model metadata print(mv.run_metadata["metadata_key"].value) # Fetch an associated artifact output = mv.get_artifact("my_dataset", "my_version") print(output.run_metadata["accuracy"].value) ``` ### 2. Load Any Model via the Client To load any model, utilize the `Client` class as shown below: ```python from zenml import step from zenml.client import Client from zenml.enums import ModelStages @step def model_evaluator_step(): try: # Retrieve the staging model version staging_zenml_model = Client().get_model_version( model_name_or_id="", model_version_name_or_number_or_id=ModelStages.STAGING, ) except KeyError: staging_zenml_model = None ``` This documentation outlines how to load models in ZenML using both the active model in a pipeline and the Client for any model. Key methods include accessing model metadata and fetching associated artifacts. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/register-a-model.md === # Model Registration in ZenML Models can be registered in ZenML through various methods: explicitly using the CLI or Python SDK, or implicitly during a pipeline run. ZenML Pro users can also utilize a dashboard interface for model registration. ## Explicit CLI Registration To register a model via the CLI, use the following command: ```bash zenml model register iris_logistic_regression --license=... --description=... ``` For additional options, run `zenml model register --help`. You can also add tags using the `--tag` option. ## Explicit Dashboard Registration ZenML Pro users can register models directly from the cloud dashboard interface.
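Regardless of the registration method, you can verify that a model exists programmatically — a small sketch using the ZenML client, reusing the model name from the CLI example above:

```python
from zenml.client import Client

# Fetch the registered model to confirm it exists
model = Client().get_model("iris_logistic_regression")
print(model.name, model.description)
```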
## Explicit Python SDK Registration To register a model using the Python SDK: ```python from zenml.client import Client Client().create_model( name="iris_logistic_regression", license="Copyright (c) ZenML GmbH 2023", description="Logistic regression model trained on the Iris dataset.", tags=["regression", "sklearn", "iris"], ) ``` ## Implicit Registration by ZenML Models are commonly registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator. Here’s an example: ```python from zenml import pipeline, Model @pipeline( enable_cache=False, model=Model( name="demo", license="Apache", description="Show case Model Control Plane.", ), ) def train_and_promote_model(): ... ``` Running this pipeline creates a new model version, linking it to the relevant artifacts. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-artifacts-from-model.md === ### Summary: Loading Artifacts from Models in ZenML This documentation outlines how to load artifacts from a model in a two-pipeline project where the first pipeline handles training and the second performs batch inference using the trained model artifacts. #### Key Points: 1. **Model Context**: Use `get_pipeline_context().model` to access the model context during pipeline execution. This context is evaluated at runtime, ensuring the correct model version is used. 2. **Artifact Loading**: Load model artifacts using `model.get_model_artifact("trained_model")`. This retrieval occurs during the step execution, allowing for delayed materialization. 3. **Alternative Method**: You can also use the `Client` methods to access the model directly: ```python from zenml.client import Client @pipeline def do_predictions(): model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) inference_data = load_data() predict( model=model.get_model_artifact("trained_model"), data=inference_data, ) ``` 4. **Execution Timing**: The actual evaluation of the model artifact occurs when the step is executed, ensuring that the most current model version is utilized. This concise guide provides the essential steps and considerations for loading model artifacts in ZenML pipelines. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/associate-a-pipeline-with-a-model.md === # Summary of Model-Pipeline Association in ZenML ## Overview In ZenML, a common use case is to associate a pipeline with a model. This association allows for versioning and tagging of models for better organization and filtering. ## Code Example To associate a pipeline with a model, use the following code: ```python from zenml import pipeline from zenml import Model from zenml.enums import ModelStages @pipeline( model=Model( name="ClassificationModel", # Unique model name tags=["MVP", "Tabular"], # Tags for filtering version=ModelStages.LATEST # Specify model stage: [STAGING, PRODUCTION] ) ) def my_pipeline(): ... ``` If the model already exists, a new version will be created. To attach the pipeline to an existing model version, specify the version accordingly. ## Configuration Files Model configurations can also be managed through configuration files: ```yaml model: name: text_classifier description: A breast cancer classifier tags: ["classifier", "sgd"] ``` This allows for centralized management of model attributes. 
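Such a configuration file can then be applied when running the pipeline — a minimal sketch, assuming the YAML above is saved as `model_config.yaml` (the filename is hypothetical) and using the `with_options(config_path=...)` pattern described later in this document:

```python
from zenml import pipeline

@pipeline
def my_pipeline():
    ...

if __name__ == "__main__":
    # Applies the model configuration from the YAML file to this run
    my_pipeline.with_options(config_path="model_config.yaml")()
```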
================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/README.md === # Use the Model Control Plane A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and essential business data, representing your ML product's business logic. It can be viewed as a "project" or "workspace." **Key Points:** - A significant artifact associated with a ZenML Model is the technical model, which contains the model file(s) with weights and parameters from training. Other relevant artifacts include training data and production predictions. - Models are first-class citizens in ZenML, accessible through the ZenML API, client, and the [ZenML Pro](https://zenml.io/pro) dashboard. - Models capture lineage information and allow staging of different versions, enabling reliance on specific stages (e.g., `Production`) for decision-making based on business rules. - The Model Control Plane provides a unified interface to manage models, integrating pipeline logic, artifacts, and the technical model. For a complete example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). ================================================== === File: docs/book/how-to/advanced-topics/README.md === # Advanced Topics in ZenML This section addresses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the framework. ### Key Features 1. **Custom Components**: Users can create custom components to extend ZenML's capabilities. Components can be defined using Python functions or classes, allowing for tailored data processing and model training. 2. **Pipelines**: Advanced pipeline configurations enable users to define complex workflows. Pipelines can include multiple steps, parallel execution, and conditional branching. 3. **Artifact Management**: ZenML provides mechanisms for managing artifacts generated during pipeline execution. Users can track, version, and store artifacts for reproducibility. 4. **Integration with ML Tools**: ZenML supports integration with various machine learning tools and platforms, allowing seamless data flow and model deployment. 5. **Configuration Management**: Users can manage configurations through environment variables or configuration files, facilitating different setups for development, testing, and production. ### Example Code Snippet ```python from zenml.pipelines import pipeline @pipeline def my_pipeline(): # Define pipeline steps step1 = custom_component1() step2 = custom_component2(step1) return step2 # Execute the pipeline my_pipeline.run() ``` ### Conclusion Advanced configurations in ZenML empower users to create customized, efficient, and reproducible machine learning workflows, enhancing the overall data science process. ================================================== === File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md === ### Summary: Using a Prebuilt Image for ZenML Pipeline Execution ZenML allows you to skip building a Docker image for your pipeline by using a prebuilt image, which can save time and costs. When running a pipeline on a remote Stack, ZenML typically builds a Docker image with the base ZenML image and your project dependencies. This process can be time-consuming due to pulling base layers and pushing the final image to a container registry. 
#### Key Points: - **Prebuilt Image Usage**: To use a prebuilt image, set the `parent_image` attribute in the `DockerSettings` class and `skip_build` to `True`. ```python docker_settings = DockerSettings( parent_image="my_registry.io/image_name:tag", skip_build=True ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` - **Image Requirements**: The specified `parent_image` must contain all necessary dependencies and optionally your code files if no code repository is registered and `allow_download_from_artifact_store` is `False`. - **Stack and Integration Requirements**: Ensure your image includes: - **Stack Requirements**: With the target stack active (e.g. via `zenml stack set <STACK_NAME>`), retrieve them using: ```python from zenml.client import Client stack_requirements = Client().active_stack.requirements() ``` - **Integration Requirements**: Gather dependencies for required integrations: ```python from zenml.integrations.registry import integration_registry from zenml.integrations.constants import HUGGINGFACE, PYTORCH import itertools from zenml.enums import OperatingSystemType required_integrations = [PYTORCH, HUGGINGFACE] integration_requirements = set( itertools.chain.from_iterable( integration_registry.select_integration_requirements( integration_name=integration, target_os=OperatingSystemType.LINUX, ) for integration in required_integrations ) ) ``` - **Project-Specific and System Packages**: Include additional dependencies in your `Dockerfile`: ```Dockerfile RUN pip install -r FILE RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES ``` - **Project Code Files**: Ensure your pipeline code is available: - If a code repository is registered, ZenML will handle code retrieval. - If `allow_download_from_artifact_store` is `True`, ZenML will upload your code. - If both options are disabled, include your code files in the image (not recommended). Ensure Python, `pip`, and `zenml` are installed in your image, and that your code is located in the `/app` directory. **Note**: Using a prebuilt image means you won't receive updates to your code or dependencies unless they are included in the image. ================================================== === File: docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md === ### ZenML Image Building Overview ZenML determines the root directory of source files in the following order: 1. If `zenml init` has been run in the current or a parent directory, that directory is used. 2. If not, the parent directory of the executing Python file is the source root. You can control file handling in the root directory using three attributes in `DockerSettings`: - **`allow_download_from_code_repository`**: If `True`, files are downloaded from a registered code repository (with no local changes) instead of being included in the image. - **`allow_download_from_artifact_store`**: If the previous option is `False` or no suitable repository exists, and this is `True`, ZenML archives and uploads your code to the artifact store. - **`allow_including_files_in_images`**: If both previous options are `False`, this option allows including files in the Docker image, requiring a new image build for code changes. **Warning**: Setting all attributes to `False` is not recommended, as it may lead to unexpected behavior. You must ensure all files are correctly placed in the Docker images for pipeline execution. ### File Management - **Excluding Files**: Use a `.gitignore` file in a Git repository to exclude files when downloading from a code repository.
- **Including Files**: To exclude files from the Docker image and reduce size, use a `.dockerignore` file: - Place a `.dockerignore` file in the source root. - Alternatively, specify a `.dockerignore` file in the `DockerSettings`: ```python docker_settings = DockerSettings(build_config={"dockerignore": "/path/to/.dockerignore"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` This setup allows for efficient management of files in ZenML Docker images. ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-step.md === ### Summary of Docker Settings Customization in ZenML In ZenML, you can customize Docker settings at the step level, allowing different steps in a pipeline to use distinct Docker images. By default, all steps utilize the Docker image defined at the pipeline level. To specify a different image for a step, use the `DockerSettings` in the step decorator. **Example of Step Decorator with DockerSettings:** ```python from zenml import step from zenml.config import DockerSettings @step( settings={ "docker": DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime" ) } ) def training(...): ... ``` Alternatively, Docker settings can be defined in a configuration file: **Example Configuration File:** ```yaml steps: training: settings: docker: parent_image: pytorch/pytorch:2.2.0-cuda11.8-cudnn8-runtime required_integrations: - gcp - github requirements: - zenml - numpy ``` This customization allows for flexibility in managing dependencies and integrations specific to each step in the pipeline. ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md === ### How to Use a Private PyPI Repository To use a private PyPI repository that requires authentication, follow these steps: 1. **Store Credentials Securely**: Use environment variables for sensitive information. 2. **Configure Package Managers**: Set up `pip` or `poetry` to utilize these credentials during package installation. 3. **Custom Docker Images**: Consider creating Docker images that include the necessary authentication. #### Example Code for Authentication Setup ```python import os from my_simple_package import important_function from zenml.config import DockerSettings from zenml import step, pipeline docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ['PYPI_TOKEN']}@my-private-pypi-server.com/{os.environ['PYPI_USERNAME']}/"} ) @step def my_step(): return important_function() @pipeline(settings={"docker": docker_settings}) def my_pipeline(): my_step() if __name__ == "__main__": my_pipeline() ``` **Important Note**: Handle credentials with care and use secure methods for managing and distributing authentication information within your team. ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md === ### Summary of Build Reuse in ZenML #### Overview ZenML allows for the reuse of pipeline builds to enhance the speed of pipeline runs. A build encapsulates a pipeline and its stack, including Docker images and optionally the pipeline code. #### What is a Build? A pipeline build contains: - Docker images with stack requirements. - Integrations and user specifications. - Optionally, the pipeline code. 
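Builds can also be pinned from Python when triggering a run, instead of relying on automatic build matching — a hedged sketch, assuming `build` is accepted as a run-configuration option (it appears as a config key in the YAML reference later in this document) and using a placeholder ID taken from the listing command below:

```python
from zenml import pipeline

@pipeline
def my_pipeline():
    ...

if __name__ == "__main__":
    # Reuse an existing build; the ID is a placeholder from `zenml pipeline builds list`
    my_pipeline.with_options(build="<BUILD_ID>")()
```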
**Listing Builds:** ```bash zenml pipeline builds list --pipeline_id='startswith:ab53ca' ``` **Creating a Build:** ```bash zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance ``` #### Reusing Builds ZenML automatically checks for existing builds that match the pipeline and stack. You can specify a build ID to force the use of a specific build. However, using a custom build means the code executed will be from the Docker image, not your local changes. To include local changes, disconnect your code from the build by either registering a code repository or using the artifact store. #### Using the Artifact Store If no code repository is detected, ZenML will upload your code to the artifact store by default unless `allow_download_from_artifact_store` is set to `False` in `DockerSettings`. #### Connecting Code Repositories Registering a code repository speeds up Docker builds by allowing ZenML to build images without source files and download them before execution. This also enables reuse of images built by colleagues. ZenML automatically identifies matching builds, so you don’t need to specify the build ID if the repository is clean. **Install GitHub Integration:** ```sh zenml integration install github ``` #### Detecting Local Code Repository Checkouts ZenML checks if the files used in a pipeline are tracked in registered repositories by computing the source root and verifying its inclusion in local checkouts. #### Tracking Code Versions When a local checkout is detected, ZenML stores the current commit reference for the pipeline run, ensuring the exact code version is used. This reference is only tracked if the checkout is clean. **Ignore Untracked Files:** Set `ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=True` to ignore untracked files, but ensure committed files are sufficient for pipeline execution. #### Best Practices - Ensure the local checkout is clean and the latest commit is pushed for successful file downloads. - Refer to the documentation for options to enforce or disable file downloading. This summary provides essential information about build reuse in ZenML, focusing on builds, reusing strategies, and best practices for effective pipeline management. ================================================== === File: docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md === # Specifying Pip Dependencies and Apt Packages The configuration for specifying pip and apt dependencies applies only to remote pipelines and is ignored for local pipelines. When a pipeline runs with a remote orchestrator, a Dockerfile is generated at runtime to build the Docker image. ## DockerSettings Import `DockerSettings` using: ```python from zenml.config import DockerSettings ``` ### Default Behavior ZenML installs all packages required by the active stack automatically. You can specify additional packages in several ways: 1. **Replicate Local Python Environment**: ```python docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. **Custom Command for Requirements**: ```python docker_settings = DockerSettings(replicate_local_python_environment=[ "poetry", "export", "--extras=train", "--format=requirements.txt" ]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 3. 
**Specify Requirements in Code**: ```python docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 4. **Specify a Requirements File**: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 5. **Specify ZenML Integrations**: ```python from zenml.integrations.constants import PYTORCH, EVIDENTLY docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 6. **Specify Apt Packages**: ```python docker_settings = DockerSettings(apt_packages=["git"]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 7. **Prevent Automatic Installation of Stack Requirements**: ```python docker_settings = DockerSettings(install_stack_requirements=False) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` ### Custom Docker Settings for Steps If pipeline steps have conflicting requirements, specify custom Docker settings for those steps: ```python docker_settings = DockerSettings(requirements=["tensorflow"]) @step(settings={"docker": docker_settings}) def my_training_step(...): ... ``` ### Installation Order ZenML installs requirements in the following order: - Local Python environment packages - Stack requirements (unless disabled) - Required integrations - Specified requirements ### Additional Installer Arguments You can specify additional arguments for the Python package installer: ```python docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` ### Experimental: Use `uv` for Package Installation To use `uv` for faster package resolution: ```python docker_settings = DockerSettings(python_package_installer="uv") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` Note: `uv` is experimental and may cause installation errors; switch back to `pip` if issues arise. For more details on `uv` with PyTorch, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). ================================================== === File: docs/book/how-to/customize-docker-builds/use-your-own-docker-files.md === # Custom Docker Files in ZenML ZenML allows users to specify a custom Dockerfile, build context directory, and build options for dynamic parent image creation during pipeline execution. The build process is as follows: - **No Dockerfile Specified**: If requirements, environment variables, or file copying necessitate an image build, ZenML will create one. Otherwise, it uses the specified `parent_image`. - **Dockerfile Specified**: ZenML builds an image from the provided Dockerfile. If further requirements necessitate an additional image, it will build a second one; otherwise, the first image is used for the pipeline. The installation order for packages, based on the `DockerSettings` configuration, is: 1. Local Python environment packages. 2. Packages from the `requirements` attribute. 3. Packages from `required_integrations` and stack requirements. 
*Note: The intermediate image may also be used directly for pipeline execution.* ## Example Code ```python docker_settings = DockerSettings( dockerfile="/path/to/dockerfile", build_context_root="/path/to/build/context", parent_image_build_config={ "build_options": ..., "dockerignore": ... } ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` ================================================== === File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md === ### Image Builder Definition in ZenML ZenML executes pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, it builds Docker images to run pipelines in isolated environments. By default, execution environments are created locally using the local Docker client, which requires Docker installation and permissions. ZenML provides **image builders**, a specialized stack component for building and pushing Docker images in a dedicated environment. If no image builder is configured in your stack, ZenML defaults to the local image builder, ensuring consistency across builds. Users do not need to interact directly with image builders in code; as long as the desired image builder is part of the active ZenML stack, it will be automatically utilized by any component requiring container image builds. ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md === ### Summary: Using Docker Images to Run Your Pipeline When running a pipeline with a remote orchestrator, a Dockerfile is dynamically generated at runtime to build a Docker image using the ZenML image builder. The Dockerfile includes the following steps: 1. **Base Image**: Starts from a parent image with ZenML installed, defaulting to the official ZenML image for the current Python environment. Custom base images can be specified. 2. **Install Dependencies**: Automatically installs required pip dependencies based on the integrations used in the stack. Custom dependencies can also be included. 3. **Copy Source Files**: Optionally copies source files into the Docker container for execution. 4. **Environment Variables**: Sets user-defined environment variables. For customization, the `DockerSettings` class is used, which can be configured in several ways: - **Pipeline-wide Settings**: ```python from zenml.config import DockerSettings docker_settings = DockerSettings() @pipeline(settings={"docker": docker_settings}) def my_pipeline(): my_step() ``` - **Step-specific Settings**: ```python @step(settings={"docker": docker_settings}) def my_step(): pass ``` - **YAML Configuration**: ```yaml settings: docker: ... steps: step_name: settings: docker: ... ``` ### Specifying Docker Build Options To specify build options for the image builder, use: ```python docker_settings = DockerSettings(build_config={"build_options": {...}}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` For MacOS with ARM architecture, specify the target platform: ```python docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}}) ``` ### Custom Parent Images To use a custom parent image, ensure it has Python, pip, and ZenML installed. Specify the image in Docker settings: ```python docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... 
``` To skip Docker builds and run steps directly: ```python docker_settings = DockerSettings( parent_image="my_registry.io/image_name:tag", skip_build=True ) ``` **Note**: Using a custom parent image may lead to unintended behavior; ensure code files are correctly included in the specified image. For more details, refer to the ZenML documentation. ================================================== === File: docs/book/how-to/customize-docker-builds/README.md === ### Using Docker Images to Run Your Pipeline ZenML executes pipeline steps sequentially in the local Python environment. However, when using remote orchestrators or step operators, ZenML builds Docker images to run pipelines in an isolated environment. This section covers how to customize the Docker build process. **Key Points:** - **Docker Integration:** ZenML leverages Docker for running pipelines in a controlled environment. - **Execution Context:** Local execution uses the active Python environment, while remote execution utilizes Docker images. - **Customization:** Users can control the Dockerization process for their pipelines. For further details, refer to the documentation on [cloud orchestration](../../user-guide/production-guide/cloud-orchestration.md) and [step operators](../../component-guide/step-operators/step-operators.md). ================================================== === File: docs/book/how-to/pipeline-development/README.md === # Pipeline Development in ZenML This section outlines the essential components of developing pipelines in ZenML, focusing on key concepts and practices. ## Key Components 1. **Pipelines**: A pipeline is a sequence of steps that define the workflow of data processing and model training. 2. **Steps**: Each step in a pipeline represents a specific task, such as data ingestion, preprocessing, model training, or evaluation. 3. **Components**: Steps can be built using reusable components, which encapsulate functionality and can be shared across different pipelines. 4. **Orchestrators**: ZenML supports various orchestrators (e.g., Apache Airflow, Kubeflow) to manage pipeline execution and scheduling. 5. **Artifact Management**: ZenML tracks artifacts generated during pipeline execution, enabling reproducibility and versioning. ## Development Process 1. **Define Pipeline**: Use the `@pipeline` decorator to define a pipeline and specify the sequence of steps. ```python from zenml.pipelines import pipeline @pipeline def my_pipeline(): step1() step2() ``` 2. **Create Steps**: Define steps using the `@step` decorator. ```python from zenml.steps import step @step def step1(): # Step logic here @step def step2(): # Step logic here ``` 3. **Run Pipeline**: Execute the pipeline using the ZenML CLI or programmatically. 4. **Monitor and Debug**: Utilize logging and monitoring tools integrated with ZenML to track pipeline performance and troubleshoot issues. ## Best Practices - Modularize steps for reusability. - Use version control for artifacts. - Document pipeline configurations for clarity. This concise overview provides the necessary information for understanding pipeline development in ZenML, ensuring that critical details are retained while maintaining brevity. 
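As a concrete version of the workflow above, here is a minimal, self-contained sketch (using the top-level `zenml` imports used throughout this document) that defines two steps, composes them into a pipeline, and runs it programmatically on the active stack:

```python
from zenml import pipeline, step

@step
def load_value() -> int:
    return 42

@step
def report_value(value: int) -> None:
    print(f"Loaded value: {value}")

@pipeline
def example_pipeline():
    report_value(load_value())

if __name__ == "__main__":
    example_pipeline()  # executes on the currently active ZenML stack
```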
================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md === # Limitations of Defining Steps in Notebook Cells To run ZenML steps defined in notebook cells remotely (using a remote orchestrator or step operator), the following conditions must be met: - The cell can only contain Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed. - The cell **must not** call code from other notebook cells. However, functions or classes from Python files are permitted. - The cell **must not** rely on imports from previous cells; it must include all necessary imports itself, including ZenML imports like `from zenml import step`. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md === ### Summary of Running a Single Step from a Notebook in ZenML To execute a single step remotely from a notebook, call the step like a regular Python function. ZenML will create a pipeline for this step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining steps in notebook cells. #### Example Code ```python from zenml import step import pandas as pd from sklearn.base import ClassifierMixin from sklearn.svm import SVC from typing import Tuple, Annotated @step(step_operator="") def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc X_train = pd.DataFrame(...) # Define your training data y_train = pd.Series(...) # Define your training labels # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` This code defines a step for training a Support Vector Classifier (SVC) and executes it, returning the trained model and training accuracy. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/README.md === ### Summary: Running Remote Pipelines from Jupyter Notebooks with ZenML ZenML allows users to define and execute steps and pipelines directly from Jupyter Notebooks, running them remotely in Docker containers. To facilitate this, the notebook cells must adhere to specific conditions. #### Key Points: - **Execution Environment**: Code from notebook cells is extracted and executed as Python modules in remote Docker containers. - **Conditions**: Notebook cells defining steps must meet certain requirements (details in the linked documentation). 
#### Related Documentation: - [Limitations of defining steps in notebook cells](limitations-of-defining-steps-in-notebook-cells.md) - [Run a single step from a notebook](run-a-single-step-from-a-notebook.md) ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md === ### Summary of Documentation #### Autogenerate a Template YAML File To create a configuration file for your pipeline, use the `.write_run_configuration_template()` method. This generates a YAML file with all options commented out, allowing you to select relevant settings. #### Example Code ```python from zenml import pipeline @pipeline(enable_cache=True) def simple_ml_pipeline(parameter: int): dataset = load_data(parameter=parameter) train_model(dataset) simple_ml_pipeline.write_run_configuration_template(path="") ``` #### Generated YAML Configuration Template The generated YAML template includes various sections that can be customized: - **General Settings** - `build`: Pipeline build configuration - `enable_artifact_metadata`: Optional boolean - `enable_cache`: Optional boolean - `parameters`: Optional mapping for parameters - `run_name`: Optional string for run identification - **Model Configuration** - `model`: Contains fields like `name`, `description`, `tags`, and `version`. - **Schedule Settings** - `schedule`: Defines scheduling options like `cron_expression` and `start_time`. - **Docker Settings** - `settings.docker`: Configuration for Docker, including `apt_packages`, `environment`, and `requirements`. - **Steps Configuration** - Each step (e.g., `load_data`, `train_model`) includes: - `enable_cache`: Optional boolean - `model`: Similar fields as in the model configuration - `settings`: Docker and resource settings for the step #### Customizing for a Specific Stack To configure the pipeline for a specific stack, use: ```python simple_ml_pipeline.write_run_configuration_template(stack=) ``` This allows for tailored configurations based on the chosen stack. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/runtime-configuration.md === ### Summary of ZenML Runtime Configuration Documentation **Overview** ZenML allows configuration of pipelines through `Settings`, which manage runtime configurations for stack components. Key areas of configuration include resource requirements, containerization processes, and specific settings for stack components. **Central Concept** All configurations are managed through `BaseSettings`, which is synonymous with `settings`. **Types of Settings** 1. **General Settings**: Applicable to all ZenML pipelines. - Examples: - `DockerSettings`: Docker configuration. - `ResourceSettings`: Resource allocation settings. 2. **Stack-Component-Specific Settings**: Provide runtime configurations for specific stack components. The key format is `` or `.`. Settings for inactive components are ignored. - Examples: - `SkypilotAWSOrchestratorSettings` - `KubeflowOrchestratorSettings` - `MLflowExperimentTrackerSettings` - `WandbExperimentTrackerSettings` - `WhylogsDataValidatorSettings` - `SagemakerStepOperatorSettings` - `VertexStepOperatorSettings` - `AzureMLStepOperatorSettings` **Registration-Time vs. Real-Time Settings** - Registration-time settings are static and fixed for all pipeline runs. - Real-time settings can change with each run. 
**Default Values** Default values can be specified for settings during stack component registration, which can be overridden at runtime. **Key Specification for Settings** When defining stack-component-specific settings, use the correct key format. If only the category is specified, ZenML applies settings to the appropriate component flavor in the stack. If incompatible, settings are ignored. **Example Code Snippets** 1. **Python Code**: ```python @step(step_operator="nameofstepoperator", settings={"step_operator": {"estimator_args": {"instance_type": "m7g.medium"}}}) def my_step(): ... @step(step_operator="nameofstepoperator", settings={"step_operator": SagemakerStepOperatorSettings(instance_type="m7g.medium")}) def my_step(): ... ``` 2. **YAML Configuration**: ```yaml steps: my_step: step_operator: "nameofstepoperator" settings: step_operator: estimator_args: instance_type: m7g.medium ``` This documentation provides a comprehensive guide to configuring runtime settings in ZenML, ensuring flexibility and adaptability in pipeline management. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/retrieve-used-configuration-of-a-run.md === To extract the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step within it. ### Key Steps: 1. Load the pipeline run using its name. 2. Access the general configuration or the configuration of a specific step. ### Code Example: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>") # General configuration pipeline_run.config # Specific step configuration pipeline_run.steps["<STEP_NAME>"].config ``` This allows you to retrieve the configurations effectively. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/how-to-use-config.md === ### Configuration Files in ZenML **Best Practice**: Use YAML files for configuration to separate config from code, although configurations can also be specified directly in code. **Applying Configuration**: Use the `with_options(config_path=<PATH_TO_CONFIG>)` pattern to apply configurations to a pipeline. #### Example YAML Configuration ```yaml enable_cache: False parameters: dataset_name: "best_dataset" steps: load_data: enable_cache: False ``` #### Example Python Code ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def simple_ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__=="__main__": simple_ml_pipeline.with_options(config_path="<PATH_TO_CONFIG>")() ``` **Functionality**: This setup runs `simple_ml_pipeline` with caching disabled for `load_data` and sets the `dataset_name` parameter to `best_dataset`. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/configuration-hierarchy.md === ### Configuration Hierarchy in ZenML In ZenML, configuration settings follow a specific hierarchy: 1. **Code Configurations**: Override YAML file configurations. 2. **Step-Level Configurations**: Override pipeline-level configurations. 3. **Attribute Merging**: Dictionaries are merged for attributes. ### Example Code ```python from zenml import pipeline, step from zenml.config import ResourceSettings @step def load_data(parameter: int) -> dict: ...
@step(settings={"resources": ResourceSettings(gpu_count=1, memory="2GB")}) def train_model(data: dict) -> None: ... @pipeline(settings={"resources": ResourceSettings(cpu_count=2, memory="1GB")}) def simple_ml_pipeline(parameter: int): ... # Configuration results train_model.configuration.settings["resources"] # -> cpu_count: 2, gpu_count: 1, memory: "2GB" simple_ml_pipeline.configuration.settings["resources"] # -> cpu_count: 2, memory: "1GB" ``` ### Key Points - Step configurations take precedence over pipeline configurations. - The `train_model` step uses its own resource settings while merging with the pipeline's settings. - The final resource settings reflect both step and pipeline configurations. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md === # Configuration Overview This documentation outlines the configuration options available in a YAML file for a ZenML pipeline. Below is a summary of key elements and their purposes. ## Example YAML Configuration ```yaml build: dcd6fafb-c200-4e85-8328-428bef98d804 # Docker image ID enable_artifact_metadata: True enable_artifact_visualization: False enable_cache: False enable_step_logs: True extra: any_param: 1 another_random_key: "some_string" model: name: "classification_model" version: production audience: "Data scientists" description: "This classifies hotdogs and not hotdogs" ethics: "No ethical implications" license: "Apache 2.0" limitations: "Only works for hotdogs" tags: ["sklearn", "hotdog", "classification"] parameters: dataset_name: "another_dataset" run_name: "my_great_run" schedule: catchup: true cron_expression: "* * * * *" settings: docker: apt_packages: ["curl"] copy_files: True dockerfile: "Dockerfile" dockerignore: ".dockerignore" environment: ZENML_LOGGING_VERBOSITY: DEBUG parent_image: "zenml-io/zenml-cuda" requirements: ["torch"] skip_build: False resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" steps: train_model: parameters: data_source: "best_dataset" experiment_tracker: "mlflow_production" step_operator: "vertex_gpu" outputs: {} failure_hook_source: {} success_hook_source: {} enable_artifact_metadata: True enable_artifact_visualization: True enable_cache: False enable_step_logs: True extra: {} model: {} settings: docker: {} resources: {} step_operator.sagemaker: estimator_args: instance_type: m7g.medium ``` ## Key Configuration Elements ### `enable_XXX` Flags - **`enable_artifact_metadata`**: Attach metadata to artifacts. - **`enable_artifact_visualization`**: Attach visualizations to artifacts. - **`enable_cache`**: Enable caching behavior. - **`enable_step_logs`**: Enable logging for steps. ### `build` ID Specifies the UUID of the Docker image to use, skipping the build process for remote orchestrators. ### `model` Defines the ZenML model used in the pipeline, including its name, version, description, and tags. ### `parameters` JSON-serializable parameters for the pipeline and individual steps. Step parameters override pipeline parameters. ### `run_name` Dynamic name for the run, which should not be static to avoid conflicts. ### `settings` Defines runtime configurations: - **Docker Settings**: Configuration for Docker builds. - **Resource Settings**: Specifies CPU, GPU, and memory allocation. ### Step-Specific Configuration Certain configurations apply only at the step level: - **`experiment_tracker`**: Tracker for the step. - **`step_operator`**: Operator for executing the step. 
- **`outputs`**: Configuration for output artifacts, including materializer sources. ### Hooks - **`failure_hook_source`** and **`success_hook_source`**: Specify sources for handling step failures and successes. This summary encapsulates the essential configurations and their purposes, ensuring clarity and conciseness while retaining critical technical details. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/README.md === ZenML allows for easy configuration and execution of pipelines using YAML files at runtime. These configuration files enable users to set parameters, manage caching behavior, and configure stack components. Key topics include: - **What can be configured**: Details on configurable elements. - **Configuration hierarchy**: Structure and precedence of configurations. - **Autogenerate a template YAML file**: Instructions for generating a default configuration file. For more information, refer to the respective sections linked in the documentation. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md === ### Summary: Creating Pipeline Variants for Local Development and Production in ZenML When developing ZenML pipelines, it's useful to create different variants for local development and production. This allows for rapid iteration during development while maintaining a robust setup for production. Key methods to create these variants include: 1. **Using Configuration Files**: Define pipeline configurations in YAML files. Example for a development variant: ```yaml enable_cache: False parameters: dataset_name: "small_dataset" steps: load_data: enable_cache: False ``` Apply this configuration using: ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": ml_pipeline.with_options(config_path="path/to/config.yaml")() ``` Create separate files for development (`config_dev.yaml`) and production (`config_prod.yaml`). 2. **Implementing Variants in Code**: Directly create variants in your code using a boolean flag: ```python import os from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def ml_pipeline(is_dev: bool = False): dataset = "small_dataset" if is_dev else "full_dataset" load_data(dataset) if __name__ == "__main__": is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" ml_pipeline(is_dev=is_dev) ``` 3. 
**Using Environment Variables**: Determine which variant to run based on environment variables: ```python import os config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" ml_pipeline.with_options(config_path=config_path)() ``` Run the pipeline with: - `ZENML_ENVIRONMENT=dev python run.py` - `ZENML_ENVIRONMENT=prod python run.py` ### Development Variant Considerations For development variants, optimize for faster iteration: - Use smaller datasets - Specify a local execution stack - Reduce training epochs and batch size - Use a smaller base model Example configuration: ```yaml parameters: dataset_path: "data/small_dataset.csv" epochs: 1 batch_size: 16 stack: local_stack ``` Or in code: ```python @pipeline def ml_pipeline(is_dev: bool = False): dataset = "data/small_dataset.csv" if is_dev else "data/full_dataset.csv" epochs = 1 if is_dev else 100 batch_size = 16 if is_dev else 64 load_data(dataset) train_model(epochs=epochs, batch_size=batch_size) ``` By creating these variants, you can efficiently test and debug locally while maintaining a comprehensive production setup, enhancing your development workflow. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md === ### Summary of ZenML Pipeline Cleanliness Documentation #### Overview This documentation provides guidance on maintaining a clean pipeline environment during development in ZenML, focusing on managing pipeline runs, models, artifacts, and overall environment cleanliness. #### Key Points 1. **Run Locally**: - To avoid cluttering a shared server, disconnect from the remote server and run locally: ```bash zenml login --local ``` - Reconnect to the server with: ```bash zenml login ``` 2. **Pipeline Runs**: - **Unlisted Runs**: Create runs without associating them with a pipeline: ```python pipeline_instance.run(unlisted=True) ``` - **Deleting Pipeline Runs**: - Delete a specific run: ```bash zenml pipeline runs delete ``` - Delete all runs from the last 24 hours: ```python #!/usr/bin/env python3 import datetime from zenml.client import Client def delete_recent_pipeline_runs(): zc = Client() time_filter = (datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") for run in recent_runs: zc.delete_pipeline_run(run.id) print(f"Deleted {len(recent_runs)} pipeline runs.") if __name__ == "__main__": delete_recent_pipeline_runs() ``` 3. **Pipelines**: - **Deleting Pipelines**: ```bash zenml pipeline delete ``` - **Unique Pipeline Names**: Assign custom names to runs: ```python training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") training_pipeline() ``` 4. **Models**: - To delete a model: ```bash zenml model delete ``` 5. **Artifacts**: - **Pruning Artifacts**: ```bash zenml artifact prune ``` - Control deletion behavior with `--only-artifact` and `--only-metadata` flags. 6. **Cleaning Environment**: - Use `zenml clean` to delete all local pipelines, runs, and artifacts: ```bash zenml clean ``` - The `--local` flag deletes local files related to the active stack. By following these practices, users can maintain a clean and organized pipeline dashboard, focusing on relevant runs for their projects. 
================================================== === File: docs/book/how-to/pipeline-development/develop-locally/README.md === # Develop Locally This section outlines best practices for developing pipelines locally, allowing for faster iteration and cost-effective testing. It is common to work with a smaller subset of data or synthetic data during local development. ZenML supports this workflow, guiding users to develop locally and then transition to running pipelines on more powerful remote hardware when needed. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fetching-pipelines.md === ### Summary: Inspecting a Finished Pipeline Run and Its Outputs #### Overview Once a pipeline run is completed, you can programmatically access various elements such as artifacts, metadata, and lineage of pipeline runs. #### Pipeline Hierarchy - **Structure**: Pipelines have a 1-to-N relationship with runs, which in turn have a 1-to-N relationship with steps, and steps have a 1-to-N relationship with artifacts. #### Fetching Pipelines - **Get a Specific Pipeline**: ```python from zenml.client import Client pipeline_model = Client().get_pipeline("first_pipeline") ``` - **List All Pipelines**: ```python pipelines = Client().list_pipelines() ``` Or via CLI: ```shell zenml pipeline list ``` #### Pipeline Runs - **Get All Runs**: ```python runs = pipeline_model.runs ``` - **Get Last Run**: ```python last_run = pipeline_model.last_run # or pipeline_model.runs[0] ``` - **Execute Pipeline and Get Latest Run**: ```python run = training_pipeline() ``` - **Fetch Specific Run**: ```python pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") ``` #### Run Information - **Status**: ```python status = run.status # Possible states: initialized, failed, completed, running, cached ``` - **Configuration**: ```python pipeline_config = run.config pipeline_settings = run.config.settings ``` - **Component-Specific Metadata**: ```python run_metadata = run.run_metadata orchestrator_url = run_metadata["orchestrator_url"].value ``` #### Steps - **Get Steps of a Run**: ```python steps = run.steps step = run.steps["first_step"] ``` #### Artifacts - **Access Output Artifacts**: ```python output = step.outputs["output_name"] my_pytorch_model = output.load() ``` - **Fetch Artifact Directly**: ```python artifact = Client().get_artifact('iris_dataset') output = artifact.versions['2022'] loaded_artifact = output.load() ``` #### Artifact Metadata - **Access Metadata**: ```python output_metadata = output.run_metadata storage_size_in_bytes = output_metadata["storage_size"].value ``` - **Visualizations**: ```python output.visualize() ``` #### Fetching Information During Execution You can also fetch information while a pipeline is running: ```python from zenml import get_step_context from zenml.client import Client @step def my_step(): current_run_name = get_step_context().pipeline_run.name current_run = Client().get_pipeline_run(current_run_name) previous_run = current_run.pipeline.runs[1] # index 0 is the current run ``` #### Code Example Here’s a complete example of loading a model from a pipeline: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @step def training_data_loader() -> 
Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(X_train=X_train, y_train=y_train, gamma=gamma) if __name__ == "__main__": last_run = training_pipeline() model = last_run.steps["svc_trainer"].outputs["trained_model"].load() ``` This summary captures essential technical details and code snippets for inspecting and managing pipeline runs and their outputs in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/retry-steps.md === ### ZenML Step Retry Configuration ZenML offers a built-in retry mechanism to automatically retry steps upon failure, which is particularly useful for handling intermittent issues. This feature is beneficial when using GPU-backed hardware, where resource availability may fluctuate. #### Retry Parameters: 1. **max_retries:** Maximum retry attempts. 2. **delay:** Initial delay (in seconds) before the first retry. 3. **backoff:** Multiplier for the delay after each retry. #### Example Usage with @step Decorator: You can configure retries directly in your step definition: ```python from zenml.config.retry_config import StepRetryConfig @step( retry=StepRetryConfig( max_retries=3, delay=10, backoff=2 ) ) def my_step() -> None: raise Exception("This is a test exception") ``` #### Important Notes: - Infinite retries are not supported. ZenML enforces an internal maximum to prevent infinite loops. It is advisable to set a reasonable `max_retries` based on your use case. ### Related Documentation: - [Failure/Success Hooks](use-failure-success-hooks.md) - [Configure Pipelines](../../pipeline-development/use-configuration-files/how-to-use-config.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md === ### Summary: Fan-in and Fan-out Patterns in ZenML The fan-out/fan-in pattern is a pipeline architecture that splits a single step into multiple parallel operations (fan-out) and consolidates the results back into a single step (fan-in). This approach is beneficial for parallel processing, distributed workloads, and data transformations. #### Example Code ```python from zenml import step, get_step_context, pipeline from zenml.client import Client @step def load_step() -> str: return "Hello from ZenML!" 
@step def process_step(input_data: str) -> str: return input_data @step def combine_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) processed_results = {step_info.name: step_info.outputs[output_name][0].load() for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix)} print(",".join([f"{k}: {v}" for k, v in processed_results.items()])) @pipeline(enable_cache=False) def fan_out_fan_in_pipeline(parallel_count: int) -> None: input_data = load_step() after = [process_step(input_data, id=f"process_{i}") for i in range(parallel_count)] combine_step(step_prefix="process_", output_name="output", after=after) fan_out_fan_in_pipeline(parallel_count=8) ``` #### Key Points - **Fan-out**: Enables parallel processing of data. - **Fan-in**: Aggregates results from parallel steps. - **Use Cases**: - Parallel data processing - Distributed model training - Ensemble methods - Batch processing - Data validation - Hyperparameter tuning #### Limitations 1. Steps may run sequentially if the orchestrator does not support parallel execution. 2. The number of steps must be predetermined; dynamic step creation is not supported. This pattern enhances resource utilization and allows for efficient data processing workflows. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/get-past-pipeline-step-runs.md === To retrieve past pipeline or step runs in ZenML, use the `get_pipeline` method along with the `last_run` property or by indexing the runs. Here’s a concise example: ```python from zenml.client import Client client = Client() # Retrieve a pipeline by its name p = client.get_pipeline("mlflow_train_deploy_pipeline") # Get the latest run of this pipeline latest_run = p.last_run # Access the first run by index first_run = p[0] ``` This code demonstrates how to access the latest and first runs of a specified pipeline. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/tag-your-pipeline-runs.md === # Tagging Pipeline Runs You can specify tags for your pipeline runs in the following ways: 1. **Configuration File**: ```yaml # config.yaml tags: - tag_in_config_file ``` 2. **In Code**: - Using the `@pipeline` decorator: ```python @pipeline(tags=["tag_on_decorator"]) def my_pipeline(): ... ``` - Using the `with_options` method: ```python my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) ``` When you run the pipeline, tags from all specified locations will be merged and applied to the pipeline run. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md === ### Summary of Hyperparameter Tuning with ZenML **Overview**: This documentation describes how to perform hyperparameter tuning using ZenML through a simple pipeline that implements a grid search for different learning rates. **Key Components**: 1. **Train Step**: Trains a model using a specified learning rate. ```python @step def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: return ... # Model training logic ``` 2. **Selection Step**: Evaluates trained models to identify the best performing hyperparameters. 
```python @step def selection_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) trained_models_by_lr = {} for step_name, step_info in run.steps.items(): if step_name.startswith(step_prefix): model = step_info.outputs[output_name][0].load() lr = step_info.config.parameters["learning_rate"] trained_models_by_lr[lr] = model for lr, model in trained_models_by_lr.items(): ... # Model evaluation logic ``` 3. **Pipeline Definition**: Constructs the pipeline to execute multiple training steps followed by the selection step. ```python @pipeline def my_pipeline(step_count: int) -> None: after = [] for i in range(step_count): train_step(learning_rate=i * 0.0001, id=f"train_step_{i}") after.append(f"train_step_{i}") selection_step(step_prefix="train_step_", output_name=model_output_name, after=after) my_pipeline(step_count=4) ``` **Challenges**: Currently, a variable number of artifacts cannot be passed programmatically into a step, necessitating the use of the ZenML Client to query artifacts from previous steps. **Additional Resources**: - For practical examples, refer to the E2E example in the ZenML GitHub repository, specifically the files for single search and model selection: - [`hp_tuning_single_search`](https://github.com/zenml-io/zenml/blob/main/examples/e2e/steps/hp_tuning/hp_tuning_single_search.py) - [`hp_tuning_select_best_model`](https://github.com/zenml-io/zenml/blob/main/examples/e2e/steps/hp_tuning/hp_tuning_select_best_model.py) This summary captures the essential technical details of hyperparameter tuning with ZenML while omitting redundancy. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md === ### Summary of Pipeline Run Naming in ZenML Pipeline run names are automatically generated based on the date and time, as shown in the log output: ```bash Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s. ``` To customize the run name, use the `run_name` parameter in the `with_options()` method: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name" ) training_pipeline() ``` Run names must be unique. For multiple runs or scheduled executions, compute the run name dynamically or use placeholders that ZenML replaces. Placeholders can be set in the `@pipeline` decorator or `pipeline.with_options` function. Standard placeholders include: - `{date}`: Current date (e.g., `2024_11_27`) - `{time}`: Current UTC time (e.g., `11_07_09_326492`) Example of using placeholders in a run name: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name_{experiment_name}_{date}_{time}" ) training_pipeline() ``` ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/reference-environment-variables-in-configurations.md === # Reference Environment Variables in ZenML Configurations ZenML allows referencing environment variables in configurations using the placeholder syntax `${ENV_VARIABLE_NAME}`. ## In-code Example ```python from zenml import step @step(extra={"value_from_environment": "${ENV_VAR}"}) def my_step() -> None: ... ``` ## Configuration File Example ```yaml extra: value_from_environment: ${ENV_VAR} combined_value: prefix_${ENV_VAR}_suffix ``` This feature enhances flexibility in both code and configuration files. 
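A minimal usage sketch of the placeholder mechanism (the variable name `ENV_VAR`, the pipeline, and the config file path are illustrative, and it is assumed here that the variable is available in the environment where the configuration is resolved):

```python
# Provide the referenced value before running the pipeline; the ${ENV_VAR}
# placeholder in the configuration is then substituted with this value.
import os

os.environ["ENV_VAR"] = "some-value"  # alternatively, export it in the shell

my_pipeline.with_options(config_path="config.yaml")()  # config.yaml references ${ENV_VAR}
```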
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/configuring-a-pipeline-at-runtime.md === ### Runtime Configuration of a Pipeline To run a pipeline with a different configuration, use the `pipeline.with_options` method. There are two primary ways to configure options: 1. **Explicit Configuration**: ```python with_options(steps={"trainer": {"parameters": {"param1": 1}}}) ``` 2. **YAML File**: ```python with_options(config_path="path_to_yaml_file") ``` For more details, refer to the [configuration options documentation](../../pipeline-development/use-configuration-files/README.md). **Exception**: To trigger a pipeline from a client or another pipeline, pass the `PipelineRunConfiguration` object. More information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md === # Accessing Secrets in ZenML Steps ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. For configuration and creation of secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). You can access secrets in your steps using the ZenML `Client` API, enabling secure API queries without hard-coding access keys. ## Example Code ```python from zenml import step from zenml.client import Client from somewhere import authenticate_to_some_api @step def secret_loader() -> None: """Load the example secret from the server.""" secret = Client().get_secret("<SECRET_NAME>") authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` ## Additional Resources - [Creating and managing secrets](../../interact-with-secrets.md) - [Secrets backend in ZenML](../../../getting-started/deploying-zenml/secret-management.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md === ### Summary of Parameterization in ZenML Pipelines **Parameterization Overview** Steps and pipelines in ZenML can be parameterized like standard Python functions. Parameters can be either **artifacts** (outputs from previous steps) or **explicit parameters** (values provided directly). **Key Points:** - **Artifacts**: Used to share data between steps. - **Parameters**: Values that configure step behavior, must be JSON-serializable via Pydantic. For non-JSON-serializable objects (e.g., NumPy arrays), use [External Artifacts](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline). **Example of Step and Pipeline Definition:** ```python from zenml import step, pipeline @step def my_step(input_1: int, input_2: int) -> None: pass @pipeline def my_pipeline(): int_artifact = some_other_step() my_step(input_1=int_artifact, input_2=42) ``` **Using YAML Configuration Files:** Parameters can be defined in a YAML file, allowing easy updates without changing the code.
**YAML Configuration Example:** ```yaml parameters: environment: production steps: my_step: parameters: input_2: 42 ``` **Python Code Example with YAML:** ```python from zenml import step, pipeline @step def my_step(input_1: int, input_2: int) -> None: ... @pipeline def my_pipeline(environment: str): ... if __name__=="__main__": my_pipeline.with_options(config_path="config.yaml")() ``` **Conflict Handling:** Conflicts may arise when parameters are defined in both the YAML file and the code. ZenML will notify users of such conflicts. **Example of Conflict:** ```yaml parameters: some_param: 24 steps: my_step: parameters: input_2: 42 ``` ```python @pipeline def my_pipeline(some_param: int): my_step(input_1=42, input_2=43) # Conflict here ``` **Caching Behavior:** - **Parameters**: A step caches only if all parameter values match previous executions. - **Artifacts**: A step caches only if all input artifacts are identical to previous executions. If upstream steps are not cached, the step will always execute. ### Additional Resources: - [Use configuration files to set parameters](use-pipeline-step-parameters.md) - [How caching works and how to control it](control-caching-behavior.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-pipelines-asynchronously.md === ### Summary: Running Pipelines Asynchronously By default, pipelines run synchronously, meaning the terminal displays logs in real-time during execution. To run pipelines asynchronously, you can configure the orchestrator in two ways: 1. **Globally**: Set `synchronous=False` in the orchestrator settings. ```python from zenml import pipeline @pipeline(settings={"orchestrator": {"synchronous": False}}) def my_pipeline(): ... ``` 2. **Temporarily**: Modify the pipeline configuration in a YAML file. ```yaml settings: orchestrator.: synchronous: false ``` For more details on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/using-a-custom-step-invocation-id.md === ## Summary of Custom Step Invocation ID in ZenML When invoking a ZenML step in a pipeline, each invocation is assigned a unique **invocation ID**. This ID can be used to define the execution order of steps or to fetch information about the invocation post-execution. ### Key Points: - The first invocation of a step uses the step name as the invocation ID (e.g., `my_step`). - Subsequent invocations append a suffix (_2, _3, etc.) to the step name for uniqueness (e.g., `my_step_2`). - A custom invocation ID can be specified by passing an `id` parameter, which must be unique across all invocations in the pipeline. ### Example Code: ```python from zenml import pipeline, step @step def my_step() -> None: ... @pipeline def example_pipeline(): my_step() # ID: my_step my_step() # ID: my_step_2 my_step(id="my_custom_invocation_id") # Custom ID ``` This functionality allows for better tracking and management of step invocations within ZenML pipelines. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md === ### Summary of ZenML Pipeline Composition ZenML allows for the composition of pipelines to avoid code duplication by extracting common functionality into separate functions. This enables the reuse of steps between different pipelines. 
#### Example Code ```python from zenml import pipeline @pipeline def data_loading_pipeline(mode: str): data = training_data_loader_step() if mode == "train" else test_data_loader_step() return preprocessing_step(data) @pipeline def training_pipeline(): training_data = data_loading_pipeline(mode="train") model = training_step(data=training_data) test_data = data_loading_pipeline(mode="test") evaluation_step(model=model, data=test_data) ``` In this example, `data_loading_pipeline` is called within `training_pipeline`, effectively making it a step in the latter. Only the parent pipeline appears in the dashboard. For triggering a pipeline from another, refer to the advanced usage documentation. #### Additional Resources - Learn more about orchestrators [here](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-failure-success-hooks.md === ### Summary of ZenML Hooks Documentation **Overview:** ZenML provides hooks to execute actions after step execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: `on_failure` (triggers on step failure) and `on_success` (triggers on step success). **Defining Hooks:** Hooks are defined as callback functions accessible within the pipeline repository. The `on_failure` hook can accept a `BaseException` argument to access the specific exception that caused the failure. **Example:** ```python from zenml import step def on_failure(exception: BaseException): print(f"Step failed: {str(exception)}") def on_success(): print("Step succeeded!") @step(on_failure=on_failure) def my_failing_step() -> int: raise ValueError("Error") @step(on_success=on_success) def my_successful_step() -> int: return 1 ``` **Pipeline-Level Hooks:** Hooks can also be defined at the pipeline level, which apply to all steps within that pipeline. Step-level hooks take precedence over pipeline-level hooks. **Example:** ```python from zenml import pipeline @pipeline(on_failure=on_failure, on_success=on_success) def my_pipeline(...): ... ``` **Accessing Step Information:** Inside hooks, you can use `get_step_context()` to access information about the current step or pipeline run. **Example:** ```python from zenml import get_step_context def on_failure(exception: BaseException): context = get_step_context() print(context.step_run.name) ``` **Using Alerter Component:** You can integrate the Alerter component to send notifications on success or failure. **Example:** ```python from zenml import get_step_context from zenml.client import Client def on_failure(): step_name = get_step_context().step_run.name Client().active_stack.alerter.post(f"{step_name} just failed!") ``` **OpenAI ChatGPT Failure Hook:** This hook generates suggestions for fixing exceptions using OpenAI's API. Ensure you have the OpenAI integration installed and an API key stored in a ZenML secret. **Example:** ```shell zenml integration install openai zenml secret create openai --api_key=<API_KEY> ``` ```python from zenml.integrations.openai.hooks import openai_chatgpt_alerter_failure_hook @step(on_failure=openai_chatgpt_alerter_failure_hook) def my_step(...): ... ``` ### Key Points: - Hooks allow for post-execution actions on steps. - Two types of hooks: `on_failure` and `on_success`. - Hooks can be defined at both step and pipeline levels. - Access to step context is available within hooks. - Integration with Alerter and OpenAI for notifications and suggestions. A combined sketch of these pieces follows below.
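Putting the above pieces together, here is a hedged sketch (the step, message, and pipeline names are illustrative, and it assumes an alerter is configured in the active stack) of a failure hook that uses both the exception and the step context to send a notification:

```python
from zenml import get_step_context, pipeline, step
from zenml.client import Client


def notify_on_failure(exception: BaseException) -> None:
    """Failure hook: report which step failed and why via the stack's alerter."""
    context = get_step_context()
    message = f"Step `{context.step_run.name}` failed: {exception}"
    alerter = Client().active_stack.alerter
    if alerter:
        alerter.post(message)
    else:
        print(message)  # fall back to plain logging if no alerter is configured


@step(on_failure=notify_on_failure)
def risky_step() -> None:
    raise ValueError("Something went wrong")


@pipeline
def my_pipeline():
    risky_step()
```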
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-an-individual-step.md === # Running Individual Steps in ZenML To execute an individual step in ZenML, call the step like a regular Python function. ZenML will create a pipeline with just that step and run it on the active stack. Note that this pipeline run will be `unlisted`, meaning it won't be linked to any specific pipeline, but it will appear in the "Runs" tab of the dashboard. ## Example Code for Step Execution ```python from zenml import step import pandas as pd from sklearn.base import ClassifierMixin from sklearn.svm import SVC from typing import Tuple, Annotated @step(step_operator="") def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc X_train = pd.DataFrame(...) # Replace with actual data y_train = pd.Series(...) # Replace with actual data # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` ## Running the Step Function Directly To run the step function without ZenML, use the `entrypoint(...)` method: ```python model, train_acc = svc_trainer.entrypoint(X_train=X_train, y_train=y_train) ``` ### Default Behavior Configuration To make direct function calls the default behavior, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will allow calling `svc_trainer(...)` to execute the function directly without using the ZenML stack. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/delete-a-pipeline.md === ### Deleting Pipelines and Pipeline Runs #### Delete a Pipeline To delete a pipeline, use either the CLI or the Python SDK: **CLI:** ```shell zenml pipeline delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_pipeline() ``` **Note:** Deleting a pipeline does not remove associated runs or artifacts. For deleting multiple pipelines, the Python SDK is recommended. If pipelines share a prefix, provide each pipeline's `id` separately. **Example Script for Deleting Multiple Pipelines:** ```python from zenml.client import Client client = Client() pipelines_list = client.list_pipelines(name="startswith:test_pipeline", size=100) target_pipeline_ids = [p.id for p in pipelines_list.items] if input("Do you really want to delete these pipelines? (y/n): ").lower() == 'y': for pid in target_pipeline_ids: client.delete_pipeline(pid) print("Deletion complete") else: print("Deletion cancelled") ``` #### Delete a Pipeline Run To delete a pipeline run, use the following commands: **CLI:** ```shell zenml pipeline runs delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_pipeline_run() ``` This documentation provides essential commands and examples for deleting pipelines and pipeline runs using both CLI and Python SDK. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-execution-order-of-steps.md === # Control Execution Order of Steps in ZenML ZenML determines the execution order of pipeline steps based on data dependencies. 
For example, in the following pipeline, `step_3` relies on the outputs of `step_1` and `step_2`, allowing both to run in parallel before `step_3` starts. ```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1() step_2_output = step_2() step_3(step_1_output, step_2_output) ``` To impose additional execution constraints, you can specify non-data dependencies using invocation IDs. For example, to ensure `step_1` runs after `step_2`, use: ```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1(after="step_2") step_2_output = step_2() step_3(step_1_output, step_2_output) ``` This configuration ensures `step_1` only starts after `step_2` has completed. For multiple upstream steps, pass a list to the `after` argument. For more details on invocation IDs, refer to the [documentation](using-a-custom-step-invocation-id.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/schedule-a-pipeline.md === ### Summary: Scheduling Pipelines in ZenML #### Supported Orchestrators ZenML supports scheduling for the following orchestrators: - **Supported**: Airflow, AzureML, Databricks, HyperAI, Kubeflow, Kubernetes, Sagemaker, Vertex - **Not Supported**: Local, LocalDocker, Skypilot (AWS, Azure, GCP, Lambda), Tekton #### Setting a Schedule To set a schedule for a pipeline, use the `Schedule` class with either a cron expression or human-readable notation: ```python from zenml.config.schedule import Schedule from zenml import pipeline from datetime import datetime @pipeline() def my_pipeline(...): ... # Using cron expression schedule = Schedule(cron_expression="5 14 * * 3") # Using human-readable notation schedule = Schedule(start_time=datetime.now(), interval_second=1800) my_pipeline = my_pipeline.with_options(schedule=schedule) my_pipeline() ``` For more scheduling options, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.schedule.Schedule). #### Pausing/Stopping a Schedule The method to pause or stop a scheduled run varies by orchestrator. For example, in Kubeflow, use the UI to manage scheduled runs. Users are responsible for maintaining the lifecycle of the schedule, as ZenML only initiates the schedule. Running a pipeline with a schedule multiple times creates unique scheduled pipelines. #### Additional Resources For more information on orchestrators, visit the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/step-output-typing-and-annotation.md === ### Summary of Step Output Typing and Annotation in ZenML **Step Outputs Storage**: Outputs are stored in an artifact store. Annotate and name them for clarity. #### Type Annotations - Type annotations are optional but beneficial: - **Type Validation**: Ensures correct input types from upstream steps. - **Better Serialization**: ZenML can select an appropriate materializer for outputs when types are annotated. Custom materializers can be created if built-in ones are inadequate. **Warning**: The built-in `CloudpickleMaterializer` can serialize any object but is not production-ready due to compatibility issues across Python versions and potential security risks from arbitrary code execution. 
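Where the built-in materializers (including `CloudpickleMaterializer`) are not a good fit, a custom materializer can take over serialization for a specific type. The following is a hedged sketch under assumed names (the `MyConfig` type, the materializer class, and the JSON layout are illustrative), not the canonical implementation:

```python
import json
import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyConfig:
    """Toy object that the default materializers may not handle well."""

    def __init__(self, threshold: float):
        self.threshold = threshold


class MyConfigMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyConfig,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyConfig]) -> MyConfig:
        # Read back the JSON file written by `save` from the artifact store.
        with fileio.open(os.path.join(self.uri, "data.json"), "r") as f:
            return MyConfig(**json.load(f))

    def save(self, data: MyConfig) -> None:
        # Persist the object as JSON inside this artifact's URI.
        with fileio.open(os.path.join(self.uri, "data.json"), "w") as f:
            json.dump({"threshold": data.threshold}, f)
```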
#### Example Code ```python from typing import Tuple from zenml import step @step def square_root(number: int) -> float: return number ** 0.5 @step def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b ``` - To enforce type annotations, set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True`. ZenML will raise exceptions for missing annotations. #### Tuple vs Multiple Outputs - A return statement with a tuple literal indicates multiple outputs. Otherwise, it is treated as a single output of type `Tuple`. #### Example Code for Outputs ```python from zenml import step from typing_extensions import Annotated from typing import Tuple @step def my_step() -> Tuple[int, int]: return (0, 1) @step def my_step() -> Annotated[Tuple[int, ...], "my_output"]: return 0, 1 @step def divide(a: int, b: int) -> Tuple[ Annotated[int, "quotient"], Annotated[int, "remainder"] ]: return a // b, a % b ``` #### Output Naming - Default output names are `output` for single outputs and `output_0`, `output_1`, etc., for multiple outputs. - Custom names can be assigned using `Annotated`. **Note**: If no custom names are provided, artifacts are named `{pipeline_name}::{step_name}::output`. ### Additional Resources - For more on output annotation: [Return Multiple Outputs from a Step](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md). - For custom data types: [Handle Custom Data Types](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-caching-behavior.md === ### Caching Behavior in ZenML Pipelines By default, ZenML caches steps in pipelines when both the code and parameters remain unchanged. #### Step and Pipeline Caching Configuration - **Step Level Caching**: You can control caching behavior at the step level using the `@step` decorator. ```python @step(enable_cache=True) # Caches data loading def load_data(parameter: int) -> dict: ... @step(enable_cache=False) # Overrides pipeline-level caching def train_model(data: dict) -> None: ... ``` - **Pipeline Level Caching**: Set caching behavior for the entire pipeline with the `@pipeline` decorator. ```python @pipeline(enable_cache=True) # Caches the pipeline def simple_ml_pipeline(parameter: int): ... ``` #### Modifying Caching Behavior Caching settings can be adjusted after initial configuration: ```python my_step.configure(enable_cache=...) my_pipeline.configure(enable_cache=...) ``` ### Important Note Caching occurs only when both the code and parameters are unchanged. For additional configuration options, refer to the [YAML configuration documentation](../../pipeline-development/use-configuration-files/). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/README.md === ### Summary of ZenML Pipeline Documentation **Building Pipelines**: Use the `@step` and `@pipeline` decorators to create a pipeline in ZenML. #### Example Code: ```python from zenml import pipeline, step @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. 
" f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): train_model(load_data()) ``` **Running the Pipeline**: Execute the pipeline by calling `simple_ml_pipeline()`. **Logging**: Pipeline runs are logged in the ZenML dashboard, which requires a ZenML server (local or remote). **Advanced Features**: Additional capabilities include: - Configuring pipeline/step parameters - Naming and annotating step outputs - Controlling caching behavior - Customizing step invocation IDs - Naming pipeline runs - Using failure/success hooks - Hyperparameter tuning - Attaching and fetching metadata within steps and during pipeline composition - Enabling/disabling log storage - Accessing secrets in a step For detailed instructions on these features, refer to the respective documentation links provided in the original text. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md === ### ZenML Server Environment Configuration The ZenML server environment is configured using environment variables, which must be set prior to deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md === ### Handling Conflicting Dependencies in ZenML This documentation addresses common issues with conflicting dependencies when using ZenML with other libraries. ZenML is designed to be integration-agnostic, allowing flexibility in pipeline execution, but this can lead to dependency conflicts. #### Installing Dependencies You can install integration-specific dependencies using: ```bash zenml integration install ... ``` To verify that all requirements are met after installing additional dependencies, run: ```bash zenml integration list ``` Look for a green tick symbol next to your desired integrations. #### Suggestions for Resolving Dependency Conflicts 1. **Use `pip-compile` for Reproducibility**: Use `pip-compile` from the [pip-tools package](https://pip-tools.readthedocs.io/) to create a static `requirements.txt` file. For `uv` users, consider using `uv pip compile`. Refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management) for practical examples. 2. **Use `pip check`**: Run `pip check` to verify compatibility of your environment's dependencies. It will list any conflicts, which may or may not affect your use case. 3. **Known Dependency Issues**: ZenML has strict version requirements for some packages. For instance, it requires `click~=8.0.3` for its CLI, and using a version greater than this may cause issues. #### Manual Dependency Installation You can manually install dependencies instead of using ZenML's integration installation, though this is not recommended. The command `zenml integration install ...` executes a `pip install` for the specified integration dependencies. To export integration requirements, use: ```bash # Export to a file zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME # Print to console zenml integration export-requirements INTEGRATION_NAME ``` You can modify these requirements as needed. 
If using a remote orchestrator, update the dependencies in a `DockerSettings` object to ensure proper functionality. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/README.md === # Configure Python Environments ZenML deployments involve multiple environments for managing dependencies and configurations. Below is a summary of the key environments: ## Client Environment (Runner Environment) - **Purpose**: Compiles ZenML pipelines, typically in a `run.py` script. - **Types**: - Local development - CI runner in production - ZenML Pro runner - `runner` image orchestrated by the ZenML server - **Dependency Management**: Use package managers like `pip` or `poetry` to install ZenML and required integrations. - **Key Steps**: 1. Compile pipeline via the `@pipeline` function. 2. Create/trigger pipeline and step build environments if running remotely. 3. Trigger a run in the orchestrator. - **Note**: The `@pipeline` function is called only in this environment, focusing on compile-time logic. ## ZenML Server Environment - **Description**: A FastAPI application managing pipelines and metadata, includes the ZenML Dashboard. - **Dependency Management**: Install dependencies during ZenML deployment, mainly for custom integrations. ## Execution Environments - **Local Execution**: No distinct execution environment; client, server, and execution are the same. - **Remote Execution**: ZenML transfers code and environment to the remote orchestrator using Docker images called execution environments. - **Image Configuration**: Built from a base image containing ZenML and Python, with additional pipeline dependencies. Refer to the [containerize your pipeline](../../../how-to/customize-docker-builds/README.md) guide for configuration details. ## Image Builder Environment - **Default Behavior**: Execution environments are created locally using the Docker client, requiring Docker installation. - **Image Builders**: ZenML provides image builders, a specialized stack component, to build and push Docker images in a separate image builder environment. - **Consistency**: If no image builder is configured, ZenML defaults to the local image builder, ensuring consistency across builds. This summary captures the essential technical details and configurations necessary for managing ZenML environments effectively. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md === ### Summary: Distributed Training with Hugging Face's Accelerate in ZenML ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, allowing the use of multiple GPUs or nodes. #### Using 🤗 Accelerate in ZenML Steps To enable distributed execution in training steps, use the `run_with_accelerate` decorator: ```python from zenml import step, pipeline from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True) @step def training_step(some_param: int, ...): ... @pipeline def training_pipeline(some_param: int, ...): training_step(some_param, ...) ``` The decorator accepts arguments similar to the `accelerate launch` CLI command. For a complete list, refer to the [Accelerate CLI documentation](https://huggingface.co/docs/accelerate/en/package_reference/cli#accelerate-launch). 
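Tying the above together, here is a hedged sketch of how an execution environment typically receives its extra dependencies via `DockerSettings` (the package and integration names are illustrative assumptions):

```python
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    requirements=["scikit-learn", "pandas"],  # extra pip packages for the execution image
    required_integrations=["sklearn"],        # or pull in a ZenML integration's requirements
)


@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
```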
#### Configuration Key arguments for `run_with_accelerate` include: - `num_processes`: Number of processes for training. - `cpu`: Force training on CPU. - `multi_gpu`: Enable distributed GPU training. - `mixed_precision`: Set mixed precision mode ('no', 'fp16', 'bf16'). **Usage Notes:** 1. Use `@run_with_accelerate` directly on steps. 2. Use keyword arguments for step calls. 3. Misuse raises a `RuntimeError` with guidance. For a full example, see the [llm-lora-finetuning](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md) project. #### Container Configuration To run steps with Accelerate, ensure your environment is set up correctly: 1. **Specify a CUDA-enabled parent image:** ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. **Add Accelerate as a requirement:** ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) ``` #### Training Across Multiple GPUs ZenML supports training with multiple GPUs on single or multiple nodes. This involves: - Wrapping the training step with `run_with_accelerate`. - Configuring Accelerate arguments (e.g., `num_processes`, `multi_gpu`). - Ensuring training code is compatible with distributed training. For assistance, connect with ZenML support via [Slack](https://zenml.io/slack). By utilizing the Accelerate integration, you can efficiently scale training processes while maintaining ZenML's pipeline structure. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/README.md === ### Summary: Running Machine Learning Pipelines on GPU with ZenML #### Overview ZenML allows you to scale machine learning pipelines to the cloud, utilizing GPU-backed hardware for resource-intensive tasks. This requires configuring `ResourceSettings` and ensuring your container environment is CUDA-enabled. #### Specifying Resource Requirements To allocate resources for specific steps, use `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml import step @step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=2, memory="8GB")}) def training_step(...) -> ...: # train a model ``` For orchestrators like Skypilot that do not support `ResourceSettings` directly, use their specific settings: ```python from zenml import step from zenml.integrations.skypilot.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings(cpus="2", memory="16", accelerators="V100:2") @step(settings={"orchestrator": skypilot_settings}) def training_step(...) -> ...: # train a model ``` Refer to each orchestrator's documentation for specific resource support. #### CUDA-Enabled Container Configuration To utilize GPU capabilities, ensure your container is CUDA-enabled: 1. **Specify a CUDA-enabled parent image**: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. 
**Add ZenML as a pip requirement**: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["zenml==0.39.1", "torchvision"] ) ``` Ensure compatibility between local and remote environments regarding CUDA versions. #### Resetting CUDA Cache Resetting the CUDA cache can prevent issues during GPU-intensive tasks. Use the following function at the beginning of GPU-enabled steps:

```python
import gc

import torch

from zenml import step


def cleanup_memory() -> None:
    # Keep collecting garbage until nothing more is freed, then clear the CUDA cache.
    while gc.collect():
        torch.cuda.empty_cache()


@step
def training_step(...):
    cleanup_memory()
    # train a model
```

#### Multi-GPU Training ZenML supports training across multiple GPUs on a single node. To implement this: - Create a script/function for model training that runs in parallel across GPUs. - Call this function from within the ZenML step. For assistance, connect with the ZenML community on Slack. ### Conclusion By following these guidelines, you can effectively run machine learning pipelines on GPU hardware using ZenML, ensuring optimal resource allocation and performance. ================================================== === File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md === ### Summary: Creating an External Integration in ZenML ZenML aims to streamline the MLOps landscape by providing numerous integrations with popular tools and allowing users to implement custom stack components. This guide outlines how to contribute your integration to the ZenML codebase. #### Step 1: Plan Your Integration Identify the categories your integration fits into by referring to the ZenML categories [here](../../component-guide/README.md). Note that one integration can belong to multiple categories. #### Step 2: Create Stack Component Flavors Develop individual stack component flavors corresponding to the selected categories and test them as custom flavors before packaging (a sketch of such a flavor module is shown after Step 4 below). For example, to register a custom orchestrator flavor: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` Ensure ZenML is initialized at the root of your repository for proper resolution. #### Step 3: Create an Integration Class 1. **Clone the Repo**: Clone the [ZenML repository](https://github.com/zenml-io/zenml) and set up your local environment. 2. **Create Integration Directory**: Under `src/zenml/integrations/`, create a folder for your integration with a structure like:

```
/src/zenml/integrations/
    <example_integration>
        ├── artifact-stores/
        ├── flavors/
        └── __init__.py
```

3. **Define Integration Name**: In `zenml/integrations/constants.py`, add: ```python EXAMPLE_INTEGRATION = "<name-of-integration>" ``` 4. **Create Integration Class**: In `src/zenml/integrations/<example_integration>/__init__.py`, subclass `Integration`:

```python
from typing import List, Type

from zenml.integrations.constants import <EXAMPLE_INTEGRATION>
from zenml.integrations.integration import Integration
from zenml.stack import Flavor


class ExampleIntegration(Integration):
    """Definition of the example integration for ZenML."""

    NAME = <EXAMPLE_INTEGRATION>
    REQUIREMENTS = ["<INSERT PYTHON REQUIREMENTS HERE>"]

    @classmethod
    def flavors(cls) -> List[Type[Flavor]]:
        # Import the flavor(s) lazily and return them here.
        from zenml.integrations.<example_integration>.flavors import <ExampleIntegrationFlavor>
        return [<ExampleIntegrationFlavor>]


ExampleIntegration.check_installation()
```

Refer to the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for guidance. 5. **Import Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`. #### Step 4: Create a PR Submit a [PR](https://github.com/zenml-io/zenml/compare) to ZenML for review by core maintainers.
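As a reference for Step 2, a rough sketch of what a `flavors/my_flavor.py` module backing the `zenml orchestrator flavor register` command above could look like; the `MyOrchestrator` implementation class, its config fields, and its module path are assumptions for illustration, not part of the guide:

```python
from typing import Type

from zenml.orchestrators import (
    BaseOrchestrator,
    BaseOrchestratorConfig,
    BaseOrchestratorFlavor,
)


class MyOrchestratorConfig(BaseOrchestratorConfig):
    """Configuration options exposed by the custom flavor (hypothetical)."""

    my_setting: str = "default"


class MyOrchestratorFlavor(BaseOrchestratorFlavor):
    """Flavor that tells ZenML where to find the orchestrator implementation."""

    @property
    def name(self) -> str:
        # The name used with `--flavor` when registering stack components.
        return "my_flavor"

    @property
    def config_class(self) -> Type[MyOrchestratorConfig]:
        return MyOrchestratorConfig

    @property
    def implementation_class(self) -> Type[BaseOrchestrator]:
        # Imported lazily; `MyOrchestrator` is a hypothetical implementation class.
        from flavors.my_orchestrator import MyOrchestrator

        return MyOrchestrator
```

Keeping the `implementation_class` import inside the property lets the flavor be registered without pulling in the implementation's heavier dependencies.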
This guide provides the essential steps to create and contribute an integration to ZenML, enhancing its extensibility in the MLOps ecosystem. ================================================== === File: docs/book/how-to/contribute-to-zenml/README.md === # Contributing to ZenML Thank you for considering contributing to ZenML! ## How to Contribute We welcome contributions such as new features, documentation improvements, integrations, or bug reports. For detailed guidelines on contributing, including adding custom integrations, please refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/manage-zenml-server/upgrade-zenml-server.md === ### ZenML Server Upgrade Guide This guide outlines how to upgrade your ZenML server based on different deployment methods. Always refer to the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. #### General Recommendations - Upgrade promptly after a new version release to benefit from improvements and fixes. - Ensure data persistence (via persistent storage or an external MySQL instance) before upgrading. Backup is recommended. ### Upgrade Methods #### Docker 1. **Delete Existing Container:**

```bash
docker ps  # Find your container ID
docker stop <CONTAINER_ID>
docker rm <CONTAINER_ID>
```

2. **Deploy New Version:**

```bash
docker run -it -d -p 8080:8080 --name <CONTAINER_NAME> zenmldocker/zenml-server:<VERSION>
```

#### Kubernetes with Helm 1. **In-Place Upgrade (no config changes):**

```bash
helm -n <NAMESPACE> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> --reuse-values
```

2. **Upgrade with Configuration Changes:** - Extract current values:

```bash
helm -n <NAMESPACE> get values zenml-server > custom-values.yaml
```

- Modify `custom-values.yaml` as needed. - Upgrade with modified values:

```bash
helm -n <NAMESPACE> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> -f custom-values.yaml
```

> **Note:** Avoid changing the container image tag in the Helm chart unless necessary, as it may lead to compatibility issues. ### Important Warnings - **Downgrading**: Not supported and may cause unexpected behavior. - **Python Client Version**: Must match the server version. This summary provides essential steps and precautions for upgrading ZenML servers across different deployment methods. ================================================== === File: docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md === ### Best Practices for Upgrading ZenML #### Upgrading Your Server 1. **Data Backups** - **Database Backup**: Create a backup of your MySQL database before upgrading to allow rollback if needed. - **Automated Backups**: Set up daily automated backups using managed services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. 2. **Upgrade Strategies** - **Staged Upgrade**: Use two ZenML server instances (old and new) for gradual migration of services. - **Team Coordination**: Coordinate upgrade timing among teams to minimize disruption. - **Separate ZenML Servers**: Consider dedicated servers for teams needing different upgrade schedules. ZenML Pro supports multi-tenancy for this purpose. 3. **Minimizing Downtime** - **Upgrade Timing**: Schedule upgrades during low-activity periods. - **Avoid Mid-Pipeline Upgrades**: Be cautious of upgrades that may interrupt long-running pipelines. #### Upgrading Your Code 1.
**Testing and Compatibility** - **Local Testing**: Test locally after upgrading (`pip install zenml --upgrade`) and run old pipelines to check compatibility. - **End-to-End Testing**: Develop simple tests to ensure compatibility with your pipeline code. Refer to ZenML's [test suite](https://github.com/zenml-io/zenml/tree/main/tests). - **Artifact Compatibility**: Be cautious with pickle-based materializers. Test loading older artifacts with the new version: ```python from zenml.client import Client artifact = Client().get_artifact_version('YOUR_ARTIFACT_ID') loaded_artifact = artifact.load() ``` 2. **Dependency Management** - **Python Version**: Ensure compatibility of your Python version with the new ZenML version. Refer to the [installation guide](../../getting-started/installation.md). - **External Dependencies**: Check for compatibility of external dependencies with the new ZenML version by reviewing the [release notes](https://github.com/zenml-io/zenml/releases). 3. **Handling API Changes** - **Changelog Review**: Review the [changelog](https://github.com/zenml-io/zenml/releases) for breaking changes and new instructions. - **Migration Scripts**: Utilize provided [migration scripts](migration-guide/migration-guide.md) for database schema changes. By adhering to these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server and code. Adapt these guidelines to fit your unique environment and infrastructure. ================================================== === File: docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md === # Best Practices for Using ZenML Server in Production This guide outlines best practices for deploying ZenML server in production environments, focusing on scalability, performance, and monitoring. ## Autoscaling Replicas To handle larger, longer-running pipelines, enable autoscaling based on your deployment environment: ### Kubernetes with Helm Use the following configuration in your Helm chart: ```yaml autoscaling: enabled: true minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 80 ``` ### ECS (AWS) - Navigate to your ECS service. - Click "Update Service" and enable autoscaling, setting minimum and maximum tasks. ### Cloud Run (GCP) - Go to your Cloud Run service. - Click "Edit & Deploy new Revision" and set minimum and maximum instances. ### Docker Compose Scale your service with: ```bash docker compose up --scale zenml-server=N ``` ## High Connection Pool Values Increase server performance by adjusting thread pool size: ```yaml zenml: threadPoolSize: 100 ``` Ensure `zenml.database.poolSize` and `zenml.database.maxOverflow` are set to accommodate the thread pool size. ## Scaling the Backing Database Monitor and scale your database based on: - **CPU Utilization**: Scale if consistently above 50%. - **Freeable Memory**: Scale if below 100-200 MB. ## Setting Up Ingress/Load Balancer Securely expose your ZenML server: ### Kubernetes with Helm Enable ingress: ```yaml zenml: ingress: enabled: true className: "nginx" ``` ### ECS Use Application Load Balancers to route traffic. ### Cloud Run Utilize Cloud Load Balancing for traffic routing. ### Docker Compose Set up an NGINX server as a reverse proxy. ## Monitoring Implement monitoring to ensure smooth operation: ### Kubernetes with Helm Use Prometheus and Grafana: ```yaml sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) ``` ### ECS Leverage CloudWatch for metrics on CPU and memory utilization. 
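As an illustration of the CloudWatch approach, a hedged sketch of pulling average CPU utilization for an ECS service with the AWS CLI (cluster name, service name, and time range are placeholders):

```bash
aws cloudwatch get-metric-statistics \
  --namespace AWS/ECS \
  --metric-name CPUUtilization \
  --dimensions Name=ClusterName,Value=<CLUSTER_NAME> Name=ServiceName,Value=<SERVICE_NAME> \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-01T01:00:00Z \
  --period 300 \
  --statistics Average
```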
### Cloud Run Use Cloud Monitoring for metrics on container performance. ## Backups Establish a backup strategy for critical data: - Automated backups with a 30-day retention. - Periodic exports to external storage (e.g., S3, GCS). - Manual backups before server upgrades. This summary encapsulates key practices for deploying and managing ZenML server in production environments, ensuring scalability, performance, and data integrity. ================================================== === File: docs/book/how-to/manage-zenml-server/README.md === # Manage Your ZenML Server This section provides best practices for upgrading your ZenML server, using it in production, and troubleshooting. It includes recommended upgrade steps and migration guides for transitioning between versions. ### Key Points: - **Upgrading ZenML Server**: Follow the recommended steps for a smooth upgrade process. - **Production Use**: Tips for effectively utilizing ZenML in a production environment. - **Troubleshooting**: Guidance on resolving common issues. - **Migration Guides**: Instructions for moving between specific versions of ZenML. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md === # Troubleshooting ZenML Deployment ## Viewing Logs To debug issues in your ZenML deployment, start by analyzing logs based on your deployment method. ### Kubernetes 1. Check running pods: ```bash kubectl -n get pods ``` 2. If pods aren't running, get logs for all pods: ```bash kubectl -n logs -l app.kubernetes.io/name=zenml ``` 3. For specific container logs (either `zenml-db-init` or `zenml`): ```bash kubectl -n logs -l app.kubernetes.io/name=zenml -c ``` Use `--tail` to limit lines or `--follow` for real-time logs. ### Docker 1. For CLI deployment: ```shell zenml logs -f ``` 2. For manual `docker run`: ```shell docker logs zenml -f ``` 3. For `docker compose`: ```shell docker compose -p zenml logs -f ``` ## Fixing Database Connection Problems Common MySQL connection issues can be diagnosed from `zenml-db-init` logs: - **Access Denied**: Check username/password. - **Can't Connect**: Verify host settings. Test connection: ```bash mysql -h -u -p ``` For Kubernetes, use `kubectl port-forward` to connect to MySQL locally. ## Fixing Database Initialization Problems If migrating from a newer to an older ZenML version results in `Revision not found` errors, recreate the database: 1. Log in to MySQL: ```bash mysql -h -u -p ``` 2. Drop the existing database: ```sql drop database ; ``` 3. Create a new database: ```sql create database ; ``` 4. Restart your Kubernetes pods or Docker container to reinitialize the database. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-an-api-token.md === ### ZenML API Token Authentication **Overview**: API tokens allow temporary authentication with the ZenML server for automation tasks, valid for up to 1 hour and scoped to your user account. #### Generating an API Token To create a new API token: 1. Go to the server's Settings page in your ZenML dashboard. 2. Select "API Tokens" from the left sidebar. 3. Click "Create new token." A dialog will display your new API token upon generation. #### Programmatic Access Use the generated API tokens for programmatic access to the ZenML server's REST API. 
This method is ideal for quick access without using the ZenML CLI or Python client. For detailed instructions, refer to the [API reference section](../../../reference/api-reference.md#using-a-short-lived-api-token). ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md === ### ZenML Server Connection Guide **Overview**: Authenticate with the ZenML Server using the ZenML CLI and web-based login. **Login Command**: ```bash zenml login https://... ``` This command initiates a browser-based validation process. You can choose to trust your device: - **Trust this device**: Issues a 30-day token. - **Do not trust**: Issues a 24-hour token. **Note**: Device management for ZenML Pro tenants is not yet supported. **Manage Authorized Devices**: - List all authorized devices: ```bash zenml authorized-device list ``` - Describe a specific device: ```bash zenml authorized-device describe ``` - Invalidate a token: ```bash zenml authorized-device lock ``` **Steps to Connect**: 1. Execute `zenml login `. 2. Decide whether to trust the device. 3. List permitted devices with `zenml authorized-device list`. 4. Lock a device token if needed with `zenml authorized-device lock `. **Security Reminder**: Regularly manage device trust levels to maintain security. Lock devices immediately if trust needs to be revoked, as each token can access sensitive data. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md === ### Summary of ZenML Service Account and API Key Authentication To connect to a ZenML server from non-interactive environments (e.g., CI/CD, serverless functions), you can use a service account and an API key. #### Creating a Service Account and API Key Use the following command to create a service account and generate an API key: ```bash zenml service-account create ``` The API key is displayed in the output and cannot be retrieved later. #### Connecting to ZenML Server You can authenticate using the API key via: 1. **CLI Prompt:** ```bash zenml login https://... --api-key ``` 2. **Environment Variables:** ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY= ``` After setting these variables, you can interact with the server without needing to run `zenml login`. #### Managing Service Accounts and API Keys - List service accounts: ```bash zenml service-account list ``` - List API keys for a service account: ```bash zenml service-account api-key list ``` - Describe a service account or API key: ```bash zenml service-account describe zenml service-account api-key describe ``` #### Rotating API Keys API keys do not expire but should be rotated regularly for security. To rotate an API key: ```bash zenml service-account api-key rotate ``` To retain the old key for a specified period (e.g., 60 minutes): ```bash zenml service-account api-key rotate --retain 60 ``` #### Deactivating Service Accounts or API Keys To deactivate a service account or API key: ```bash zenml service-account update --active false zenml service-account api-key update --active false ``` #### Programmatic Access You can use the API key to obtain short-lived tokens for secure programmatic access to the ZenML REST API. This method is detailed in the API reference section. #### Security Notice Regularly rotate API keys and deactivate/delete unused service accounts and keys to protect your data and infrastructure. 
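As an illustration of the programmatic access described above, a minimal sketch of calling the ZenML REST API with a bearer token (the server URL and token value are placeholders; see the API reference for how to obtain a short-lived token):

```bash
# Assumes a token generated in the dashboard or obtained via your service account API key.
export ZENML_API_TOKEN=<YOUR_API_TOKEN>

# List pipelines registered on the server.
curl -H "Authorization: Bearer ${ZENML_API_TOKEN}" \
  "https://<your-zenml-server>/api/v1/pipelines"
```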
================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/README.md === ### Connecting to ZenML After deploying ZenML, users can connect to it through various methods. For detailed deployment instructions, refer to the [production guide](../../../user-guide/production-guide/deploying-zenml.md). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md === ### Migration Guide: ZenML 0.13.2 to 0.20.0 **Last Updated:** 2023-07-24 ZenML 0.20.0 introduces significant architectural changes that may require modifications to existing stacks and pipelines. Follow this guide for a smooth migration. #### Key Changes: - **Metadata Store:** ZenML now manages its own Metadata Store, eliminating the need for separate implementations. If using remote Metadata Stores, migrate to a ZenML server deployment. - **ZenML Dashboard:** A new dashboard is available for all deployments. - **Profiles Removed:** ZenML Profiles are replaced by Projects. Existing Profiles must be manually migrated. - **Decoupled Configuration:** Stack Component configuration is now separate from implementation. Custom components may need updates. - **Collaboration Features:** The new ZenML server allows sharing of stacks and components among users. #### Migration Steps: 1. **Backup Existing Metadata:** Before upgrading, back up your metadata stores. 2. **Upgrade ZenML:** Use `pip install zenml==0.20.0`. 3. **Connect to ZenML Server:** If using a server, run `zenml connect`. 4. **Migrate Pipeline Runs:** - For SQLite: ```bash zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db ``` - For other databases (MySQL): ```bash zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD ``` #### ZenML Server Commands: - **Deploy Server:** `zenml deploy --aws` - **Start Local Server:** `zenml up` - **Check Server Status:** `zenml status` #### Dashboard Access: Launch the dashboard with: ```bash zenml up ``` Access it at `http://localhost:8237`. #### Profile Migration: 1. Update ZenML to 0.20.0 (Profiles will be invalidated). 2. Use: ```bash zenml profile list zenml profile migrate PATH_TO_PROFILE ``` to migrate stacks and components. #### Configuration Changes: - **Rename Classes:** - `Repository` → `Client` - `BaseStepConfig` → `BaseParameters` - **New Configuration Method:** Use `BaseSettings` for pipeline configurations. - **Deprecation of `enable_xxx` Decorators:** Replace with direct settings in the step decorator. #### Example Migration: For a step: ```python @step( experiment_tracker="mlflow_stack_comp_name", settings={ "experiment_tracker.mlflow": { "experiment_name": "name", "nested": False } } ) ``` #### Future Changes: - Potential removal of the secrets manager from the stack. - Possible deprecation of `StepContext`. #### Reporting Issues: For bugs or feature requests, contact the ZenML community on [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). This guide provides a concise overview of the migration process and key changes in ZenML 0.20.0. Ensure to follow the steps carefully for a successful transition. 
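To make the `Repository` → `Client` rename above concrete, a small before/after sketch (attribute names are indicative and may differ slightly between versions):

```python
# Before (ZenML <= 0.13.x): Repository was the main entry point.
from zenml.repository import Repository

repo = Repository()
print(repo.active_stack.name)

# After (ZenML 0.20.0+): Repository is renamed to Client.
from zenml.client import Client

client = Client()
print(client.active_stack_model.name)
```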
================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md === # ZenML Migration Guide Summary ## Overview Migrations are required for ZenML releases with breaking changes, specifically for minor version increments (e.g., `0.X` to `0.Y`) and major version increments (e.g., `0.1.X` to `0.2.X`). ### Release Type Examples - **No Breaking Changes**: `0.40.2` to `0.40.3` - No migration needed. - **Minor Breaking Changes**: `0.40.3` to `0.41.0` - Migration required. - **Major Breaking Changes**: `0.39.1` to `0.40.0` - Significant changes in code usage. ## Major Migration Guides Follow these guides sequentially for major version upgrades: - [0.13.2 → 0.20.0](migration-zero-twenty.md) - [0.23.0 → 0.30.0](migration-zero-thirty.md) - [0.39.1 → 0.41.0](migration-zero-forty.md) - [0.58.2 → 0.60.0](migration-zero-sixty.md) ## Release Notes For minor breaking changes (e.g., `0.40.3` to `0.41.0`), refer to the official [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for details on changes. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md === ### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2) #### Overview ZenML has upgraded to Pydantic v2, introducing critical updates that may affect user experience due to stricter validation processes. Users may encounter new validation errors. For issues, contact us on [GitHub](https://github.com/zenml-io/zenml) or [Slack](https://zenml.io/slack-invite). #### Key Dependency Changes - **SQLModel**: Upgraded from `0.0.8` to `0.0.18` to ensure compatibility with Pydantic v2. - **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy should refer to the [migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). #### Pydantic v2 Features - Enhanced performance using Rust. - New features in model design, configuration, validation, and serialization. Refer to the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/) for details. #### Integration Changes - **Airflow**: Removed dependencies due to incompatibility with SQLAlchemy v1. Use ZenML to create Airflow pipelines in a separate environment. - **AWS**: Updated `sagemaker` to version `2.172.0` to support `protobuf` 4. - **Evidently**: Updated to support Pydantic v2 starting from version `0.4.16`. - **Feast**: Removed extra `redis` dependency for compatibility with Pydantic v2. - **GCP/Kubeflow**: Upgraded `kfp` dependency to v2, which has no Pydantic dependencies. Expect functional changes in the vertex step operator. - **Great Expectations**: Updated dependency to `great-expectations>=0.17.15,<1.0` for Pydantic v2 compatibility. - **MLflow**: Compatible with both Pydantic versions but may downgrade to v1 due to installation issues. Watch for deprecation warnings. - **Label Studio**: Updated to support Pydantic v2 with the release of version 1.0. - **Skypilot**: Incompatibility with `azurecli` prevents installation of `skypilot[azure]`. Users should stay on the previous ZenML version until resolved. - **Tensorflow**: Requires `tensorflow>=2.12.0` due to dependency updates. Issues may arise on Python 3.8; consider using a higher Python version. - **Tekton**: Updated to use `kfp` v2 for compatibility with Pydantic v2. #### Important Note Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with previously incompatible integrations. 
It is recommended to set up a fresh Python environment for the upgrade. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md === ### Migration Guide: ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 **Important Note:** Migrating to ZenML `0.30.0` involves non-reversible database changes. Downgrading to `<=0.23.0` post-migration is not possible. If using an older version, first follow the [0.20.0 Migration Guide](migration-zero-twenty.md) to avoid database migration issues. **Key Changes in ZenML 0.30.0:** - Removed `ml-pipelines-sdk` dependency. - Pipeline runs and artifacts are now stored natively in the ZenML database. **Migration Steps:** 1. Install ZenML `0.30.0`: ```bash pip install zenml==0.30.0 ``` 2. Verify installation: ```bash zenml version # Should return 0.30.0 ``` 3. The database migration will occur automatically upon executing any `zenml ...` CLI command after installation. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md === ### Migration Guide: ZenML 0.39.1 to 0.41.0 ZenML versions 0.40.0 to 0.41.0 introduced a new syntax for defining steps and pipelines. The old syntax is deprecated and will be removed in future releases. #### Overview **Old Syntax Example:** ```python from zenml.steps import BaseParameters, Output, StepContext, step from zenml.pipelines import pipeline class MyStepParameters(BaseParameters): param_1: int param_2: Optional[float] = None @step def my_step(params: MyStepParameters, context: StepContext) -> Output(int_output=int, str_output=str): result = int(params.param_1 * (params.param_2 or 1)) result_uri = context.get_output_artifact_uri() return result, result_uri @pipeline def my_pipeline(my_step): my_step() step_instance = my_step(params=MyStepParameters(param_1=17)) pipeline_instance = my_pipeline(my_step=step_instance) pipeline_instance.run(schedule=schedule) ``` **New Syntax Example:** ```python from typing import Annotated, Optional, Tuple from zenml import get_step_context, pipeline, step @step def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: result = int(param_1 * (param_2 or 1)) result_uri = get_step_context().get_output_artifact_uri() return result, result_uri @pipeline def my_pipeline(): my_step(param_1=17) my_pipeline = my_pipeline.with_options(enable_cache=False) my_pipeline() ``` #### Key Changes 1. **Defining Steps:** - Old: Use `BaseParameters` for parameters. - New: Define parameters directly in the step function or use `pydantic.BaseModel`. 2. **Calling Steps:** - Old: Use `my_step.entrypoint()`. - New: Call the step directly with `my_step()`. 3. **Defining Pipelines:** - Old: Steps are arguments in the pipeline function. - New: The pipeline function calls steps directly. 4. **Configuring Pipelines:** - Old: Use `pipeline_instance.configure(...)`. - New: Use `with_options(...)` method. 5. **Running Pipelines:** - Old: Create an instance and call `run(...)`. - New: Call the pipeline directly. 6. **Scheduling Pipelines:** - Old: Set schedule after instance creation. - New: Use `with_options(...)` to set schedule. 7. **Fetching Pipeline Runs:** - Old: Access last run via `pipeline.get_runs()`. - New: Use `pipeline.last_run`. 8. **Controlling Execution Order:** - Old: Use `step.after(...)`. - New: Pass `after` argument when calling a step. 9. 
**Defining Steps with Multiple Outputs:** - Old: Use `Output` class. - New: Use `Tuple` with optional custom output names. 10. **Accessing Run Information:** - Old: Pass `StepContext` as an argument. - New: Use `get_step_context()` to access run info. This guide provides a concise overview of the migration from ZenML 0.39.1 to 0.41.0, focusing on the syntax changes and how to adapt your existing pipelines and steps to the new format. For further details, refer to the ZenML documentation. ================================================== === File: docs/book/how-to/configuring-zenml/configuring-zenml.md === ### Configuring ZenML's Default Behavior This guide outlines methods to configure ZenML's behavior in various situations. Users can adapt specific aspects of ZenML to suit their needs. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/popular-integrations/skypilot.md === ### Summary of Skypilot with ZenML Documentation **Overview:** The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost efficiency and high GPU availability. **Prerequisites:** - Install ZenML SkyPilot integration for your cloud provider: ```bash zenml integration install skypilot_ ``` - Ensure Docker is installed and running. - Set up a remote artifact store and container registry in your ZenML stack. - Have a remote ZenML deployment. - Obtain permissions to provision VMs on your cloud provider. - Configure a service connector for authentication (not required for Lambda Labs). **Configuration Steps:** *For AWS, GCP, Azure:* 1. Install SkyPilot integration and connectors. 2. Register a service connector with necessary credentials. 3. Register the orchestrator and connect it to the service connector. 4. Register and activate a stack with the orchestrator. ```bash zenml service-connector register -skypilot-vm -t --auto-configure zenml orchestrator register --flavor vm_ zenml orchestrator connect --connector -skypilot-vm zenml stack register -o ... --set ``` *For Lambda Labs:* 1. Install SkyPilot Lambda integration. 2. Register a secret with your Lambda Labs API key. 3. Register the orchestrator using the API key secret. 4. Register and activate a stack with the orchestrator. ```bash zenml secret create lambda_api_key --scope user --api_key= zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} zenml stack register -o ... --set ``` **Running a Pipeline:** Once configured, run any ZenML pipeline with the SkyPilot VM Orchestrator, where each step executes in a Docker container on a provisioned VM. **Additional Configuration:** You can customize the orchestrator using cloud-specific `Settings` objects to specify VM size, spot usage, and region. ```python from zenml.integrations.skypilot_.flavors.skypilot_orchestrator__vm_flavor import SkypilotOrchestratorSettings skypilot_settings = SkypilotOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region=, ) @pipeline(settings={"orchestrator": skypilot_settings}) ``` You can also configure resources per step: ```python @step(settings={"orchestrator": SkypilotOrchestratorSettings(...)}) def resource_intensive_step(): ... ``` For more details, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). 
================================================== === File: docs/book/how-to/popular-integrations/kubernetes.md === ### Summary: Deploying ZenML Pipelines on Kubernetes The ZenML Kubernetes Orchestrator enables running ML pipelines on a Kubernetes cluster without requiring Kubernetes code. It serves as a lightweight alternative to more complex orchestrators like Airflow or Kubeflow. #### Prerequisites To use the Kubernetes Orchestrator, ensure you have: - ZenML `kubernetes` integration installed (`zenml integration install kubernetes`) - Docker installed and running - `kubectl` installed - A remote artifact store and container registry in your ZenML stack - A deployed Kubernetes cluster - (Optional) A configured `kubectl` context for the cluster #### Deploying the Orchestrator You need a Kubernetes cluster to run the orchestrator. Various deployment methods exist, which can be explored in the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md). #### Configuring the Orchestrator You can configure the orchestrator in two ways: 1. **Using a Service Connector**: ```bash zenml orchestrator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Using `kubectl` Context**: ```bash zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... --set ``` #### Running a Pipeline To run a ZenML pipeline with the Kubernetes Orchestrator: ```bash python your_pipeline.py ``` This command creates a Kubernetes pod for each pipeline step. You can manage the pods using `kubectl` commands. For advanced configurations, refer to the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md). ================================================== === File: docs/book/how-to/popular-integrations/gcp-guide.md === # Minimal GCP Stack Setup Guide This guide outlines the steps to quickly set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. ## Steps to Set Up ### 1. Choose a GCP Project Select or create a GCP project in the console. Ensure a billing account is attached. ```bash gcloud projects create --billing-project= ``` ### 2. Enable GCloud APIs Enable the following APIs in your GCP project: - Cloud Functions API - Cloud Run Admin API - Cloud Build API - Artifact Registry API - Cloud Logging API ### 3. Create a Dedicated Service Account Create a service account with the following roles: - AI Platform Service Agent - Storage Object Admin ### 4. Create a JSON Key for the Service Account Generate a JSON key for the service account. ```bash export JSON_KEY_FILE_PATH= ``` ### 5. Create a Service Connector in ZenML Authenticate ZenML with GCP using the service account. ```bash zenml integration install gcp \ && zenml service-connector register gcp_connector \ --type gcp \ --auth-method service-account \ --service_account_json=@${JSON_KEY_FILE_PATH} \ --project_id= ``` ### 6. Create Stack Components #### Artifact Store Create a GCS bucket and register it as an artifact store. ```bash export ARTIFACT_STORE_NAME=gcp_artifact_store zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp \ --path=gs:// zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i ``` #### Orchestrator Register Vertex AI as the orchestrator. 
```bash export ORCHESTRATOR_NAME=gcp_vertex_orchestrator zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex \ --project= --location=europe-west2 zenml orchestrator connect ${ORCHESTRATOR_NAME} -i ``` #### Container Registry Register a GCP container registry. ```bash export CONTAINER_REGISTRY_NAME=gcp_container_registry zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri= zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i ``` ### 7. Create Stack Register the stack with the created components. ```bash export STACK_NAME=gcp_stack zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} \ -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set ``` ## Cleanup To delete the project and all associated resources: ```bash gcloud project delete ``` ## Best Practices 1. **Use IAM and Least Privilege Principle**: Grant only necessary permissions and regularly audit IAM roles. 2. **Leverage GCP Resource Labeling**: Implement a labeling strategy for resource management. ```bash gcloud storage buckets update gs://your-bucket-name --update-labels=project=zenml,environment=production ``` 3. **Implement Cost Management Strategies**: Use GCP's Cost Management tools to monitor spending and set budget alerts. ```bash gcloud billing budgets create --billing-account=BILLING_ACCOUNT_ID --display-name="ZenML Monthly Budget" --budget-amount=1000 --threshold-rule=percent=90 ``` 4. **Implement a Robust Backup Strategy**: Regularly back up critical data and enable versioning on GCS. ```bash gsutil versioning set on gs://your-bucket-name ``` By following these steps and best practices, you can efficiently set up and manage a GCP stack for ZenML. ================================================== === File: docs/book/how-to/popular-integrations/azure-guide.md === # Azure Stack Setup for ZenML Pipelines This guide provides a concise process to set up a minimal production stack on Azure for running ZenML pipelines. ## Prerequisites - Active Azure account - ZenML installed - ZenML Azure integration: `zenml integration install azure` ## Steps to Set Up Azure Stack ### 1. Create Service Principal 1. Navigate to Azure Portal > App Registrations. 2. Click `+ New registration`, name it, and register. 3. Note the Application ID and Tenant ID. 4. Under `Certificates & secrets`, create a client secret and note its value. ### 2. Create Resource Group and AzureML Instance 1. Go to Azure Portal > Resource Groups > `+ Create`. 2. After creating the resource group, click `+ Create` in the overview page. 3. Search for and select `Azure Machine Learning` to create an AzureML workspace. Consider creating a container registry. ### 3. Create Role Assignments 1. In the resource group, go to `Access control (IAM)` > `+ Add` role assignment. 2. Assign the following roles: `AzureML Compute Operator`, `AzureML Data Scientist`, `AzureML Registry User`. 3. Select your registered app by its ID for each role. ### 4. Create ZenML Azure Service Connector ```bash zenml service-connector register azure_connector --type azure \ --auth-method service-principal \ --client_secret= \ --tenant_id= \ --client_id= ``` ### 5. Create Stack Components #### Artifact Store (Azure Blob Storage) 1. Create a container in your AzureML workspace's storage account. 
```bash zenml artifact-store register azure_artifact_store -f azure \ --path= \ --connector azure_connector ``` #### Orchestrator (AzureML) ```bash zenml orchestrator register azure_orchestrator -f azureml \ --subscription_id= \ --resource_group= \ --workspace= \ --connector azure_connector ``` #### Container Registry (Azure Container Registry) ```bash zenml container-registry register azure_container_registry -f azure \ --uri= \ --connector azure_connector ``` ### 6. Create ZenML Stack ```shell zenml stack register azure_stack \ -o azure_orchestrator \ -a azure_artifact_store \ -c azure_container_registry \ --set ``` ### 7. Run a ZenML Pipeline Define a simple pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from Azure!" @pipeline def azure_pipeline(): hello_world() if __name__ == "__main__": azure_pipeline() ``` Save as `run.py` and execute: ```shell python run.py ``` ## Next Steps - Explore ZenML's [production guide](../../user-guide/production-guide/README.md) for best practices. - Check ZenML's [integrations](../../component-guide/README.md) with other tools. - Join the [ZenML community](https://zenml.io/slack) for support and networking. ================================================== === File: docs/book/how-to/popular-integrations/kubeflow.md === ### Summary of Kubeflow Orchestrator Documentation **Overview**: The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without writing Kubeflow-specific code. #### Prerequisites - Install ZenML `kubeflow` integration: `zenml integration install kubeflow` - Ensure Docker is installed and running. - `kubectl` is optional. - Have a Kubernetes cluster with Kubeflow Pipelines installed. - Set up a remote artifact store and container registry in your ZenML stack. - Deploy a remote ZenML server in the cloud. - (Optional) Name of your Kubernetes context pointing to the remote cluster. #### Configuring the Orchestrator Two configuration methods: 1. **Using Service Connector** (recommended for cloud-managed clusters): ```bash zenml orchestrator register --flavor kubeflow zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack update -o ``` 2. **Using `kubectl`**: ```bash zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack update -o ``` #### Running a Pipeline To run a ZenML pipeline: ```bash python your_pipeline.py ``` This creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI. #### Additional Configuration Configure the orchestrator with `KubeflowOrchestratorSettings`: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={ "affinity": {...}, "tolerations": [...] } ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` #### Multi-Tenancy Deployments For multi-tenant setups, register the orchestrator with: ```bash zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Provide credentials in the settings: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="admin", client_password="abc123", user_namespace="namespace_name" ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` For further details, refer to the [full Kubeflow Orchestrator documentation](../../component-guide/orchestrators/kubeflow.md). 
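Since both the Kubernetes and Kubeflow orchestrators above run each pipeline step in its own pod, here is a minimal, hedged sketch for inspecting those pods with `kubectl` (the namespace and pod name are placeholders, and the exact pod labels depend on your setup):

```bash
# Watch pods created for the pipeline run.
kubectl -n <NAMESPACE> get pods -w

# Tail the logs of a specific step pod once you know its name.
kubectl -n <NAMESPACE> logs -f <POD_NAME>
```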
================================================== === File: docs/book/how-to/popular-integrations/aws-guide.md === # Summary: Setting Up an AWS Stack for ZenML Pipelines ## Overview This guide provides a streamlined process to create a minimal production stack on AWS for running ZenML pipelines. It includes steps for setting up IAM roles, service connectors, and stack components like S3, SageMaker, and ECR. ## Prerequisites - Active AWS account with permissions for S3, SageMaker, ECR, and ECS. - ZenML installed. - AWS CLI configured with credentials. ## Steps to Set Up AWS Stack ### 1. Set Up Credentials and Local Environment 1. **Choose AWS Region**: Select the region for deployment (e.g., `us-east-1`). 2. **Create IAM Role**: - Get your AWS account ID: ```shell aws sts get-caller-identity --query Account --output text ``` - Create `assume-role-policy.json`: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam:::root", "Service": "sagemaker.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } ``` - Create the IAM role: ```shell aws iam create-role --role-name zenml-role --assume-role-policy-document file://assume-role-policy.json ``` - Attach necessary policies: ```shell aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess ``` 3. **Install ZenML Integrations**: ```shell zenml integration install aws s3 -y ``` ### 2. Create a Service Connector Register an AWS Service Connector in ZenML: ```shell zenml service-connector register aws_connector \ --type aws \ --auth-method iam-role \ --role_arn= \ --region= \ --aws_access_key_id= \ --aws_secret_access_key= ``` ### 3. Create Stack Components #### Artifact Store (S3) 1. Create an S3 bucket: ```shell aws s3api create-bucket --bucket your-bucket-name ``` 2. Register the S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://your-bucket-name --connector aws_connector ``` #### Orchestrator (SageMaker Pipelines) 1. Create a SageMaker domain (if needed). 2. Register the SageMaker orchestrator: ```shell zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region= --execution_role= ``` #### Container Registry (ECR) 1. Create an ECR repository: ```shell aws ecr create-repository --repository-name zenml --region ``` 2. Register the ECR container registry: ```shell zenml container-registry register ecr-registry --flavor=aws --uri=.dkr.ecr..amazonaws.com --connector aws-connector ``` ### 4. Create the Stack ```shell export STACK_NAME=aws_stack zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set ``` ### 5. Run a Pipeline Define and run a simple ZenML pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from SageMaker!" 
@pipeline def aws_sagemaker_pipeline(): hello_world() if __name__ == "__main__": aws_sagemaker_pipeline() ``` Execute: ```shell python run.py ``` ## Cleanup To avoid charges, delete resources: ```shell aws s3 rm s3://your-bucket-name --recursive aws s3api delete-bucket --bucket your-bucket-name aws sagemaker delete-domain --domain-id aws ecr delete-repository --repository-name zenml --force aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess aws iam delete-role --role-name zenml-role ``` ## Conclusion This guide outlines the steps to set up an AWS stack for ZenML, enabling scalable and efficient machine learning workflows. Key benefits include scalability, reproducibility, collaboration, and flexibility. ## Best Practices - Use IAM roles with the least privilege principle. - Implement consistent tagging for AWS resources. - Monitor costs using AWS Cost Explorer and Budgets. - Consider using Warm Pools for SageMaker to reduce startup times. - Regularly back up critical data and configurations. ================================================== === File: docs/book/how-to/popular-integrations/mlflow.md === # MLflow Experiment Tracker with ZenML ## Overview The ZenML MLflow Experiment Tracker integration allows logging and visualizing pipeline step information using MLflow without additional code. ## Prerequisites - Install ZenML MLflow integration: ```bash zenml integration install mlflow -y ``` - MLflow deployment: local or remote with proxied artifact storage. ## Configuring the Experiment Tracker ### 1. Local Deployment For local artifact storage: ```bash zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` ### 2. Remote Deployment Requires authentication: - Basic authentication (not recommended) - ZenML secrets (recommended) To create ZenML secrets: ```bash zenml secret create mlflow_secret --username= --password= ``` Register the experiment tracker: ```bash zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... ``` ## Using the Experiment Tracker Enable the experiment tracker in a pipeline step: ```python import mlflow @step(experiment_tracker="") def train_step(...): mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) ``` ## Viewing Results Retrieve the MLflow experiment URL for a ZenML run: ```python last_run = client.get_pipeline("").last_run trainer_step = last_run.get_step("") tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value ``` ## Additional Configuration Customize the experiment tracker using `MLFlowExperimentTrackerSettings`: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step( experiment_tracker="", settings={"experiment_tracker": mlflow_settings} ) ``` For more details, refer to the [full MLflow Experiment Tracker documentation](../../component-guide/experiment-trackers/mlflow.md). 
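For completeness, a self-contained version of the "Viewing Results" snippet above; the pipeline and step names are hypothetical placeholders:

```python
from zenml.client import Client

client = Client()
last_run = client.get_pipeline("training_pipeline").last_run  # hypothetical pipeline name
trainer_step = last_run.get_step("train_step")                # hypothetical step name

# URL of the MLflow experiment backing this step's run.
print(trainer_step.run_metadata["experiment_tracker_url"].value)
```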
================================================== === File: docs/book/how-to/popular-integrations/README.md === # ZenML Integrations Guide ZenML integrates with popular tools in the data science and machine learning ecosystem. This guide outlines how to seamlessly connect ZenML with these tools. ## Key Points - ZenML is designed for compatibility with various data science and ML tools. - The integration process is straightforward, enhancing workflow efficiency. For further details on specific integrations, refer to the respective sections in the documentation. ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-python.md === ### ZenML Template Creation and Execution Guide **Feature Note:** This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Creating a Template To create a run template using the ZenML client, ensure you select a pipeline run executed on a remote stack: ```python from zenml.client import Client run = Client().get_pipeline_run() Client().create_run_template(name=, deployment_id=run.deployment_id) ``` Alternatively, create a template directly from your pipeline definition with a remote stack active: ```python from zenml import pipeline @pipeline def my_pipeline(): ... template = my_pipeline.create_run_template(name=) ``` #### Running a Template To run a created template: ```python from zenml.client import Client template = Client().get_run_template() config = template.config_template # [OPTIONAL] Modify the config here Client().trigger_pipeline(template_id=template.id, run_configuration=config) ``` This will execute a new run on the same stack as the original. #### Advanced Usage: Running a Template from Another Pipeline You can trigger a pipeline from within another pipeline using the following structure: ```python import pandas as pd from zenml import pipeline, step from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml.artifacts.utils import load_artifact from zenml.client import Client from zenml.config.pipeline_run_configuration import PipelineRunConfiguration @step def trainer(data_artifact_id: str): df = load_artifact(data_artifact_id) @pipeline def training_pipeline(): trainer() @step def load_data() -> pd.DataFrame: ... @step def trigger_pipeline(df: UnmaterializedArtifact): run_config = PipelineRunConfiguration( steps={"trainer": {"parameters": {"data_artifact_id": df.id}}} ) Client().trigger_pipeline("training_pipeline", run_configuration=run_config) @pipeline def loads_data_and_triggers_training(): df = load_data() trigger_pipeline(df) # Triggers the training pipeline ``` For further details, refer to the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) documentation, as well as information on Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md). ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-cli.md === ### ZenML CLI: Create a Template **Feature**: This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. 
**Command**: Use the ZenML CLI to create a run template with the following syntax: ```bash zenml pipeline create-run-template --name= ``` - ``: Use `run.my_pipeline` if the pipeline is defined in `run.py`. **Requirements**: Ensure you have an active **remote stack** or specify one using the `--stack` option. ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md === ### ZenML Dashboard Template Management **Feature Access**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access. #### Creating a Template 1. Navigate to a pipeline run executed on a remote stack (requires a remote orchestrator, artifact store, and container registry). 2. Click `+ New Template`, provide a name, and click `Create`. #### Running a Template - To run a template: - Click `Run a Pipeline` on the main `Pipelines` page, or - Go to a specific template page and select `Run Template`. You will be directed to the `Run Details` page, where you can: - Upload a `.yaml` configuration file or - Modify configurations using the editor. Upon execution, the template runs on the same stack as the original pipeline. ================================================== === File: docs/book/how-to/trigger-pipelines/README.md === ### Triggering a Pipeline in ZenML In ZenML, a pipeline can be triggered using a simple function decorated with `@pipeline`. Below is a concise example of how to define and run a basic machine learning pipeline: ```python from zenml import step, pipeline @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}.") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) if __name__ == "__main__": simple_ml_pipeline() ``` ### Run Templates **Run Templates** are pre-defined, parameterized configurations for ZenML pipelines, allowing for easy execution from the ZenML dashboard or via the Client/REST API. They serve as customizable blueprints for pipeline runs. **Note:** Run Templates are a feature exclusive to ZenML Pro users. For access, sign up [here](https://cloud.zenml.io). ### Additional Resources - Use templates: [Python SDK](use-templates-python.md) - Use templates: [CLI](use-templates-cli.md) - Use templates: [Dashboard](use-templates-dashboard.md) - Use templates: [REST API](use-templates-rest-api.md) ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md === ### ZenML REST API: Creating and Running a Template **Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Triggering a Pipeline via REST API To trigger a pipeline, you must first create a run template for it. The process involves three API calls: 1. **Get Pipeline ID** - Endpoint: `GET /pipelines?name=` - Response includes ``. 2. **Get Template ID** - Endpoint: `GET /run_templates?pipeline_id=` - Response includes ``. 3. **Run the Pipeline** - Endpoint: `POST /run_templates//runs` - Include `PipelineRunConfiguration` in the request body. #### Example Workflow To re-run a pipeline named `training`, follow these steps: 1. 
**Get Pipeline ID** ```shell curl -X 'GET' \ '/api/v1/pipelines?hydrate=false&name=training' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` Extract `` from the response (e.g., `c953985e-650a-4cbf-a03a-e49463f58473`). 2. **Get Template ID** ```shell curl -X 'GET' \ '/api/v1/run_templates?hydrate=false&pipeline_id=' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` Extract `` from the response (e.g., `b826b714-a9b3-461c-9a6e-1bde3df3241d`). 3. **Trigger the Pipeline** ```shell curl -X 'POST' \ '/api/v1/run_templates//runs' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}} }' ``` A successful response indicates that the pipeline has been re-triggered with the specified configuration. For more details on obtaining a bearer token, refer to the [API reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically). ================================================== === File: docs/book/how-to/infrastructure-deployment/README.md === # Infrastructure and Deployment Summary This section details the infrastructure setup and deployment processes in ZenML. Key components include: 1. **Infrastructure Requirements**: - ZenML supports various cloud providers (AWS, GCP, Azure) and local environments. - Ensure necessary permissions and configurations are in place for resource access. 2. **Deployment Strategies**: - **Local Deployment**: Ideal for development and testing. Use Docker for containerization. - **Cloud Deployment**: Leverage cloud services for scalability. Use managed services like AWS SageMaker or GCP AI Platform. 3. **Configuration**: - Use a configuration file (e.g., `zenml.yaml`) to define pipeline components, data sources, and deployment settings. - Example configuration snippet: ```yaml pipeline: name: my_pipeline components: - name: data_loader type: DataLoader - name: model_trainer type: ModelTrainer ``` 4. **Environment Setup**: - Install ZenML using pip: ```bash pip install zenml ``` - Initialize a ZenML repository: ```bash zenml init ``` 5. **Deployment Commands**: - Deploy pipelines using: ```bash zenml deploy ``` - Monitor deployments through the ZenML dashboard or CLI. This summary encapsulates the essential information for setting up and deploying infrastructure in ZenML, ensuring clarity and conciseness. ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/terraform-stack-management.md === ### Summary: Registering Existing Infrastructure with ZenML - A Guide for Terraform Users #### Overview This guide assists advanced users in integrating ZenML with existing Terraform setups, focusing on managing custom Terraform code. It outlines a two-phase approach: Infrastructure Deployment and ZenML Registration. #### Phase 1: Infrastructure Deployment Existing infrastructure is typically managed through Terraform configurations. 
Example for GCP: ```hcl resource "google_storage_bucket" "ml_artifacts" { name = "company-ml-artifacts" location = "US" } resource "google_artifact_registry_repository" "ml_containers" { repository_id = "ml-containers" format = "DOCKER" } ``` #### Phase 2: ZenML Registration ##### Setup the ZenML Provider Configure the ZenML provider to connect with your ZenML server: ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } } } provider "zenml" { # Load config from environment variables } ``` Generate an API key with: ```bash zenml service-account create <SERVICE_ACCOUNT_NAME> ``` ##### Create Service Connectors Service connectors manage authentication: ```hcl resource "zenml_service_connector" "gcp_connector" { name = "gcp-${var.environment}-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id service_account_json = file("service-account.json") } } ``` ##### Register Stack Components Register various stack components: ```hcl locals { component_configs = { artifact_store = { type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } } container_registry = { type = "container_registry" flavor = "gcp" configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } } orchestrator = { type = "orchestrator" flavor = "vertex" configuration = { project = var.project_id, region = var.region } } } } resource "zenml_stack_component" "components" { for_each = local.component_configs name = "existing-${each.key}" type = each.value.type flavor = each.value.flavor configuration = each.value.configuration connector_id = zenml_service_connector.gcp_connector.id } ``` ##### Assemble the Stack Combine components into a stack: ```hcl resource "zenml_stack" "ml_stack" { name = "${var.environment}-ml-stack" components = { for k, v in zenml_stack_component.components : k => v.id } } ``` #### Practical Walkthrough: Registering Existing GCP Infrastructure **Prerequisites:** - GCS bucket for artifacts - Artifact Registry repository - Service account for ML operations - Vertex AI enabled **Configuration Steps:** 1. **Variables Configuration:** ```hcl variable "zenml_server_url" { type = string } variable "zenml_api_key" { type = string sensitive = true } variable "project_id" { type = string } variable "region" { type = string default = "us-central1" } variable "environment" { type = string } variable "gcp_service_account_key" { type = string sensitive = true } ``` 2. **Main Configuration:** ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } google = { source = "hashicorp/google" } } } provider "zenml" { server_url = var.zenml_server_url api_key = var.zenml_api_key } provider "google" { project = var.project_id region = var.region } resource "google_storage_bucket" "artifacts" { name = "${var.project_id}-zenml-artifacts-${var.environment}" location = var.region } resource "google_artifact_registry_repository" "containers" { location = var.region repository_id = "zenml-containers-${var.environment}" format = "DOCKER" } resource "zenml_service_connector" "gcp" { name = "gcp-${var.environment}" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id region = var.region service_account_json = var.gcp_service_account_key } } # Register components (artifact store, container registry, orchestrator) similarly... ``` 3.
**Outputs Configuration:** ```hcl output "stack_id" { value = zenml_stack.gcp_stack.id } output "stack_name" { value = zenml_stack.gcp_stack.name } ``` 4. **terraform.tfvars Configuration:** ```hcl zenml_server_url = "https://your-zenml-server.com" project_id = "your-gcp-project-id" region = "us-central1" environment = "dev" ``` 5. **Sensitive Variables:** ```bash export TF_VAR_zenml_api_key="your-zenml-api-key" export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json) ``` #### Usage Instructions 1. Initialize Terraform: ```bash terraform init ``` 2. Install ZenML integrations: ```bash zenml integration install gcp ``` 3. Review planned changes: ```bash terraform plan ``` 4. Apply configuration: ```bash terraform apply ``` 5. Set the new stack as active: ```bash zenml stack set $(terraform output -raw stack_name) ``` 6. Verify configuration: ```bash zenml stack describe ``` #### Best Practices - Use appropriate IAM roles and permissions. - Handle sensitive information securely. - Consider Terraform workspaces for environment management. - Regularly back up Terraform state files. - Version control Terraform configurations, excluding sensitive files. For more details, refer to the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest). ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/best-practices.md === # Summary: Best Practices for Using IaC with ZenML ## Overview This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform. It addresses challenges such as supporting multiple teams, maintaining security, and ensuring quick iterations. ## ZenML Approach ZenML utilizes stack components to abstract infrastructure resources, promoting a component-based architecture for reusability and consistency. ### Part 1: Stack Component Architecture - **Problem**: Different teams require varied ML infrastructure configurations. - **Solution**: Create reusable modules that correspond to ZenML stack components. **Base Infrastructure Example**: ```hcl # modules/zenml_stack_base/main.tf terraform { required_providers { zenml = { source = "zenml-io/zenml" } google = { source = "hashicorp/google" } } } resource "random_id" "suffix" { byte_length = 6 } module "base_infrastructure" { source = "./modules/base_infra" environment = var.environment project_id = var.project_id region = var.region resource_prefix = "zenml-${var.environment}-${random_id.suffix.hex}" } resource "zenml_service_connector" "base_connector" { name = "${var.environment}-base-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id region = var.region service_account_json = module.base_infrastructure.service_account_key } } resource "zenml_stack" "base_stack" { name = "${var.environment}-base-stack" components = { artifact_store = zenml_stack_component.artifact_store.id container_registry = zenml_stack_component.container_registry.id orchestrator = zenml_stack_component.orchestrator.id } } ``` Teams can extend this base stack with additional components specific to their needs. ### Part 2: Environment Management and Authentication - **Problem**: Different environments require distinct configurations and authentication methods. - **Solution**: Use environment-specific configurations with adaptable service connectors. 
**Environment-Specific Connector Example**: ```hcl locals { env_config = { dev = { machine_type = "n1-standard-4", gpu_enabled = false, auth_method = "service-account" } prod = { machine_type = "n1-standard-8", gpu_enabled = true, auth_method = "external-account" } } } resource "zenml_service_connector" "env_connector" { name = "${var.environment}-connector" type = "gcp" auth_method = local.env_config[var.environment].auth_method configuration = local.env_config[var.environment].auth_configuration } ``` ### Part 3: Resource Sharing and Isolation - **Problem**: Ensuring data and security isolation for different ML projects. - **Solution**: Implement resource scoping with project isolation. **Project Isolation Example**: ```hcl locals { project_paths = { fraud_detection = "projects/fraud_detection/${var.environment}" recommendation = "projects/recommendation/${var.environment}" } } resource "zenml_stack_component" "project_artifact_stores" { for_each = local.project_paths name = "${each.key}-artifact-store" type = "artifact_store" configuration = { path = "gs://${var.shared_bucket}/${each.value}" } } ``` ### Part 4: Advanced Stack Management Practices 1. **Stack Component Versioning**: ```hcl locals { stack_version = "1.2.0" } resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}" } ``` 2. **Service Connector Management**: ```hcl resource "zenml_service_connector" "env_connector" { name = "${var.environment}-${var.purpose}-connector" auth_method = var.environment == "prod" ? "workload-identity" : "service-account" } ``` 3. **Component Configuration Management**: ```hcl locals { base_configs = { orchestrator = { location = var.region, project = var.project_id } } env_configs = { dev = { orchestrator = { machine_type = "n1-standard-4" } }, prod = { orchestrator = { machine_type = "n1-standard-8" } } } } ``` 4. **Stack Organization and Dependencies**: ```hcl module "ml_stack" { source = "./modules/ml_stack" depends_on = [module.base_infrastructure, module.security] } ``` 5. **State Management**: ```hcl terraform { backend "gcs" { prefix = "terraform/state" } } ``` ## Conclusion Utilizing ZenML and Terraform for ML infrastructure allows for a flexible, maintainable, and secure environment. Following these best practices ensures a clean and scalable infrastructure codebase, promoting effective collaboration among ML teams. ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/README.md === ### Integrate with Infrastructure as Code **Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section outlines how to integrate ZenML with popular IaC tools like [Terraform](https://www.terraform.io/). ![ZenML stack on Terraform Registry](../../../.gitbook/assets/terraform_providers_screenshot.png) Leverage IaC to effectively manage your ZenML stacks and components. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/azure-service-connector.md === ### Azure Service Connector Overview The ZenML Azure Service Connector enables authentication and access to Azure resources, including Blob storage, AKS Kubernetes clusters, and ACR container registries. It supports automatic credential configuration via the Azure CLI and handles specialized authentication for various Azure services. 
#### Key Commands - **List Azure Service Connector Types:** ```shell zenml service-connector list-types --type azure ``` #### Prerequisites - Install the Azure Service Connector: - For standalone: ```shell pip install "zenml[connectors-azure]" ``` - For full integration: ```shell zenml integration install azure ``` - Azure CLI setup is recommended for auto-configuration but not mandatory. #### Resource Types 1. **Generic Azure Resource:** Connects to any Azure service using generic azure-identity credentials. 2. **Azure Blob Storage Container:** Requires specific IAM permissions (e.g., `Storage Blob Data Contributor`). Resource names can be specified as: - URI: `{az|abfs}://{container-name}` - Name: `{container-name}` 3. **AKS Kubernetes Cluster:** Requires permissions to list AKS clusters. Resource names can be specified as: - `[{resource-group}/]{cluster-name}` 4. **ACR Container Registry:** Requires permissions to pull and push images. Resource names can be specified as: - URI: `[https://]{registry-name}.azurecr.io` - Name: `{registry-name}` #### Authentication Methods - **Implicit Authentication:** Uses environment variables or Azure CLI credentials but is disabled by default due to security risks. - **Service Principal:** Requires an Azure service principal with a client ID and secret. ```shell zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> ``` - **Access Token:** Uses temporary tokens but is limited for long-term use and does not support Azure Blob storage. #### Auto-Configuration - Automatically fetches credentials from the Azure CLI but is limited to temporary access tokens and does not support Azure Blob storage. #### Local Client Provisioning - Local Azure CLI, Kubernetes `kubectl`, and Docker CLI can be configured with credentials from the Azure Service Connector. #### Example Workflow 1. **Register a Service Connector:** ```shell zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> ``` 2. **Connect Azure Blob Storage:** ```shell zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore zenml artifact-store connect azure-demo --connector azure-service-principal ``` 3. **Connect AKS Cluster:** ```shell zenml orchestrator register aks-demo-cluster --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads zenml orchestrator connect aks-demo-cluster --connector azure-service-principal ``` 4. **Connect ACR:** ```shell zenml container-registry register acr-demo-registry --flavor azure --uri=demozenmlcontainerregistry.azurecr.io zenml container-registry connect acr-demo-registry --connector azure-service-principal ``` 5. **Run a Pipeline:** ```python from zenml import pipeline, step @step def step_1() -> str: return "world" @step(enable_cache=False) def step_2(input_one: str, input_two: str) -> None: print(f"{input_one} {input_two}") @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) if __name__ == "__main__": my_pipeline() ``` This summary provides a concise overview of the Azure Service Connector's functionality, resource types, authentication methods, and example usage in ZenML.
================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/service-connectors-guide.md === # Service Connectors Guide Summary This documentation provides a comprehensive guide to managing Service Connectors in ZenML, enabling connections to external resources. Key sections include: ## Overview - **Terminology**: Understand essential terms related to Service Connectors, including types, resource types, and resource names. - **Service Connector Types**: Different implementations exist for various cloud providers (AWS, GCP, Azure) and local resources. Each type supports specific authentication methods and resource types. ## Key Commands - **List Service Connector Types**: ```sh zenml service-connector list-types ``` - **Describe a Service Connector Type**: ```sh zenml service-connector describe-type <TYPE> ``` ## Resource Types - Organizes resources into logical classes based on access protocols or vendors, e.g., `kubernetes-cluster` for all Kubernetes clusters. ## Service Connector Registration - **Registering a Service Connector**: ```sh zenml service-connector register <CONNECTOR_NAME> --type <TYPE> --auto-configure ``` - Supports multi-type (multiple resource types) and multi-instance (multiple resources of the same type) configurations. ## Verification - Verify the configuration and credentials of a Service Connector: ```sh zenml service-connector verify <CONNECTOR_NAME> ``` ## Connecting Stack Components - Connect Stack Components to external resources using registered Service Connectors: ```sh zenml artifact-store connect <COMPONENT_NAME> --connector <CONNECTOR_NAME> ``` ## Auto-configuration - Automatically discover and extract credentials from local environments, using existing CLI configurations (e.g., AWS CLI, GCP SDK). ## Resource Discovery - List accessible resources through Service Connectors: ```sh zenml service-connector list-resources ``` ## Local Client Configuration - Configure local CLI tools (like `kubectl` or Docker) with credentials from Service Connectors: ```sh zenml service-connector login <CONNECTOR_NAME> --resource-type <RESOURCE_TYPE> --resource-id <RESOURCE_NAME> ``` ## End-to-End Examples - For practical applications, refer to specific examples for AWS, GCP, and Azure Service Connectors. This guide serves as a foundational resource for effectively utilizing Service Connectors in ZenML, ensuring secure and efficient connections to external resources. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/kubernetes-service-connector.md === # Kubernetes Service Connector Documentation Summary The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to generic clusters via pre-authenticated Kubernetes Python clients and local `kubectl` configuration. ## Prerequisites - Install the connector using: - `pip install "zenml[connectors-kubernetes]"` for prerequisites only. - `zenml integration install kubernetes` for the full integration. - Local `kubectl` configuration is not required for accessing clusters. ## Resource Types - Supports generic Kubernetes clusters identified by the `kubernetes-cluster` resource type. ## Authentication Methods 1. Username and password (not recommended for production). 2. Authentication token (with or without client certificates). For local K3D clusters, an empty token can be used. **Warning**: The Service Connector does not generate short-lived credentials; configured credentials are directly used for authentication. API tokens with client certificates are recommended.
## Auto-configuration Fetch credentials from the local `kubectl` during registration: ```sh zenml service-connector register kube-auto --type kubernetes --auto-configure ``` Example output: ``` Successfully registered service connector `kube-auto` with access to resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠───────────────────────┼────────────────┨ ┃ 🌀 kubernetes-cluster │ 35.185.95.223 ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ ``` ## Describe Service Connector To view details: ```sh zenml service-connector describe kube-auto ``` Example output includes properties like ID, name, auth method, resource types, and configuration details. **Info**: Credentials auto-discovered may have limited lifetime, especially with third-party providers like GCP or AWS. ## Local Client Provisioning Configure the local Kubernetes client using: ```sh zenml service-connector login kube-auto ``` Example output indicates successful configuration of the local client. ## Stack Components Use The Kubernetes Service Connector can be utilized in Orchestrator and Model Deployer stack components, managing Kubernetes workloads without explicit `kubectl` configurations in the environment. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/hyperai-service-connector.md === ### HyperAI Service Connector Overview The ZenML HyperAI Service Connector enables authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to linked Stack Components. #### Command to List Connector Types ```shell $ zenml service-connector list-types --type hyperai ``` #### Connector Details | Name | Type | Resource Types | Auth Methods | Local | Remote | |--------------------------|-----------|---------------------|----------------------|-------|--------| | HyperAI Service Connector | 🤖 hyperai | 🤖 hyperai-instance | rsa-key, dsa-key, ecdsa-key, ed25519-key | ✅ | ✅ | ### Prerequisites To use the HyperAI Service Connector, install the HyperAI integration: ```shell $ zenml integration install hyperai ``` ### Resource Types - Supports HyperAI instances. ### Authentication Methods ZenML establishes an SSH connection to the HyperAI instance, supporting the following authentication methods: 1. RSA key 2. DSA (DSS) key 3. ECDSA key 4. ED25519 key **Warning:** SSH private keys are long-lived credentials and provide unrestricted access to HyperAI instances. They will be distributed to all clients using them. ### Configuration Requirements When configuring the Service Connector, provide: - At least one `hostname` - `username` for login - Optionally, an `ssh_passphrase` ### Usage Options 1. Create a service connector per HyperAI instance with different SSH keys. 2. Use a single SSH key for multiple instances, selecting the instance when creating the HyperAI orchestrator component. ### Auto-configuration This Service Connector does not support auto-discovery of authentication credentials. Feedback on this feature can be provided via [Slack](https://zenml.io/slack) or by creating an issue on [GitHub](https://github.com/zenml-io/zenml/issues). ### Stack Components Usage The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances. 
================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/docker-service-connector.md === ### Docker Service Connector Overview The ZenML Docker Service Connector facilitates authentication with Docker/OCI container registries and manages Docker clients. It provides pre-authenticated `python-docker` clients to linked Stack Components. #### Command to List Connector Types ```shell zenml service-connector list-types --type docker ``` #### Supported Resource Types - **Resource Type**: `docker-registry` - **Registry Formats**: - DockerHub: `docker.io` or `https://index.docker.io/v1/` - OCI registry: `https://host:port/` #### Authentication Methods - **Methods**: Username/password or access token (API tokens recommended). - **Command to Register DockerHub**: ```sh zenml service-connector register dockerhub --type docker -in ``` #### Example Registration Output Prompts for: - Service connector name - Description - Username and password/token - Registry URL (optional) #### Important Notes - **Credential Handling**: Configured credentials are distributed directly to clients; short-lived credentials are not supported. - **Auto-configuration**: No auto-discovery of credentials from local Docker clients. #### Local Client Provisioning To configure the local Docker client: ```sh zenml service-connector login dockerhub ``` **Warning**: Password stored unencrypted in `~/.docker/config.json`. Use a credential helper for security. #### Stack Components Usage The connector allows Container Registry stack components to authenticate with remote registries, enabling image building and publishing without explicit Docker credentials in the environment. #### Future Enhancements - Automatic configuration of Docker credentials in container runtimes (e.g., Kubernetes) is planned for future releases. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/gcp-service-connector.md === ### GCP Service Connector Overview The ZenML GCP Service Connector enables authentication and access to various GCP resources, including GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods: GCP user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication. By default, it issues short-lived OAuth 2.0 tokens for security. #### Key Features: - **Resource Types**: - **Generic GCP Resource**: Connects to any GCP service using OAuth 2.0 tokens. - **GCS Bucket**: Requires specific permissions (e.g., `storage.buckets.list`, `storage.objects.create`). - **GKE Cluster**: Requires permissions like `container.clusters.list`. - **GAR/GCR Registry**: Supports both Google Artifact Registry and legacy GCR. #### Authentication Methods: 1. **Implicit Authentication**: Uses Application Default Credentials (ADC) without explicit configuration. 2. **GCP User Account**: Generates temporary OAuth tokens from user credentials. 3. **GCP Service Account**: Uses a service account key JSON for authentication. 4. **Service Account Impersonation**: Generates temporary credentials by impersonating another service account. 5. **External Account (Workload Identity)**: Authenticates using AWS IAM or Azure AD credentials. 6. **OAuth 2.0 Token**: Requires manual token management. 
### Prerequisites - Install the GCP Service Connector: ```bash pip install "zenml[connectors-gcp]" ``` - Optionally, install the entire GCP integration: ```bash zenml integration install gcp ``` ### Command Examples - **List Service Connector Types**: ```bash zenml service-connector list-types --type gcp ``` - **Register a GCP Service Connector**: ```bash zenml service-connector register gcp-implicit --type gcp --auth-method implicit --auto-configure ``` - **Describe a Service Connector**: ```bash zenml service-connector describe gcp-implicit ``` ### Local Client Configuration The local `gcloud`, `kubectl`, and Docker CLIs can be configured using credentials from the GCP Service Connector. This allows seamless access to GCP resources without manual credential management. ### Stack Components Integration The GCP Service Connector can connect various Stack Components, such as: - **GCS Artifact Store**: Connects to GCS buckets. - **Kubernetes Orchestrator**: Connects to GKE clusters. - **Container Registry**: Connects to GAR or GCR. ### End-to-End Examples 1. **Multi-Type GCP Service Connector**: - Register a connector that accesses GCS, GKE, and GCR. - Connect Stack Components to the registered connector. 2. **Single-Instance GCP Service Connectors**: - Register individual connectors for GCS, GCR, and Vertex AI. - Connect each Stack Component to its respective connector. ### Conclusion The ZenML GCP Service Connector streamlines the process of connecting to GCP resources, enhancing security through short-lived credentials and simplifying local client configurations. It supports a variety of authentication methods and integrates seamlessly with ZenML Stack Components for efficient resource management. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/best-security-practices.md === ### Best Practices for Authentication Methods in Service Connectors Service Connectors provide various authentication methods suitable for cloud platforms. While no unified standard exists, identifiable patterns can guide the selection of appropriate authentication methods. #### Username and Password - **Avoid using primary account passwords** as credentials. Use alternatives like session tokens, API keys, or API tokens whenever possible. - This method is the least secure and should not be shared or used for automated workloads. Cloud platforms typically require exchanging account/password credentials for long-lived credentials. #### Implicit Authentication - Provides immediate access to cloud resources without configuration but may limit portability. - **Security Risk**: Users may gain access to resources configured for the ZenML Server. Implicit authentication is disabled by default and must be explicitly enabled. - It utilizes locally stored credentials, environment variables, or cloud workload credentials (e.g., AWS EC2 instance metadata, GCP service accounts, Azure Managed Identity). **Example Command for GCP Implicit Authentication**: ```sh zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core ``` #### Long-Lived Credentials (API Keys, Account Keys) - Ideal for production use, especially when paired with mechanisms for generating short-lived tokens or impersonating accounts. - Long-lived credentials are exchanged for temporary tokens to minimize exposure. 
**Common Long-Lived Credential Commands**: - AWS: `aws configure` - GCP: `gcloud auth application-default login` - Azure: `az login` #### Generating Temporary and Down-Scoped Credentials - **Temporary Credentials**: Issued from long-lived credentials, providing limited-time access. - **Down-Scoped Credentials**: Restrict permissions to the minimum required for specific resources. **Example of Temporary Credentials**: ```sh zenml service-connector describe eks-zenhacks-cluster --client ``` #### Impersonating Accounts and Assuming Roles - Requires setup of multiple permission-bearing accounts and roles. - Allows for secure access without exposing long-lived credentials. **Example Command for GCP Account Impersonation**: ```sh zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl ``` #### Short-Lived Credentials - Temporary credentials configured in Service Connectors, becoming unusable upon expiration. - Useful for granting temporary access without exposing long-lived credentials. **Example Command for Short-Lived Credentials**: ```sh AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token ``` ### Summary When configuring Service Connectors, prioritize security by avoiding primary account passwords, utilizing long-lived credentials, and implementing mechanisms for temporary and down-scoped credentials. Consider the implications of each authentication method on portability and security, especially in production environments. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/aws-service-connector.md === ### Summary of AWS Service Connector Documentation The **ZenML AWS Service Connector** enables connection to AWS resources like S3, EKS, and ECR, facilitating authentication and access management. It supports various authentication methods, including long-lived AWS secret keys, IAM roles, STS tokens, and implicit authentication. The connector can generate temporary STS tokens with minimal permissions and automatically configure credentials from the AWS CLI. #### Key Features: - **Resource Types**: - **Generic AWS Resource**: Connects to any AWS service using a pre-configured boto3 session. - **S3 Bucket**: Requires specific IAM permissions (e.g., `s3:ListBucket`, `s3:GetObject`). - **EKS Cluster**: Requires permissions like `eks:ListClusters` and must be added to the `aws-auth` ConfigMap for access. - **ECR Registry**: Requires permissions such as `ecr:DescribeRepositories` and `ecr:GetAuthorizationToken`. - **Authentication Methods**: - **Implicit Authentication**: Uses environment variables or local configuration files. Requires explicit enabling due to security risks. - **AWS Secret Key**: Long-lived credentials, suitable for development but not recommended for production. - **STS Token**: Temporary tokens that require regular updates. - **IAM Role**: Generates temporary STS tokens by assuming a role, recommended for security. - **Session Token**: Generates temporary session tokens with longer expiration. - **Federation Token**: Similar to session tokens but for federated users. 
#### Auto-Configuration: The connector can auto-discover credentials from the AWS CLI, allowing seamless integration without manual credential management. #### Local Client Provisioning: The connector can configure local AWS CLI, Kubernetes, and Docker clients with short-lived credentials, enhancing security. #### Stack Component Integration: The AWS Service Connector can link various ZenML Stack Components to AWS resources, allowing for streamlined workflows without manual configuration of AWS or Kubernetes contexts. #### Example Workflow: 1. **Configure AWS CLI** with valid credentials. 2. **Register a multi-type AWS Service Connector** using auto-configuration. 3. **List accessible resources** (S3 buckets, EKS clusters, ECR registries). 4. **Register and connect Stack Components** (S3 Artifact Store, Kubernetes Orchestrator, ECR). 5. **Run a simple pipeline** to validate the setup. #### Example Commands: - List available service connector types: ```shell zenml service-connector list-types --type aws ``` - Register a service connector: ```shell AWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure ``` - Verify access to resources: ```shell zenml service-connector verify aws-demo-multi --resource-type s3-bucket ``` This summary encapsulates the essential technical details and functionalities of the ZenML AWS Service Connector, enabling efficient integration with AWS resources while maintaining security and ease of use. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/README.md === ### ZenML Service Connectors Overview ZenML enables seamless connections between your MLOps platform and various cloud providers and infrastructure services (e.g., AWS, GCP, Azure, Kubernetes). This documentation outlines how to utilize ZenML Service Connectors to manage authentication and authorization, simplifying access to external resources while adhering to security best practices. #### Key Concepts - **Service Connectors**: Abstract the complexity of authentication and authorization, allowing secure access to infrastructure resources without embedding sensitive credentials directly in code. - **Authentication Methods**: Include options like AWS IAM roles, session tokens, and more, enabling flexible and secure connections to services. #### Use Case: Connecting to AWS S3 1. **Listing Service Connector Types**: To see available Service Connector types, use: ```sh zenml service-connector list-types ``` 2. **Registering a Service Connector**: To connect to an AWS S3 bucket using auto-configuration: ```sh zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket ``` 3. **Creating and Connecting an Artifact Store**: Register an S3 Artifact Store and connect it to the AWS Service Connector: ```sh zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-s3 ``` 4. **Example Pipeline**: A simple pipeline can be defined as follows: ```python from zenml import step, pipeline @step def simple_step_one() -> str: return "Hello World!" @step def simple_step_two(msg: str) -> None: print(msg) @pipeline def simple_pipeline() -> None: message = simple_step_one() simple_step_two(msg=message) if __name__ == "__main__": simple_pipeline() ``` 5. 
**Running the Pipeline**: Execute the pipeline with: ```sh python run.py ``` #### Advantages of Using Service Connectors - **Security**: Credentials are managed centrally, reducing the risk of exposure. - **Flexibility**: Multiple Stack Components can utilize the same Service Connector. - **Ease of Use**: Users can connect to resources without needing deep knowledge of authentication mechanisms. #### Additional Resources - [Service Connector Guide](./service-connectors-guide.md) - [Security Best Practices](./best-security-practices.md) - [AWS Service Connector](./aws-service-connector.md) - [GCP Service Connector](./gcp-service-connector.md) - [Azure Service Connector](./azure-service-connector.md) - [Docker Service Connector](./docker-service-connector.md) - [Kubernetes Service Connector](./kubernetes-service-connector.md) This concise overview provides essential information about connecting ZenML to external resources using Service Connectors, emphasizing security and usability. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/reference-secrets-in-stack-configuration.md === ### Summary: Referencing Secrets in Stack Configuration In ZenML, sensitive information like passwords or tokens can be securely referenced in stack components using secret references. Instead of directly specifying sensitive values, use the syntax `{{<SECRET_NAME>.<SECRET_KEY>}}` to reference secrets. #### Example: Registering and Using Secrets 1. **Register a Secret**: ```shell zenml secret create mlflow_secret \ --username=admin \ --password=abc123 ``` 2. **Reference the Secret in a Component**: ```shell zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... ``` #### Secret Validation ZenML validates the existence of secrets and their keys before running a pipeline to prevent runtime failures. The validation behavior can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: - `NONE`: Disables validation. - `SECRET_EXISTS`: Validates only the existence of secrets. - `SECRET_AND_KEY_EXISTS`: (Default) Validates both the existence of secrets and their key-value pairs. #### Fetching Secret Values in Steps When using centralized secrets management, secrets can be accessed within steps via the ZenML `Client` API: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: secret = Client().get_secret("<SECRET_NAME>") authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` ### Additional Resources - For more on managing secrets, refer to the [Interact with secrets](../../../how-to/project-setup-and-management/interact-with-secrets.md) documentation. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-requirements.md === ### Export Stack Requirements To obtain the `pip` requirements for a specific stack, use the following CLI command: ```bash zenml stack export-requirements <STACK_NAME> --output-file stack_requirements.txt pip install -r stack_requirements.txt ``` This command exports the requirements to a file named `stack_requirements.txt`, which can then be used to install the necessary packages.
================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md === # Custom Stack Component Flavor in ZenML ## Overview ZenML allows for the creation of custom stack component flavors to tailor MLOps solutions. This guide outlines the process of developing and using custom flavors, emphasizing modularity and reusability. ## Component Flavors - **Component Type**: A broad category defining stack component functionality (e.g., `artifact_store`). - **Flavor**: Specific implementations of a component type (e.g., `local`, `s3`). ### Core Abstractions 1. **StackComponent**: Defines core functionality. Example: ```python from zenml.stack import StackComponent class BaseArtifactStore(StackComponent): @abstractmethod def open(self, path, mode="r"): pass @abstractmethod def exists(self, path): pass ``` 2. **StackComponentConfig**: Configures stack component instances, separating static (`config`) and dynamic (`settings`) configurations. ```python from zenml.stack import StackComponentConfig class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: ClassVar[Set[str]] ``` 3. **Flavor**: Combines `StackComponent` and `StackComponentConfig`, defining the flavor's name and type. ```python from zenml.enums import StackComponentType from zenml.stack import Flavor class LocalArtifactStoreFlavor(Flavor): @property def name(self) -> str: return "local" @property def type(self) -> StackComponentType: return StackComponentType.ARTIFACT_STORE @property def config_class(self) -> Type[LocalArtifactStoreConfig]: return LocalArtifactStoreConfig @property def implementation_class(self) -> Type[LocalArtifactStore]: return LocalArtifactStore ``` ## Implementing a Custom Flavor ### Configuration Class Define the configuration class with required fields: ```python from zenml.artifact_stores import BaseArtifactStoreConfig from zenml.utils.secret_utils import SecretField class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig): SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"} key: Optional[str] = SecretField(default=None) secret: Optional[str] = SecretField(default=None) # Additional fields... ``` ### Implementation Class Implement the abstract methods: ```python import s3fs from zenml.artifact_stores import BaseArtifactStore class MyS3ArtifactStore(BaseArtifactStore): _filesystem: Optional[s3fs.S3FileSystem] = None @property def filesystem(self) -> s3fs.S3FileSystem: if not self._filesystem: self._filesystem = s3fs.S3FileSystem( key=self.config.key, secret=self.config.secret, # Additional kwargs... 
) return self._filesystem def open(self, path, mode="r"): return self.filesystem.open(path=path, mode=mode) def exists(self, path): return self.filesystem.exists(path=path) ``` ### Flavor Class Combine the configuration and implementation classes: ```python from zenml.artifact_stores import BaseArtifactStoreFlavor class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor): @property def name(self): return 'my_s3_artifact_store' @property def implementation_class(self): return MyS3ArtifactStore @property def config_class(self): return MyS3ArtifactStoreConfig ``` ## Registering the Flavor Use the ZenML CLI to register the flavor: ```shell zenml artifact-store flavor register <path.to.MyS3ArtifactStoreFlavor> ``` Example: ```shell zenml artifact-store flavor register flavors.my_flavor.MyS3ArtifactStoreFlavor ``` ## Usage After registration, use the custom flavor in your stacks: ```shell zenml artifact-store register <ARTIFACT_STORE_NAME> --flavor=my_s3_artifact_store --path='some-path' zenml stack register <STACK_NAME> --artifact-store <ARTIFACT_STORE_NAME> ``` ## Best Practices - Execute `zenml init` consistently at the repository root. - Test flavors thoroughly before production use. - Maintain clean, well-documented code. - Use existing flavors as references for new implementations. ## Additional Resources For specific stack component types, refer to the respective documentation links for further guidance on building custom flavors. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md === ### Summary: Deploying a Cloud Stack with Terraform ZenML provides a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to facilitate the provisioning of cloud resources and their integration with ZenML Stacks, enhancing machine learning infrastructure deployment. #### Prerequisites - A deployed ZenML server instance accessible from your cloud provider (not a local server). - Create a service account and API key for programmatic access to the ZenML server using: ```shell zenml service-account create <SERVICE_ACCOUNT_NAME> ``` - Install Terraform (version 1.9 or higher) on your machine. - Authenticate with your cloud provider using their CLI or SDK. #### Using Terraform Modules 1. Set up ZenML provider with environment variables: ```shell export ZENML_SERVER_URL="https://your-zenml-server.com" export ZENML_API_KEY="<ZENML_API_KEY>" ``` 2. Create a Terraform configuration file (`main.tf`): ```hcl terraform { required_providers { aws = { source = "hashicorp/aws" } zenml = { source = "zenml-io/zenml" } } } provider "zenml" {} module "zenml_stack" { source = "zenml-io/zenml-stack/<aws|gcp|azure>" zenml_stack_name = "<STACK_NAME>" orchestrator = "<ORCHESTRATOR_TYPE>" } output "zenml_stack_id" { value = module.zenml_stack.zenml_stack_id } output "zenml_stack_name" { value = module.zenml_stack.zenml_stack_name } ``` 3. Run Terraform commands: ```shell terraform init terraform apply ``` 4. Confirm changes and provision resources. After completion, you will see the ZenML stack ID and name. 5. To use the stack: ```shell zenml integration install <PROVIDER> zenml stack set <STACK_NAME> ``` #### Cloud Provider Specifics - **AWS**: Requires AWS CLI configured with `aws configure`. The Terraform module creates S3, ECR, and orchestrator resources. - **GCP**: Requires `gcloud` CLI configured with `gcloud init`. The module provisions GCS, Google Artifact Registry, and orchestrator resources. - **Azure**: Requires Azure CLI configured with `az login`. The module sets up Azure Storage, ACR, and orchestrator resources.
#### Cleanup To remove all resources provisioned by Terraform, run: ```shell terraform destroy ``` This command will also delete the registered ZenML stack from your server. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md === ### Summary: Deploy a Cloud Stack with ZenML ZenML provides a feature to **deploy a cloud stack with a single click**, simplifying the process of configuring infrastructure. This is particularly useful in remote settings where manual deployment can be complex and time-consuming. #### Prerequisites - A deployed instance of ZenML (not local). - For setup instructions, refer to the [ZenML deployment guide](../../../getting-started/deploying-zenml/README.md). #### Deployment Methods You can deploy via the **Dashboard** or **CLI**. ##### Dashboard Deployment 1. Navigate to the stacks page and click "+ New Stack". 2. Select "New Infrastructure". 3. Choose your cloud provider (AWS, GCP, Azure) and configure the stack: - **AWS**: Select region and stack name. Deploy via AWS CloudFormation. - **GCP**: Select region and stack name. Deploy using GCP Cloud Shell and Deployment Manager. - **Azure**: Select location and stack name. Use Azure Cloud Shell with Terraform. ##### CLI Deployment Use the command: ```shell zenml stack deploy -p {aws|gcp|azure} ``` - Follow prompts to complete the deployment for the selected provider. #### Infrastructure Overview Each cloud provider will deploy specific resources: - **AWS**: - S3 bucket (Artifact Store) - ECR (Container Registry) - IAM roles and permissions for SageMaker and CloudBuild. - **GCP**: - GCS bucket (Artifact Store) - GCP Artifact Registry (Container Registry) - Service Account with permissions for Vertex AI and Cloud Build. - **Azure**: - Resource Group containing: - Azure Storage Account (Artifact Store) - Azure Container Registry - AzureML Workspace (Orchestrator and Step Operator) - Service Principal with necessary permissions. #### Permissions Each deployment grants specific permissions to ZenML for resource access, ensuring proper functionality and security. With this streamlined process, users can easily deploy a cloud stack and start running pipelines in a remote environment. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/README.md === ### Managing Stacks & Components #### What is a Stack? A **stack** in ZenML represents the configuration of infrastructure and tooling for executing pipelines. It consists of various components, each serving a specific function, such as: - **Container Registry**: For image storage. - **Kubernetes Cluster**: As an orchestrator. - **Artifact Store**: For storing artifacts. - **Experiment Tracker**: Like MLflow for tracking experiments. #### Organizing Execution Environments ZenML allows running pipelines across multiple stacks, facilitating testing in different environments (local, staging, production). This separation helps: - Prevent accidental deployments to production. - Reduce costs by using less powerful resources in staging. - Control access by assigning permissions to specific stacks. #### Managing Credentials Most stack components require credentials for infrastructure interaction. ZenML recommends using **Service Connectors** to manage these credentials securely. Key benefits include: - **Reduced Credential Leakage**: Limiting access minimizes risk. 
- **Instant Revocation**: Quick response to compromised credentials. - **Easier Auditing**: Clear separation of roles simplifies tracking. **Recommended Workflow**: 1. Limit Service Connector creation to a few trusted individuals. 2. Create a connector for development/staging and allow data scientists to use it. 3. Create a separate connector for production to ensure safe resource usage. #### Deploying and Managing Stacks Deploying MLOps stacks can be complex due to: - Specific tool requirements (e.g., Kubeflow needs a Kubernetes cluster). - Difficulty in setting reasonable defaults for infrastructure parameters. - Potential installation issues (e.g., custom service accounts for Vertex AI). - Need for proper permissions between components. - Challenges in resource cleanup post-experimentation. ZenML documentation provides guidance for provisioning, configuring, and extending stacks. Key topics include: - Deploying a cloud stack. - Registering a cloud stack. - Deploying with Terraform. - Exporting and installing stack requirements. - Referencing secrets in stack configurations. - Implementing custom stack components. For further details, refer to the specific documentation links provided. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md === ### ZenML Stack Wizard Overview The **stack** in ZenML represents your infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure components and defining them in ZenML, which can be complex, especially in remote environments. The **stack wizard** simplifies this by allowing users to register a ZenML cloud stack using existing infrastructure. #### Deployment Options - If infrastructure is not deployed, use the **1-click deployment tool** or **Terraform modules** for more control over resource provisioning. ### Using the Stack Wizard The stack wizard is accessible via both the **CLI** and the **dashboard**. #### Dashboard Steps 1. Navigate to the stacks page and click **"+ New Stack"**. 2. Select **"Use existing Cloud"** and choose your cloud provider. 3. Select an authentication method and fill in the required fields. ##### Authentication Methods - **AWS**: Options include AWS Secret Key, STS Token, IAM Role, Session Token, and Federation Token. - **GCP**: Options include User Account, Service Account, External Account, OAuth 2.0 Token, and Service Account Impersonation. - **Azure**: Options include Service Principal and Access Token. After authentication, ZenML will display available resources to create stack components (artifact store, orchestrator, container registry). #### CLI Command To register a remote stack, use: ```shell zenml stack register <STACK_NAME> -p {aws|gcp|azure} ``` You can specify an existing **service connector** with `-sc <SERVICE_CONNECTOR_NAME>`. ### Service Connector Configuration The wizard checks for automatic credential acquisition from the local environment. If found, you can choose to use them or configure manually. If declined, you can select from existing service connectors or create a new one. ### Defining Cloud Components You will define three essential components: - **Artifact Store** - **Orchestrator** - **Container Registry** For each component, you can either reuse existing components or create new ones from available resources. ### Conclusion The stack wizard streamlines the process of registering a cloud stack, enabling users to run pipelines in a remote setting efficiently.
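Once the wizard has registered and activated a stack, it can also be inspected from Python. The following is a minimal sketch, assuming the ZenML `Client` exposes the `active_stack_model` property and returns components grouped by type; adjust the attribute names to the client API of your ZenML version:

```python
from zenml.client import Client

client = Client()

# Inspect the stack that is currently set as active on this client.
stack = client.active_stack_model
print(f"Active stack: {stack.name}")

# Print the components registered on the stack, grouped by component type.
for component_type, components in stack.components.items():
    names = ", ".join(component.name for component in components)
    print(f"  {component_type}: {names}")
```

A check like this is a quick way to confirm that the artifact store, orchestrator, and container registry created by the wizard are the ones your pipelines will actually run against.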
================================================== === File: docs/book/how-to/control-logging/enable-or-disable-logs-storing.md === # ZenML Logging Configuration ZenML captures logs during step execution using a logging handler. Users can utilize the Python logging module or print statements, which ZenML will store in the artifact store. ## Example Code ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") print("World.") ``` Logs can be viewed on the dashboard. Note: If not connected to a cloud artifact store with a service connector, logs will not be displayed. For more details, refer to the [dashboard logs documentation](./view-logs-on-the-dasbhoard.md). ## Disabling Log Storage To disable log storage, you can: 1. Use the `enable_step_logs` parameter in the `@step` or `@pipeline` decorator: ```python from zenml import pipeline, step @step(enable_step_logs=False) def my_step() -> None: ... @pipeline(enable_step_logs=False) def my_pipeline(): ... ``` 2. Set the environmental variable `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true` in the execution environment. This variable takes precedence over the decorator parameters: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() ``` Alternatively, configure the pipeline options: ```python my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This configuration allows users to control log storage effectively within ZenML. ================================================== === File: docs/book/how-to/control-logging/view-logs-on-the-dasbhoard.md === # Viewing Logs on the Dashboard ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will capture and store. ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") # Use the logging module. print("World.") # Use print statements as well. ``` Logs are stored in the artifact store of your stack and can be viewed on the dashboard if the ZenML server has access to the artifact store. This is true in two scenarios: 1. **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. 2. **Deployed ZenML Server**: Logs from runs on a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. Refer to the production guide for configuration details. If configured correctly, logs are displayed on the dashboard. **Note**: To disable log storage for performance or storage reasons, follow the provided instructions. ================================================== === File: docs/book/how-to/control-logging/set-logging-format.md === ### Summary: Setting the Logging Format in ZenML To change the default logging format in ZenML, use the following environment variable: ```bash export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s' ``` The logging format must adhere to the `%`-string formatting style. For available attributes, refer to the [Python logging documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes). **Important Note:** Setting this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. 
To configure logging for remote runs, set the `ZENML_LOGGING_FORMAT` in the pipeline's environment as shown below: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_FORMAT": "%(asctime)s %(message)s"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This setup ensures that the logging format is applied consistently across both local and remote pipeline executions. ================================================== === File: docs/book/how-to/control-logging/disable-rich-traceback.md === ### Disabling Rich Traceback Output in ZenML ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for enhanced traceback output during pipeline debugging. To disable this feature, set the following environment variable: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` This change will only affect local runs. To disable rich tracebacks for remote pipeline runs, set the variable in the pipeline's environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_ENABLE_RICH_TRACEBACK": "false"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` This ensures that plain text tracebacks are displayed for both local and remote runs. ================================================== === File: docs/book/how-to/control-logging/disable-colorful-logging.md === ### Summary: Disabling Colorful Logging in ZenML ZenML enables colorful logging by default for better readability. To disable this feature, set the environment variable: ```bash ZENML_LOGGING_COLORS_DISABLED=true ``` Setting this variable in the client environment (e.g., local machine) will disable colorful logging for remote pipeline runs as well. To disable it locally while keeping it enabled for remote runs, configure the variable in the pipeline environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` This allows for flexible logging configurations based on the execution environment. ================================================== === File: docs/book/how-to/control-logging/set-logging-verbosity.md === ### Setting Logging Verbosity in ZenML ZenML defaults to `INFO` logging verbosity. To change this, set the environment variable: ```bash export ZENML_LOGGING_VERBOSITY=INFO ``` Available levels are `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. Note that changing this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. 
To set logging verbosity for remote runs, configure the environment variable in the pipeline's environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` This ensures the specified logging level is applied during remote pipeline execution. ================================================== === File: docs/book/how-to/control-logging/README.md === # Configuring ZenML's Default Logging Behavior ZenML generates different types of logs across various environments: - **ZenML Server**: Produces server logs similar to any FastAPI server. - **Client or Runner Environment**: Logs generated during pipeline execution, including pre- and post-run steps. - **Execution Environment**: Logs created at the orchestrator level during the execution of each pipeline step, typically using Python's `logging` module. This section outlines how users can manage logging behavior across these environments. ================================================== === File: docs/book/how-to/data-artifact-management/README.md === # Data and Artifact Management in ZenML This section outlines the management of data and artifacts within ZenML, focusing on key functionalities and processes. ## Key Concepts - **Data Management**: Involves handling datasets used in machine learning workflows, including versioning, storage, and retrieval. - **Artifact Management**: Concerns the management of outputs generated during the pipeline execution, such as models, metrics, and visualizations. ## Important Features 1. **Versioning**: Track different versions of datasets and artifacts to ensure reproducibility and traceability. 2. **Storage**: Utilize various backends (e.g., local, cloud) for efficient storage and retrieval of data and artifacts. 3. **Integration**: Seamlessly integrate with existing tools and platforms for enhanced data handling capabilities. ## Code Snippet Example ```python from zenml import pipeline @pipeline def my_pipeline(): data = load_data() processed_data = preprocess(data) model = train_model(processed_data) save_artifact(model) # Execute the pipeline my_pipeline() ``` This example demonstrates a simple pipeline that loads data, processes it, trains a model, and saves the resulting artifact. ## Conclusion Effective data and artifact management in ZenML is crucial for maintaining the integrity and reproducibility of machine learning workflows. Key functionalities include versioning, storage, and integration with other tools. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/disabling-visualizations.md === ### Disabling Visualizations To disable artifact visualization, set `enable_artifact_visualization` to `False` at the pipeline or step level: ```python @step(enable_artifact_visualization=False) def my_step(): ... @pipeline(enable_artifact_visualization=False) def my_pipeline(): ... ``` This configuration prevents visualizations from being generated for the specified step or pipeline. 
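The same toggle can also be applied for a single run instead of in the decorator. This is a minimal sketch that assumes `with_options` accepts the same configuration flags as the `@step`/`@pipeline` decorators, mirroring how other options are overridden elsewhere in this guide:

```python
from zenml import pipeline


@pipeline
def my_pipeline():
    ...


# Disable artifact visualizations for this particular run only.
my_pipeline.with_options(enable_artifact_visualization=False)()
```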
================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/creating-custom-visualizations.md === ### Creating Custom Visualizations in ZenML ZenML allows you to associate custom visualizations with artifacts using supported types: - **HTML**: Embedded HTML visualizations (e.g., data validation reports) - **Image**: Visualizations of image data (e.g., Pillow images) - **CSV**: Tables (e.g., pandas DataFrame output) - **Markdown**: Markdown strings or pages - **JSON**: JSON strings or objects #### Methods to Add Custom Visualizations: 1. **Special Return Types**: Cast existing HTML, Markdown, CSV, or JSON data to specific types in your step. 2. **Custom Materializers**: Define visualization logic for specific data types. 3. **Custom Return Type Class**: Create a class with a corresponding materializer for any custom visualizations. ### Visualization via Special Return Types To visualize existing data, cast it to the following types: - `zenml.types.HTMLString` - `zenml.types.MarkdownString` - `zenml.types.CSVString` - `zenml.types.JSONString` **Example: CSV Visualization** ```python from zenml import step from zenml.types import CSVString @step def my_step() -> CSVString: return CSVString("a,b,c\n1,2,3") ``` **Example: Matplotlib Visualization** ```python import matplotlib.pyplot as plt import base64 import io from zenml.types import HTMLString from zenml import step, pipeline @step def create_matplotlib_visualization() -> HTMLString: fig, ax = plt.subplots() ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) ax.set_title('Sample Plot') buf = io.BytesIO() fig.savefig(buf, format='png', bbox_inches='tight', dpi=300) plt.close(fig) image_base64 = base64.b64encode(buf.getvalue()).decode('utf-8') html = f'<img src="data:image/png;base64,{image_base64}">' return HTMLString(html) @pipeline def visualization_pipeline(): create_matplotlib_visualization() if __name__ == "__main__": visualization_pipeline() ``` ### Visualization via Materializers To visualize all artifacts of a certain type, override the `save_visualizations()` method in a custom materializer. **Example: Matplotlib Figure Visualization** 1. **Custom Class**: ```python from typing import Any from pydantic import BaseModel class MatplotlibVisualization(BaseModel): figure: Any ``` 2. **Materializer**: ```python import os from typing import Dict from zenml.enums import VisualizationType from zenml.io import fileio from zenml.materializers.base_materializer import BaseMaterializer class MatplotlibMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MatplotlibVisualization,) def save_visualizations(self, data: MatplotlibVisualization) -> Dict[str, VisualizationType]: visualization_path = os.path.join(self.uri, "visualization.png") with fileio.open(visualization_path, 'wb') as f: data.figure.savefig(f, format='png', bbox_inches='tight') return {visualization_path: VisualizationType.IMAGE} ``` 3. **Step**: ```python import matplotlib.pyplot as plt from zenml import step @step def create_matplotlib_visualization() -> MatplotlibVisualization: fig, ax = plt.subplots() ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) ax.set_title('Sample Plot') return MatplotlibVisualization(figure=fig) ``` **Process Overview**: - The step creates a `MatplotlibVisualization`. - ZenML calls `MatplotlibMaterializer` to save the figure as a PNG. - The dashboard displays the PNG when viewing the artifact. For further examples, refer to the Hugging Face datasets materializer.
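The remaining special return types work the same way as the CSV and HTML examples above; here is a minimal sketch for a Markdown visualization (the step name and report content are illustrative):

```python
from zenml import step
from zenml.types import MarkdownString


@step
def render_report() -> MarkdownString:
    # The returned Markdown is stored as an artifact and rendered in the dashboard.
    return MarkdownString("# Iris Report\n\n* rows: 150\n* columns: 4")
```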
#### Configuring Artifact Stores If visualizations from a pipeline run are missing, ensure the ZenML server has the necessary dependencies and permissions for the artifact store. For further details, consult the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores). ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/README.md === ### ZenML Data Visualization Configuration **Overview**: This documentation outlines how to configure ZenML for displaying data visualizations in the dashboard. **Visualizing Artifacts**: ZenML allows easy association of visualizations with data artifacts. **Example Visualization**: ![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) This setup enhances the ability to monitor and analyze data within the ZenML framework effectively. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md === ### Summary: Registering External Data as ZenML Artifacts This documentation outlines how to register external data (folders or files) as ZenML artifacts for future use, allowing seamless integration into machine learning pipelines. #### Registering an Existing Folder as a ZenML Artifact You can register a folder containing data as a ZenML artifact. The following code snippet demonstrates this process: ```python import os from uuid import uuid4 from pathlib import Path from zenml.client import Client from zenml import register_artifact prefix = Client().active_stack.artifact_store.path folder_path = os.path.join(prefix, f"my_test_folder_{uuid4()}") file_path = os.path.join(folder_path, "test_file.txt") os.mkdir(folder_path) with open(file_path, "w") as f: f.write("test") register_artifact(folder_path, name="my_folder_artifact") temp_folder_path = Client().get_artifact_version("my_folder_artifact").load() assert isinstance(temp_folder_path, Path) assert os.path.isdir(temp_folder_path) with open(os.path.join(temp_folder_path, "test_file.txt"), "r") as f: assert f.read() == "test" ``` #### Registering an Existing File as a ZenML Artifact Similarly, you can register a single file as a ZenML artifact: ```python import os from uuid import uuid4 from pathlib import Path from zenml.client import Client from zenml import register_artifact prefix = Client().active_stack.artifact_store.path file_path = os.path.join(prefix, f"my_test_file_{uuid4()}.txt") with open(file_path, "w") as f: f.write("test") register_artifact(file_path, name="my_file_artifact") temp_file_path = Client().get_artifact_version("my_file_artifact").load() assert isinstance(temp_file_path, Path) assert not os.path.isdir(temp_file_path) with open(temp_file_path, "r") as f: assert f.read() == "test" ``` #### Registering Checkpoints from a PyTorch Lightning Training Run You can register all checkpoints from a PyTorch Lightning training run as a ZenML artifact: ```python import os from zenml.client import Client from zenml import register_artifact from pytorch_lightning import Trainer from pytorch_lightning.callbacks import ModelCheckpoint from uuid import uuid4 prefix = Client().active_stack.artifact_store.path root_dir = os.path.join(prefix, uuid4().hex) trainer = Trainer( default_root_dir=root_dir, callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1)] ) trainer.fit(model) register_artifact(root_dir, 
name="all_my_model_checkpoints") ``` #### Custom Checkpoint Registration To register checkpoints as separate artifact versions, extend the `ModelCheckpoint` class: ```python from zenml.client import Client from zenml import register_artifact from zenml import get_step_context from pytorch_lightning.callbacks import ModelCheckpoint class ZenMLModelCheckpoint(ModelCheckpoint): def __init__(self, artifact_name: str, *args, **kwargs): zenml_model = get_step_context().model self.artifact_name = artifact_name self.default_root_dir = os.path.join(Client().active_stack.artifact_store.path, str(zenml_model.version)) super().__init__(*args, **kwargs) def on_train_epoch_end(self, trainer, pl_module): super().on_train_epoch_end(trainer, pl_module) register_artifact(os.path.join(self.dirpath, self.filename_format.format(epoch=trainer.current_epoch)), self.artifact_name) ``` #### Complete Example: PyTorch Lightning Training with Checkpoint Registration The following code provides a complete pipeline example that includes data loading, model training, and prediction using registered checkpoints: ```python from zenml import step, pipeline from pytorch_lightning import Trainer, LightningModule @step def get_data(): # Load MNIST data ... @step def get_model(): # Define and return the model ... @step def train_model(model, train_loader): # Train the model and register checkpoints ... @step def predict(checkpoint_file): # Load the model from the checkpoint and make predictions ... @pipeline def train_pipeline(): train_loader = get_data() model = get_model() train_model(model, train_loader) predict(get_pipeline_context().model.get_artifact("my_model_ckpts")) if __name__ == "__main__": train_pipeline() ``` This documentation provides a comprehensive guide to registering external data and managing artifacts in ZenML, ensuring that all critical functionalities are preserved for effective use in machine learning workflows. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md === ### Summary: Structuring an MLOps Project An MLOps project typically consists of multiple pipelines, including: - **Feature Engineering Pipeline**: Prepares raw data. - **Training Pipeline**: Trains models using data from the feature engineering pipeline. - **Inference Pipeline**: Runs predictions on trained models, often using pre-processed data from the training pipeline. - **Deployment Pipeline**: Deploys the trained model to production. The structure of these pipelines can vary based on project requirements. A common need is to share artifacts (models, metadata) between pipelines. Two patterns for artifact exchange are described below: #### Pattern 1: Artifact Exchange through `Client` In this pattern, the ZenML Client facilitates the exchange of artifacts between pipelines. 
For example, a feature engineering pipeline produces datasets that are fetched by a training pipeline: ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): dataset = load_data() train_data, test_data = prepare_data(dataset) @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(sklearn_classifier, test_data) ``` **Note**: Artifacts are references in the artifact store, not materialized in memory during the pipeline function. #### Pattern 2: Artifact Exchange through a `Model` This approach uses ZenML Models as references instead of artifact IDs/names. For instance, a training pipeline (`train_and_promote`) creates models, promoting them based on accuracy. The inference pipeline (`do_predictions`) retrieves the latest promoted model without needing artifact IDs: ```python from typing import Annotated import pandas as pd from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` Alternatively, resolve the artifact at the pipeline level: ```python from typing import Annotated from zenml import get_pipeline_context, pipeline, step, Model from zenml.enums import ModelStages import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model.get_model_artifact("trained_model") inference_data = load_data() predict(model=model, data=inference_data) if __name__ == "__main__": do_predictions() ``` Both approaches are valid; the choice depends on user preference. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/datasets.md === # Custom Dataset Classes and Complex Data Flows in ZenML ## Overview ZenML provides custom Dataset classes to manage complex data flows in machine learning projects, allowing for efficient handling of various data sources and structures. ## Custom Dataset Classes Custom Dataset classes encapsulate data loading, processing, and saving logic. They are beneficial when: 1. Working with multiple data sources (e.g., CSV, databases). 2. Handling complex data structures. 3. Implementing custom processing logic.
### Example Implementation A base `Dataset` class can be extended for different data sources like CSV and BigQuery: ```python from abc import ABC, abstractmethod import pandas as pd from google.cloud import bigquery from typing import Optional class Dataset(ABC): @abstractmethod def read_data(self) -> pd.DataFrame: pass class CSVDataset(Dataset): def __init__(self, data_path: str, df: Optional[pd.DataFrame] = None): self.data_path = data_path self.df = df def read_data(self) -> pd.DataFrame: if self.df is None: self.df = pd.read_csv(self.data_path) return self.df class BigQueryDataset(Dataset): def __init__(self, table_id: str, project: Optional[str] = None): self.table_id = table_id self.project = project self.client = bigquery.Client(project=self.project) def read_data(self) -> pd.DataFrame: return self.client.query(f"SELECT * FROM `{self.table_id}`").to_dataframe() def write_data(self) -> None: job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE") self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config).result() ``` ## Custom Materializers Materializers in ZenML serialize and deserialize artifacts. Custom Materializers are necessary for custom Dataset classes: ### CSVDatasetMaterializer Example ```python from zenml.materializers import BaseMaterializer from zenml.io import fileio import json import tempfile import pandas as pd class CSVDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (CSVDataset,) def load(self, data_type: Type[CSVDataset]) -> CSVDataset: with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: with fileio.open(os.path.join(self.uri, "data.csv"), "rb") as source_file: temp_file.write(source_file.read()) return CSVDataset(temp_file.name).read_data() def save(self, dataset: CSVDataset) -> None: df = dataset.read_data() with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: df.to_csv(temp_file.name, index=False) with open(temp_file.name, "rb") as source_file: with fileio.open(os.path.join(self.uri, "data.csv"), "wb") as target_file: target_file.write(source_file.read()) ``` ### BigQueryDatasetMaterializer Example ```python class BigQueryDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (BigQueryDataset,) def load(self, data_type: Type[BigQueryDataset]) -> BigQueryDataset: with fileio.open(os.path.join(self.uri, "metadata.json"), "r") as f: metadata = json.load(f) return BigQueryDataset(metadata["table_id"], metadata["project"]).read_data() def save(self, bq_dataset: BigQueryDataset) -> None: metadata = {"table_id": bq_dataset.table_id, "project": bq_dataset.project} with fileio.open(os.path.join(self.uri, "metadata.json"), "w") as f: json.dump(metadata, f) bq_dataset.write_data() if bq_dataset.df is not None else None ``` ## Pipeline Management Design flexible pipelines to handle multiple data sources: ### Example Pipeline ```python from zenml import step, pipeline @step(output_materializer=CSVDatasetMaterializer) def extract_data_local(data_path: str = "data/raw_data.csv") -> CSVDataset: return CSVDataset(data_path) @step(output_materializer=BigQueryDatasetMaterializer) def extract_data_remote(table_id: str) -> BigQueryDataset: return BigQueryDataset(table_id) @step def transform(dataset: Dataset) -> pd.DataFrame: df = dataset.read_data() return df.copy() # Apply transformations here @pipeline def etl_pipeline(mode: str = "develop"): raw_data = extract_data_local() if mode == "develop" else extract_data_remote(table_id="project.dataset.raw_table") return 
transform(raw_data) ``` ## Best Practices 1. **Use a Common Base Class**: The `Dataset` base class standardizes handling different data sources. 2. **Specialized Steps**: Implement separate steps for loading various datasets while keeping processing steps consistent. 3. **Flexible Pipelines**: Use configuration parameters to adapt to different data sources. 4. **Modular Step Design**: Create steps for specific tasks to promote code reuse and maintainability. By following these practices, ZenML pipelines can efficiently manage complex data flows and adapt to evolving project requirements. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md === # Scaling Strategies for Big Data in ZenML This documentation outlines strategies for managing large datasets in ZenML, focusing on scaling pipelines as data size increases. ## Dataset Size Thresholds 1. **Small datasets (up to a few GB)**: Handled in-memory with pandas. 2. **Medium datasets (up to tens of GB)**: Require chunking or out-of-core processing. 3. **Large datasets (hundreds of GB or more)**: Necessitate distributed processing frameworks. ## Strategies for Small Datasets - **Efficient Data Formats**: Use formats like Parquet instead of CSV. ```python import pyarrow.parquet as pq class ParquetDataset(Dataset): def read_data(self) -> pd.DataFrame: return pq.read_table(self.data_path).to_pandas() ``` - **Data Sampling**: Implement sampling methods. ```python class SampleableDataset(Dataset): def sample_data(self, fraction: float = 0.1) -> pd.DataFrame: return self.read_data().sample(frac=fraction) ``` - **Optimize Pandas Operations**: Use efficient operations to minimize memory usage. ```python @step def optimize_processing(df: pd.DataFrame) -> pd.DataFrame: df['new_column'] = df['column1'] + df['column2'] return df ``` ## Handling Medium Datasets ### Chunking for CSV Datasets Implement chunking to process large files. ```python class ChunkedCSVDataset(Dataset): def read_data(self): for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size): yield chunk ``` ### Leveraging Data Warehouses Utilize data warehouses like Google BigQuery for distributed processing. ```python @step def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset: client = bigquery.Client() query = "SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1" query_job = client.query(query) query_job.result() # Wait for completion ``` ## Approaches for Very Large Datasets ### Using Distributed Computing Frameworks #### Apache Spark Initialize Spark within a ZenML step. ```python from pyspark.sql import SparkSession @step def process_with_spark(input_data: str) -> None: spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate() df = spark.read.csv(input_data, header=True) result = df.groupBy("column1").agg({"column2": "mean"}) result.write.csv("output_path", header=True) spark.stop() ``` #### Ray Use Ray for distributed processing. ```python import ray @step def process_with_ray(input_data: str) -> None: ray.init() @ray.remote def process_partition(partition): return processed_partition results = ray.get([process_partition.remote(part) for part in partitions]) ray.shutdown() ``` #### Dask Integrate Dask for parallel computing. 
```python import dask.dataframe as dd @step def create_dask_dataframe(): return dd.from_pandas(pd.DataFrame({'A': range(1000)}), npartitions=4) ``` #### Numba Utilize Numba for JIT compilation to speed up computations. ```python from numba import jit @jit(nopython=True) def numba_function(x): return x * x + 2 * x - 1 ``` ## Important Considerations 1. **Environment Setup**: Ensure necessary frameworks are installed. 2. **Resource Management**: Coordinate resource allocation with ZenML orchestration. 3. **Error Handling**: Implement proper error handling and cleanup. 4. **Data I/O**: Use intermediate storage for large datasets. 5. **Scaling**: Ensure infrastructure supports the scale of computation. ## Choosing the Right Scaling Strategy Consider dataset size, processing complexity, infrastructure, update frequency, and team expertise when selecting a scaling strategy. Start simple and scale as needed, leveraging ZenML's flexible architecture to manage data processing efficiently. For more details, refer to the section on [custom dataset classes](datasets.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md === ### Summary of Unmaterialized Artifacts in ZenML **Overview**: ZenML pipelines are structured around data-centric principles, where each step reads and writes artifacts to an artifact store. Materializers manage the serialization and deserialization of these artifacts. However, there are scenarios where you may want to skip materialization and use a reference to the artifact instead. **Warning**: Skipping materialization can have unintended consequences for downstream tasks. Use this feature only when necessary. ### Skipping Materialization To skip materialization, you can use `UnmaterializedArtifact`, which provides a `uri` property pointing to the artifact's path in the store. This is useful when you need to access the exact storage location of an artifact. **Example of Using Unmaterialized Artifact**: ```python from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import step @step def my_step(my_artifact: UnmaterializedArtifact): pass ``` ### Code Example The following pipeline demonstrates the use of unmaterialized artifacts. Steps `s1` and `s2` produce identical artifacts, but `s3` consumes materialized artifacts while `s4` consumes unmaterialized artifacts, allowing direct access to their URIs. ```python from typing_extensions import Annotated from typing import Dict, List, Tuple from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import pipeline, step @step def step_1() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_2() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_3(dict_: Dict, list_: List) -> None: assert isinstance(dict_, dict) assert isinstance(list_, list) @step def step_4(dict_: UnmaterializedArtifact, list_: UnmaterializedArtifact) -> None: print(dict_.uri) print(list_.uri) @pipeline def example_pipeline(): step_3(*step_1()) step_4(*step_2()) example_pipeline() ``` This example illustrates how `UnmaterializedArtifact` can be utilized in pipeline steps, providing direct access to artifact URIs. For further details on advanced usage, refer to the documentation on triggering pipelines from another pipeline. 
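Since an `UnmaterializedArtifact` only exposes a URI, a consuming step typically pairs it with ZenML's `fileio` utilities to inspect or copy the underlying files. The sketch below makes no assumption about the file layout, which depends on the materializer that originally wrote the artifact:

```python
import os

from zenml import step
from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
from zenml.io import fileio


@step
def inspect_artifact(artifact: UnmaterializedArtifact) -> None:
    # List whatever files the producing materializer stored for this artifact.
    for file_name in fileio.listdir(artifact.uri):
        print(os.path.join(artifact.uri, file_name))
```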
================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/README.md === This section groups advanced data and artifact management use cases: registering existing external data as artifacts, passing artifacts between pipelines, custom dataset classes, scaling strategies for big data, and unmaterialized artifacts. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md === # Summary of ZenML Artifact Loading Documentation ZenML pipelines typically consume artifacts produced by one another, but external data may also be needed. For external artifacts, use `ExternalArtifact`. For data exchange between ZenML pipelines, late materialization is essential, allowing the passing of artifacts that do not yet exist during pipeline compilation. ## Key Use Cases for Artifact Exchange: 1. **Semantic Grouping**: Use ZenML Models to group data products. 2. **Client Methods**: Utilize the ZenML Client for artifact management. **Recommendation**: Use models for artifact access across pipelines. Refer to the documentation on loading artifacts from ZenML Models for guidance. ## Exchanging Artifacts with Client Methods If not using the Model Control Plane, late materialization can still facilitate data exchange. Below is a streamlined version of the `do_predictions` pipeline code: ```python from typing import Annotated from zenml import step, pipeline from zenml.client import Client import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model1: ClassifierMixin, model2: ClassifierMixin, model1_metric: float, model2_metric: float, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: predictions = pd.Series(model1.predict(data)) if model1_metric < model2_metric else pd.Series(model2.predict(data)) return predictions @step def load_data() -> pd.DataFrame: # load inference data ... @pipeline def do_predictions(): model_42 = Client().get_artifact_version("trained_model", version="42") metric_42 = model_42.run_metadata["MSE"].value model_latest = Client().get_artifact_version("trained_model") metric_latest = model_latest.run_metadata["MSE"].value inference_data = load_data() predict(model1=model_42, model2=model_latest, model1_metric=metric_42, model2_metric=metric_latest, data=inference_data) if __name__ == "__main__": do_predictions() ``` ### Explanation: - The `predict` step compares models based on their MSE metrics to determine which model to use for predictions. - The `load_data` step is responsible for loading inference data. - Calls to `Client().get_artifact_version()` and accessing `run_metadata` are evaluated at execution time, ensuring the latest artifact versions are used. This approach ensures that the most current model and metrics are utilized during the execution of the pipeline. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md === ### Summary of Documentation Artifacts in ZenML can be accessed not only from direct upstream steps but also from other pipelines. This is facilitated by the ZenML client, which allows fetching of metadata and artifacts. #### Key Points: - Artifacts can be retrieved using the ZenML client within a step. - This capability enables the use of artifacts from different pipelines or steps that are not directly upstream.
#### Example Code: ```python from zenml.client import Client from zenml import step @step def my_step(): client = Client() output = client.get_artifact_version("my_dataset", "my_version") return output.run_metadata["accuracy"].value ``` #### Additional Resources: - Refer to the [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md) guide for information on the `ExternalArtifact` type and artifact transfer between steps. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md === ### Summary of ZenML Materializers Documentation **Overview**: ZenML pipelines are data-centric, where steps are connected through inputs and outputs. **Materializers** manage how artifacts are serialized and deserialized when passing between steps, ensuring proper storage and retrieval from the artifact store. #### Built-In Materializers ZenML provides several built-in materializers for common data types, which are used automatically without user intervention: | Materializer | Handled Data Types | Storage Format | |--------------|---------------------|----------------| | BuiltInMaterializer | `bool`, `float`, `int`, `str`, `None` | `.json` | | BytesMaterializer | `bytes` | `.txt` | | BuiltInContainerMaterializer | `dict`, `list`, `set`, `tuple` | Directory | | NumpyMaterializer | `np.ndarray` | `.npy` | | PandasMaterializer | `pd.DataFrame`, `pd.Series` | `.csv` or `.gzip` | | PydanticMaterializer | `pydantic.BaseModel` | `.json` | | ServiceMaterializer | `zenml.services.service.BaseService` | `.json` | | StructuredStringMaterializer | `zenml.types.CSVString`, `zenml.types.HTMLString`, `zenml.types.MarkdownString` | `.csv`, `.html`, `.md` | **Warning**: The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across different Python versions. #### Integration Materializers ZenML also supports integration-specific materializers that can be activated by installing the respective integration. Examples include: | Integration | Materializer | Handled Data Types | Storage Format | |-------------|--------------|---------------------|----------------| | bentoml | BentoMaterializer | `bentoml.Bento` | `.bento` | | deepchecks | DeepchecksResultMaterializer | `deepchecks.CheckResult`, `deepchecks.SuiteResult` | `.json` | | huggingface | HFDatasetMaterializer | `datasets.Dataset`, `datasets.DatasetDict` | Directory | | tensorflow | KerasMaterializer | `tf.keras.Model` | Directory | **Note**: For Docker-based orchestrators, specify required integrations in `DockerSettings`. #### Custom Materializers To create a custom materializer: 1. **Define the Materializer**: Subclass `BaseMaterializer`, specify `ASSOCIATED_TYPES`, and implement `load()` and `save()` methods. 2. **Configure Steps**: Use the `output_materializers` parameter in the step decorator or the `configure()` method. 
**Example**: ```python class MyMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MyObj,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA def load(self, data_type: Type[MyObj]) -> MyObj: with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f: name = f.read() return MyObj(name=name) def save(self, my_obj: MyObj) -> None: with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f: f.write(my_obj.name) @step def my_first_step() -> MyObj: return MyObj("my_object") my_first_step.configure(output_materializers=MyMaterializer) ``` #### Global Materializer Configuration To set a custom materializer globally, register it in the materializer registry: ```python materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer) ``` #### Developing a Custom Materializer The `BaseMaterializer` class defines the interface for all materializers, including methods for loading, saving, visualizing, and extracting metadata from artifacts. Override these methods to customize behavior. **Example of `load()` and `save()`**: ```python def load(self, data_type: Type[Any]) -> Any: # Implement loading logic def save(self, data: Any) -> None: # Implement saving logic ``` #### Skipping Materialization Refer to the documentation for details on how to skip materialization. #### Interaction with Custom Artifact Stores If default materializers are incompatible with a custom artifact store, modify the materializer to handle the artifact appropriately. ### Code Example A complete example demonstrates the use of a custom object and materializer in a pipeline: ```python @step def my_first_step() -> MyObj: return MyObj("my_object") @step def my_second_step(my_obj: MyObj) -> None: logging.info(f"The following object was passed to this step: `{my_obj.name}`") @pipeline def first_pipeline(): output_1 = my_first_step() my_second_step(output_1) first_pipeline() ``` This example illustrates how to implement a custom materializer and integrate it into a ZenML pipeline effectively. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/delete-an-artifact.md === ### Deleting Artifacts in ZenML Artifacts cannot be deleted directly to avoid breaking the ZenML database due to dangling references. However, you can delete artifacts that are no longer referenced by any pipeline runs using the following command: ```shell zenml artifact prune ``` By default, this command removes artifacts from the underlying artifact store and the database. You can modify this behavior with the following flags: - `--only-artifact`: Deletes only the artifact. - `--only-metadata`: Deletes only the metadata. If you encounter errors while pruning artifacts (often due to local storage issues), you can use the `--ignore-errors` flag to proceed with pruning, though warning messages will still be displayed. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifact-versioning.md === ### ZenML Data Storage Overview ZenML integrates data versioning and lineage tracking into its core functionality, automatically managing artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them via the dashboard, enhancing insights and reproducibility in machine learning workflows. #### Artifact Creation and Caching - Each pipeline run generates a new directory in the artifact store for each step. 
- If a step is new or modified, ZenML creates a unique directory with a new ID and stores data using appropriate materializers. - For unchanged steps, ZenML may cache the results, saving time and computational resources. - This caching allows users to focus on experimenting without rerunning unchanged pipeline parts. - ZenML provides traceability of artifacts back to their origins, ensuring reproducibility and identifying potential issues in pipelines. For artifact versioning and configuration details, refer to the [artifact management documentation](../../../user-guide/starter-guide/manage-artifacts.md). #### Saving and Loading Artifacts with Materializers Materializers are essential for artifact management, handling serialization and deserialization of artifacts stored in the artifact store. Each materializer saves data in unique directories. - ZenML includes built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. - Custom materializers can be created by extending the `BaseMaterializer` class for specific data types or storage systems. - The built-in `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions and potential security vulnerabilities. When pipelines run, ZenML utilizes materializers to save and load artifacts through its `fileio` system, facilitating artifact caching and lineage tracking. For an example of a default materializer, see the [numpy materializer](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md === ### Summary of Documentation on Using `Annotated` for Multiple Outputs The `Annotated` type allows for returning multiple named outputs from a step in a pipeline, enhancing artifact retrieval and dashboard readability. #### Key Points: - **Purpose**: Use `Annotated` to return multiple outputs from a step with specific names. - **Benefits**: Improves artifact identification and enhances the clarity of the pipeline dashboard. #### Code Example: ```python from typing import Annotated, Tuple import pandas as pd from zenml import step from sklearn.model_selection import train_test_split @step def clean_data(data: pd.DataFrame) -> Tuple[ Annotated[pd.DataFrame, "x_train"], Annotated[pd.DataFrame, "x_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: x = data.drop("target", axis=1) y = data["target"] return train_test_split(x, y, test_size=0.2, random_state=42) ``` #### Explanation: - The `clean_data` function takes a `pandas` DataFrame and returns a tuple containing training and testing sets for features (`x_train`, `x_test`) and target labels (`y_train`, `y_test`). - Each output is annotated for easy identification in the pipeline and dashboard. This concise approach allows for efficient data handling and clear visualization of pipeline steps. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/tagging.md === ### ZenML Tagging Documentation Summary **Overview**: ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow efficiency and asset discoverability. #### Assigning Tags to Artifacts To tag artifacts in ZenML, utilize the `tags` property of `ArtifactConfig` in the Python SDK or the ZenML CLI. 
**Python SDK Example**: ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> ( Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])] ): ... ``` **CLI Commands**: ```shell # Tag an artifact zenml artifacts update iris_dataset -t sklearn # Tag a specific artifact version zenml artifacts versions update iris_dataset raw_2023 -t sklearn ``` Tags like `sklearn` and `pre-training` will be assigned to all artifacts created by this step. ZenML Pro users can also tag artifacts directly in the cloud dashboard. #### Assigning Tags to Models Models can also be tagged for better organization. Tags can be specified as key-value pairs when creating a model version. **Python SDK Example**: ```python from zenml.models import Model tags = ["experiment", "v1", "classification-task"] model = Model( name="iris_classifier", version="1.0.0", tags=tags, ) @pipeline(model=model) def my_pipeline(...): ... ``` **Creating/Updating Models with Tags**: ```python from zenml.client import Client # Register a new model with tags Client().create_model( name="iris_logistic_regression", tags=["classification", "iris-dataset"], ) # Register a new model version with tags Client().create_model_version( model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"], ) ``` **CLI Commands for Existing Models**: ```shell # Tag an existing model zenml model update iris_logistic_regression --tag "classification" # Tag a specific model version zenml model version update iris_logistic_regression 2 --tag "experiment3" ``` **Note**: If a model is implicitly created during a pipeline run, it will not inherit tags from the `Model` class. Tags can be managed via the SDK or ZenML Pro UI. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifacts-naming.md === ### ZenML Artifact Naming Overview In ZenML, managing artifact names is crucial for tracking outputs from pipeline steps, especially when reusing steps with different inputs. ZenML supports both static and dynamic naming strategies for output artifacts, which are determined by type annotations in function definitions. Artifacts with the same name are saved with incremented version numbers. #### Naming Strategies 1. **Static Naming**: Defined directly as string literals. ```python @step def static_single() -> Annotated[str, "static_output_name"]: return "null" ``` 2. **Dynamic Naming**: - **Using Standard Placeholders**: - `{date}`: Current date (e.g., `2024_11_18`) - `{time}`: Current time (e.g., `11_07_09_326492`) ```python @step def dynamic_single_string() -> Annotated[str, "name_{date}_{time}"]: return "null" ``` - **Using Custom Placeholders**: Custom placeholders can be defined via the `substitutions` parameter. ```python @step(substitutions={"custom_placeholder": "some_substitute"}) def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]: return "null" ``` - **Using `with_options`**: ```python @step def extract_data(source: str) -> Annotated[str, "{stage}_dataset"]: return "my data" @pipeline def extraction_pipeline(): extract_data.with_options(substitutions={"stage": "train"})(source="s3://train") extract_data.with_options(substitutions={"stage": "test"})(source="s3://test") ``` **Substitution Scope**: - Can be set in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`. 3. **Multiple Output Handling**: Combine naming strategies for multiple artifacts. 
```python @step def mixed_tuple() -> Tuple[ Annotated[str, "static_output_name"], Annotated[str, "name_{date}_{time}"], ]: return "static_namer", "str_namer" ``` #### Caching Behavior When caching is enabled, the names of output artifacts remain consistent across runs, even if the outputs are generated dynamically. ```python @step(substitutions={"custom_placeholder": "resolution"}) def demo() -> Tuple[ Annotated[int, "name_{date}_{time}"], Annotated[int, "name_{custom_placeholder}"], ]: return 42, 43 @pipeline def my_pipeline(): demo() if __name__ == "__main__": run_without_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=False)() run_with_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=True)() ``` **Output Example**: ``` ['name_2024_11_21_14_27_33_750134', 'name_resolution'] ``` This summary provides a concise overview of artifact naming in ZenML, covering static and dynamic naming strategies, multiple output handling, and caching behavior. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/README.md === ### Summary of ZenML Step Outputs and Data Handling In ZenML, step outputs are stored in an artifact store, facilitating caching, lineage, and auditability. Using type annotations for outputs enhances transparency, aids in data passing between steps, and allows for serialization/deserialization (termed 'materialization' in ZenML). #### Key Code Components: 1. **Load Data Step**: - Accepts an integer parameter. - Returns a dictionary with training features and labels. ```python @step def load_data(parameter: int) -> Dict[str, Any]: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} ``` 2. **Train Model Step**: - Takes the dictionary from `load_data`. - Computes total features and labels, and trains a model. ```python @step def train_model(data: Dict[str, Any]) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") ``` 3. **Pipeline Definition**: - Chains the `load_data` and `train_model` steps. ```python @pipeline def simple_ml_pipeline(parameter: int): dataset = load_data(parameter=parameter) train_model(dataset) ``` This structure illustrates the flow of data between steps in a ZenML pipeline, emphasizing the importance of type annotations for effective data management. ================================================== === File: docs/book/getting-started/core-concepts.md === # ZenML Core Concepts Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines, facilitating collaboration among data scientists, ML engineers, and MLOps developers. The framework is built around three main threads: **Development**, **Execution**, and **Management**. ## 1. Development ### Steps - **Steps** are functions decorated with `@step`. They can have typed inputs and outputs. ```python @step def step_1() -> str: return "world" @step(enable_cache=False) def step_2(input_one: str, input_two: str) -> str: return f"{input_one} {input_two}" ``` ### Pipelines - **Pipelines** consist of a series of steps, defined using the `@pipeline` decorator. Steps can use outputs from previous steps or direct values. 
```python @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) if __name__ == "__main__": my_pipeline() ``` ### Artifacts - **Artifacts** are data tracked and stored by ZenML, produced by steps as inputs and outputs. They are serialized and deserialized using **Materializers**. ### Models - **Models** represent outputs of training processes, including weights and metadata. They are managed centrally via the ZenML API. ### Materializers - **Materializers** handle the serialization/deserialization of artifacts, allowing custom implementations for unsupported data types. ### Parameters & Settings - Steps can take parameters, which ZenML tracks to maintain experiment iterations. **Settings** configure runtime parameters for infrastructure and pipelines. ### Model Versions - A **Model** can have multiple versions, linking various entities to a unified view. ## 2. Execution ### Stacks & Components - A **Stack** is a collection of components (e.g., orchestrators, artifact stores) necessary for pipeline execution. Each component is extensible. ### Orchestrator - The **Orchestrator** coordinates the execution of steps within a pipeline, managing dependencies and execution order. ### Artifact Store - The **Artifact Store** houses all artifacts, tracking and versioning them for features like data caching. ### Flavor - **Flavors** are tailored solutions built on base abstractions for each stack component type, allowing for custom implementations. ### Stack Switching - ZenML allows easy switching between local and cloud stacks via a CLI command, separating pipeline code from stack configurations. ## 3. Management ### ZenML Server - A **ZenML Server** is required for remote stack components, managing entities like pipelines and models, and tracking metadata. ### Server Deployment - Users can deploy a ZenML server via the **ZenML Pro SaaS** or self-hosting options. ### Metadata Tracking - The ZenML Server tracks metadata for pipeline runs, aiding in troubleshooting. ### Secrets Management - The server acts as a centralized secrets store for sensitive data, configurable with various backends (e.g., AWS Secrets Manager). ### Collaboration - ZenML supports collaboration among team members, allowing sharing of pipelines and resources through the ZenML Server. ### Dashboard - The **ZenML Dashboard** visualizes pipelines and stacks, facilitating collaboration and resource sharing. ### VS Code Extension - A **VS Code extension** allows interaction with ZenML stacks and pipelines directly from the editor, enhancing workflow efficiency. This summary encapsulates the essential concepts and functionalities of ZenML, enabling effective understanding and application in MLOps workflows. ================================================== === File: docs/book/getting-started/system-architectures.md === # ZenML System Architecture Overview This guide outlines the deployment options for ZenML, including ZenML OSS (self-hosted), ZenML Pro (SaaS or self-hosted), and their respective components. ## ZenML OSS (Self-hosted) - **ZenML OSS Server**: A FastAPI application managing metadata for pipelines and artifacts. - **OSS Metadata Store**: Stores all tenant metadata, including ML tracking and versioning. - **OSS Dashboard**: A ReactJS application displaying pipelines and runs. - **Secrets Store**: Securely stores credentials for accessing customer infrastructure. ZenML OSS is available under the Apache 2.0 license. 
For deployment instructions, refer to the [deployment guide](./deploying-zenml/README.md). ## ZenML Pro (SaaS or Self-hosted) ZenML Pro enhances OSS with additional components: - **ZenML Pro Control Plane**: Central management for all tenants. - **Pro Dashboard**: An upgraded dashboard with additional features. - **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management. - **Pro Add-ons**: Python modules for enhanced functionality. - **Identity Provider**: Supports flexible authentication, integrating with Auth0 for cloud deployments or custom OIDC for self-hosted setups. ZenML Pro can be easily integrated with existing ZenML OSS deployments. ### ZenML Pro SaaS Architecture In the SaaS model, ZenML services are hosted by ZenML, with customer secrets managed by the ZenML Pro Control Plane. ML metadata is stored on ZenML infrastructure, while actual ML data artifacts reside in the customer's cloud. For sensitive metadata, a hybrid option allows customers to store secrets on their side while connecting to the ZenML server. ### ZenML Pro Self-Hosted Architecture For self-hosted deployments, all services and data are managed on the customer's infrastructure, ensuring maximum security. For more details on core concepts, refer to the respective guides for [ZenML OSS](../getting-started/core-concepts.md) and [ZenML Pro](../getting-started/zenml-pro/core-concepts.md). Interested in ZenML Pro? [Sign up](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) for a free trial. ================================================== === File: docs/book/getting-started/installation.md === # ZenML Installation and Getting Started ## Installation **ZenML** is a Python package installable via `pip`: ```shell pip install zenml ``` **Supported Python Versions:** ZenML works with **Python 3.9, 3.10, 3.11, and 3.12**. ## Dashboard Installation To access the ZenML web dashboard locally, install optional dependencies: ```shell pip install "zenml[server]" ``` **Recommendation:** Use a virtual environment (e.g., [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) or [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv)). ## MacOS with Apple Silicon (M1, M2) Set this environment variable for local server connections: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is not needed if using ZenML as a client. ## Nightly Builds For the latest unstable version, install the nightly build: ```shell pip install zenml-nightly ``` ## Verifying Installation Check installation success via Bash or Python: ```bash zenml version ``` ```python import zenml print(zenml.__version__) ``` ## Docker Usage ZenML is available as a Docker image: Start a bash environment: ```shell docker run -it zenmldocker/zenml /bin/bash ``` Run the ZenML server: ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ``` ## Deploying the Server To run ZenML with a local dashboard: ```shell pip install "zenml[server]" zenml login --local ``` For advanced features, deploy a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or signing up for a free [ZenML Pro](https://cloud.zenml.io/signup?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) account. 
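Once a server is available, the client can be pointed at it and the active configuration verified; a minimal sketch (the URL is a placeholder for your own deployment):

```shell
zenml login https://zenml.example.com
zenml status
```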
================================================== === File: docs/book/getting-started/zenml-pro/teams.md === ### Teams in ZenML Pro Overview ZenML Pro introduces **Teams** to efficiently manage groups of users within your organization and tenants. A team acts as a single entity, simplifying user management and access control. #### Key Benefits of Teams 1. **Group Management**: Manage permissions for multiple users simultaneously. 2. **Organizational Structure**: Align teams with your company's structure or projects. 3. **Simplified Access Control**: Assign roles to teams instead of individual users. #### Creating and Managing Teams - **Creation Steps**: 1. Navigate to Organization settings. 2. Click on the "Teams" tab. 3. Use "Add team" to create a new team. **Required Information**: - Team name - Description (optional) - Initial team members #### Adding Users to Teams 1. Go to the "Teams" tab in Organization settings. 2. Select the desired team. 3. Click "Add Members." 4. Choose users to add. #### Assigning Teams to Tenants 1. Go to the tenant settings page. 2. Click on the "Members" tab and then the "Teams" tab. 3. Select "Add Team." 4. Choose the team and assign a role. #### Team Roles and Permissions When a role (e.g., Admin, Editor, Viewer) is assigned to a team within a tenant, all team members inherit the associated permissions. #### Best Practices for Using Teams 1. **Reflect Your Organization**: Create teams that mirror your structure. 2. **Combine with Custom Roles**: Use custom roles for detailed access control. 3. **Regular Audits**: Review team memberships and roles periodically. 4. **Document Team Purposes**: Keep clear documentation on each team's objectives and projects. Utilizing Teams in ZenML Pro enhances user management, access control, and organization of MLOps workflows across your organization and tenants. ================================================== === File: docs/book/getting-started/zenml-pro/organization.md === # Organizations in ZenML Pro ZenML Pro organizes your work experience around the concept of an **Organization**, the top-level structure in the ZenML Cloud environment. An organization typically includes a group of users and one or more [tenants](./tenants.md). ## Inviting Team Members To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Once part of the organization, users can log in to all accessible tenants. ## Managing Organization Settings Organization settings, including billing and member roles, are managed at the organization level. Access these settings by clicking your profile picture in the top right corner and selecting "Settings". ## API Operations Additional operations related to Organizations can be performed via the API. More details are available at [ZenML Cloud API](https://cloudapi.zenml.io/). ================================================== === File: docs/book/getting-started/zenml-pro/self-hosted.md === # ZenML Pro Self-Hosted Deployment Guide Summary ## Overview ZenML Pro can be self-hosted in a Kubernetes cluster. Access to ZenML Pro container images is required, along with infrastructure components like a Kubernetes cluster, database server, load balancer, Ingress controller, HTTPS certificates, and DNS rules. Note that Single Sign-On (SSO) and Run Templates are not available in the on-prem version. 
## Prerequisites ### Software Artifacts - **ZenML Pro Control Plane Artifacts**: - AWS: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api`, `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-api`, `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-dashboard` - Helm chart: `oci://public.ecr.aws/zenml/zenml-pro` - **ZenML Pro Tenant Server Artifacts**: - AWS: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-server` - Helm chart: `oci://public.ecr.aws/zenml/zenml` - **ZenML Pro Client Artifacts**: Public image at `zenmldocker/zenml` on Docker Hub. ### Accessing Container Images - **AWS**: Create an IAM user/role with the `AmazonEC2ContainerRegistryReadOnly` policy. Authenticate the Docker client using the AWS CLI. - **GCP**: Create a service account with access to the Artifact Registry. Authenticate the Docker client using the Google Cloud CLI. ### Air-Gapped Installation For air-gapped environments, download all required artifacts on a machine with internet access, save them, and transfer them to the air-gapped environment. Load the artifacts and configure the Helm charts accordingly. ## Infrastructure Requirements 1. **Kubernetes Cluster**: Required for deployment. 2. **Database Server**: MySQL for tenant servers; either MySQL or Postgres for the control plane. 3. **Ingress Controller**: For HTTP(S) traffic routing (e.g., NGINX, Traefik). 4. **Domain Name**: FQDN for the control plane and tenants. 5. **SSL Certificate**: Required for secure communication. ## Installation Steps ### Stage 1: Install ZenML Pro Control Plane 1. **Set up Credentials**: Create a Kubernetes secret for image pull access. 2. **Configure Helm Chart**: Customize `values.yaml` for deployment settings. 3. **Install Helm Chart**: ```bash helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version <version> --values my-values.yaml ``` ### Stage 2: Enroll and Deploy ZenML Pro Tenants 1. **Enroll a Tenant**: Use the `enroll-tenant.py` script to create a tenant entry and generate a Helm values file. 2. **Configure Tenant Helm Chart**: Fill in the necessary values in the generated YAML file. 3. **Deploy Tenant Server**: ```bash helm --namespace zenml-pro-<tenant-id> upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version <version> --values <tenant-values-file>.yaml ``` ## Day 2 Operations: Upgrades and Updates 1. **Upgrade ZenML Pro Control Plane**: Upgrade using Helm with either the existing values or modified values. 2. **Upgrade Tenant Servers**: Follow the same process as for the control plane, making sure to upgrade the control plane first. This guide covers the essential steps and configurations for deploying and managing ZenML Pro in a self-hosted environment. ================================================== === File: docs/book/getting-started/zenml-pro/core-concepts.md === # ZenML Pro Core Concepts ZenML Pro features a distinct entity hierarchy compared to the open-source version. Below are the key components: - **Organization**: A collection of users, teams, and tenants. - **Tenant**: An isolated ZenML server deployment containing all project resources. - **Teams**: Groups of users within an organization for resource management. - **Users**: Individual accounts on a ZenML Pro instance. - **Roles**: Control user actions within a tenant or organization.
- **Templates**: Configurable pipeline runs that can be re-executed. For detailed information, refer to the linked pages: | **Concept** | **Description** | **Link** | |-------------------|-------------------------------------------------|-----------------------| | Organizations | Managing organizations in ZenML Pro. | [organization.md](./organization.md) | | Tenants | Working with tenants in ZenML Pro. | [tenants.md](./tenants.md) | | Teams | Team management in ZenML Pro. | [teams.md](./teams.md) | | Roles & Permissions| Role-based access control in ZenML Pro. | [roles.md](./roles.md) | ================================================== === File: docs/book/getting-started/zenml-pro/pro-api.md === # ZenML Pro API Overview The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, enabling interaction with ZenML resources, including tenant, organization, user, and role management. The SaaS version is accessible at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ## Authentication To authenticate requests: - **Browser Authentication**: If logged into ZenML Pro, use the same session for API requests. - **API Tokens**: For programmatic access, generate API tokens valid for 1 hour. ### Generating API Tokens 1. Go to organization settings in the ZenML Pro dashboard. 2. Select "API Tokens" from the sidebar. 3. Click "Create new token" and use the token as a bearer token in HTTP requests. **Example using curl**: ```bash curl -H "Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/users/me ``` **Important Notes**: - Tokens expire after 1 hour and cannot be retrieved post-generation. - Tokens are user-scoped and inherit permissions. ### Tenant Programmatic Access Access the ZenML Pro tenant API via: - Temporary API tokens - Service account API keys ## Key API Endpoints ### Tenant Management - List tenants: `GET /tenants` - Create a tenant: `POST /tenants` - Get tenant details: `GET /tenants/{tenant_id}` - Update a tenant: `PATCH /tenants/{tenant_id}` ### Organization Management - List organizations: `GET /organizations` - Create an organization: `POST /organizations` - Get organization details: `GET /organizations/{organization_id}` - Update an organization: `PATCH /organizations/{organization_id}` ### User Management - List users: `GET /users` - Get current user: `GET /users/me` - Update user: `PATCH /users/{user_id}` ### Role-Based Access Control - Create a role: `POST /roles` - Assign a role: `POST /roles/{role_id}/assignments` - Check permissions: `GET /permissions` ## Error Handling The API uses standard HTTP status codes. Error responses include messages and additional details. ## Rate Limiting The API may enforce rate limits. Exceeding limits results in a 429 status code. Implement backoff and retry logic accordingly. For complete API documentation, visit [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ================================================== === File: docs/book/getting-started/zenml-pro/roles.md === # ZenML Pro: Roles and Permissions ZenML Pro employs a role-based access control (RBAC) system for managing permissions within organizations and tenants. This guide outlines the available roles, assignment processes, and the creation of custom roles. ## Organization-Level Roles ZenML Pro provides three predefined organization roles: 1. **Org Admin**: Full control, can manage members, tenants, billing, and assign roles. 2. **Org Editor**: Can manage tenants and teams but lacks access to subscription info and cannot delete the organization. 3. 
**Org Viewer**: Read-only access to tenants. ### Assigning Organization Roles To assign roles: 1. Go to the Organization settings page. 2. Click on the "Members" tab to update roles or add new members. **Notes**: - Organization admins can add themselves to any tenant role. - Editors and viewers cannot join tenants they are not part of. - Custom organization roles can be created via the [ZenML Pro API](https://cloudapi.zenml.io/). ## Tenant-Level Roles Tenant roles dictate user permissions within a specific ZenML tenant. Predefined roles include: 1. **Admin**: Full control over the tenant's resources. 2. **Editor**: Can create and share resources but cannot modify or delete them. 3. **Viewer**: Read-only access to resources. ### Custom Roles To create a custom tenant role: 1. Access the tenant settings page. 2. Click "Roles" and select "Add Custom Role." 3. Define the role's name, description, and base role for permissions. 4. Edit permissions for various resources (e.g., Artifacts, Models, Pipelines). ### Managing Role Permissions To manage permissions: 1. Go to the Roles page in tenant settings. 2. Select the role and click "Edit Permissions." 3. Adjust permissions as needed. ## Sharing Individual Resources Users can share specific resources through the dashboard. ## Best Practices 1. **Least Privilege**: Assign minimal necessary permissions. 2. **Regular Audits**: Review role assignments periodically. 3. **Use Custom Roles**: Tailor roles for specific team needs. 4. **Document Roles**: Keep records of custom roles and their purposes. By utilizing ZenML Pro's RBAC, teams can maintain security while promoting collaboration in MLOps projects. ================================================== === File: docs/book/getting-started/zenml-pro/tenants.md === ### ZenML Pro Tenants Overview **Tenants** are isolated deployments of the ZenML server, each with its own users, roles, and resources. All ZenML Pro functionalities, including pipelines, stacks, and runs, are scoped to a tenant. The ZenML server in a tenant includes all open-source features plus additional Pro features. #### Creating a Tenant To create a tenant: 1. Navigate to your organization page. 2. Click "+ New Tenant". 3. Enter a name and click "Create Tenant". You can also create a tenant via the Cloud API using the `POST /tenants` endpoint at `https://cloudapi.zenml.io/` (see the request sketch after the best-practices list below). #### Organizing Tenants Effective tenant organization is vital for managing MLOps infrastructure. Consider these two dimensions: 1. **By Development Stage**: - **Staging Tenants**: For development, testing, and experimentation. - **Production Tenants**: For live services, with stricter access controls and monitoring. 2. **By Business Logic**: - **Project-based**: Separate tenants for different ML projects (e.g., Recommendation System, NLP). - **Team-based**: Align tenants with organizational teams (e.g., Data Science Team). - **Data Sensitivity**: Separate tenants based on data classification (e.g., Public, Internal, Confidential). #### Best Practices for Tenant Organization - **Naming Conventions**: Use clear, descriptive names. - **Access Control**: Implement role-based access control. - **Documentation**: Maintain clear records of tenant purposes. - **Regular Reviews**: Periodically assess tenant structure. - **Scalability**: Design for future growth.
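As referenced above, a minimal sketch of creating a tenant through the Cloud API, assuming a bearer token from the ZenML Pro API and a simple JSON body with a `name` field (the exact request schema is not covered in this summary):

```bash
curl -X POST "https://cloudapi.zenml.io/tenants" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "recommender-staging"}'
```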
#### Using Your Tenant Tenants enable running pipelines, conducting experiments, and utilizing Pro-only features such as: - Model Control Plane - Artifact Control Plane - Dashboard pipeline execution - Pipeline run templates #### Accessing Tenant Documentation Each tenant has a connection URL for the `zenml` client and to access the OpenAPI specification. Visit `/docs` for the list of available methods, including pipeline execution via the REST API. For more details, refer to the API documentation [here](../../reference/api-reference.md). ================================================== === File: docs/book/getting-started/zenml-pro/README.md === # ZenML Pro Overview ZenML Pro enhances the Open Source ZenML product with several key features: - **Managed Deployment**: Deploy multiple ZenML servers (tenants) easily. - **User Management**: Create organizations and teams for scalable user management. - **Role-Based Access Control**: Implement customizable roles for secure resource management. - **Model and Artifact Control**: Utilize the Model Control Plane and Artifact Control Plane for effective tracking and management of ML assets. - **Triggers and Run Templates**: Create and run pipeline templates via the dashboard or API for quick iterations. - **Early-Access Features**: Access pro-specific features like triggers, filters, and usage reports. For more information, visit the [ZenML website](https://zenml.io/pro). ## Deployment Scenarios ZenML Pro can be deployed as: - **SaaS**: Simplifies server management, allowing focus on MLOps workflows. - **Self-Hosted**: Fully deployable on your infrastructure. Refer to the [self-hosted deployment guide](./self-hosted.md) for details. ### Key Resources - [Tenants](./tenants.md) - [Organizations](./organization.md) - [Teams](./teams.md) - [Roles](./roles.md) - [Self-Hosted Deployments](./self-hosted.md) To assess ZenML Pro, create a [free account](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). ================================================== === File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md === ### Custom Secret Stores in ZenML The secrets store is essential for managing secrets in ZenML, responsible for storing, updating, and deleting secret values, while the metadata is stored in an SQL database. The interface for secrets stores is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` module. #### Secrets Store Interface ```python class SecretsStoreInterface(ABC): @abstractmethod def _initialize(self) -> None: """Initialize the secrets store.""" @abstractmethod def store_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Store secret values for a new secret.""" @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: """Retrieve secret values for an existing secret.""" @abstractmethod def update_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Update secret values for an existing secret.""" @abstractmethod def delete_secret_values(self, secret_id: UUID) -> None: """Delete secret values for an existing secret.""" ``` #### Creating a Custom Secrets Store To implement a custom secrets store: 1. **Inherit from Base Class**: Create a class that inherits from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the methods from `SecretsStoreInterface`. Set `SecretsStoreType.CUSTOM` as the `TYPE`. 2. 
**Configuration Class**: If configuration is needed, inherit from `SecretsStoreConfiguration` and define your parameters. Use this as the `CONFIG_TYPE`. 3. **Server Configuration**: Ensure your code is included in the ZenML server's container image. Configure the server to use your custom secrets store via environment variables or helm chart values, as detailed in the deployment guide. For complete documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface). ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-docker.md === ### Summary: Deploying ZenML in a Docker Container **Overview**: The ZenML server can be deployed using the Docker container image `zenmldocker/zenml-server`. This guide covers configuration options and deployment methods, including local testing and advanced setups. #### Local Deployment To quickly deploy ZenML locally using Docker, ensure Docker is installed and run: ```bash zenml login --local --docker ``` This command starts a ZenML server in a local Docker container, sharing an SQLite database. #### Configuration Options When deploying a custom ZenML server, configure the following environment variables: - **ZENML_STORE_URL**: Points to an SQLite or MySQL database. - SQLite: `sqlite:////path/to/zenml.db` - MySQL: `mysql://username:password@host:port/database` - **SSL Variables** (for MySQL with SSL): - **ZENML_STORE_SSL_CA** - **ZENML_STORE_SSL_CERT** - **ZENML_STORE_SSL_KEY** - **ZENML_STORE_SSL_VERIFY_SERVER_CERT** (default: `False`) - **Logging**: - **ZENML_LOGGING_VERBOSITY**: Set log level (default: `INFO`). - **Backup Strategy**: - **ZENML_STORE_BACKUP_STRATEGY**: Options include `in-memory`, `database`, and `dump-file`. - **Rate Limiting**: - **ZENML_SERVER_RATE_LIMIT_ENABLED**: Enables rate limiting (default: `0`). - **ZENML_SERVER_LOGIN_RATE_LIMIT_MINUTE**: Requests allowed per minute (default: `5`). - **Secrets Management**: Configure various secret store types (SQL, AWS, GCP, Azure, HashiCorp, Custom) using `ZENML_SECRETS_STORE_TYPE` and related variables. #### Running the ZenML Server Start the ZenML server with default settings: ```bash docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server ``` Access the dashboard at `http://localhost:8080` and create an initial admin user. #### Persisting Data To persist the SQLite database, mount a host directory: ```bash mkdir zenml-server docker run -it -d -p 8080:8080 --name zenml \ --mount type=bind,source=$PWD/zenml-server,target=/zenml/.zenconfig/local_stores/default_zen_store \ zenmldocker/zenml-server ``` #### Using MySQL Run a MySQL container: ```bash docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 ``` Connect ZenML to MySQL: ```bash docker run -it -d -p 8080:8080 --name zenml \ --env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \ zenmldocker/zenml-server ``` #### Docker Compose For multi-container setups, use Docker Compose with a `docker-compose.yml` file: ```yaml version: "3.9" services: mysql: image: mysql:8.0 environment: - MYSQL_ROOT_PASSWORD=password zenml: image: zenmldocker/zenml-server environment: - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml ``` Start with: ```bash docker compose up -d ``` #### Backup and Recovery ZenML automatically backs up the database before migrations. 
Configure backup strategies using `ZENML_STORE_BACKUP_STRATEGY`. #### Troubleshooting Check logs for server status: - CLI: `zenml logs -f` - Docker: `docker logs zenml -f` - Docker Compose: `docker compose logs -f` This guide provides essential commands and configurations for deploying and managing a ZenML server in a Docker environment. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md === ### Deploying ZenML to Hugging Face Spaces **Overview**: Hugging Face Spaces, a platform for hosting ML projects, allows for a quick, free deployment of ZenML. For production, enable [persistent storage](https://huggingface.co/docs/hub/en/spaces-storage) to avoid data loss. **Deployment Steps**: 1. **Create a Space**: Click [here](https://huggingface.co/new-space?template=zenml/zenml) to set up your ZenML app. Specify: - Owner (personal or organization) - Space name - Visibility (set to 'Public' for local connections) 2. **Machine Selection**: Choose a higher-tier machine for persistent uptime. For a MySQL database connection, configure accordingly. 3. **Customize Space**: Modify the README.md in "Files and Versions" for appearance settings. Refer to the [Hugging Face documentation](https://huggingface.co/docs/hub/spaces-config-reference) for configuration parameters. 4. **Monitor Status**: After creation, watch for the 'Building' status to change to 'Running'. Refresh if the ZenML login UI is not visible. 5. **Access Direct URL**: Click the three-dot menu to "Embed this Space" and copy the "Direct URL" (format: `https://<owner>-<space-name>.hf.space`). Use this URL to initialize your ZenML server and set up an admin account. **Connecting from Local Machine**: - Use the 'Direct URL' with the command: ```shell zenml login '<direct-url>' ``` - Access the ZenML dashboard in fullscreen via the Direct URL. **Configuration Options**: - Default uses SQLite; for persistence, modify the `Dockerfile` in the root directory. Refer to [advanced configuration options](deploy-with-docker.md#advanced-server-configuration-options) for details. - Use Hugging Face's 'Repository secrets' for managing secrets in your `Dockerfile`. **Security Note**: If using a cloud secrets backend, update your ZenML server password via the Dashboard settings to secure access. **Troubleshooting**: Access logs via the "Open Logs" button for server issues. For further assistance, contact support on the [Slack channel](https://zenml.io/slack/). **Upgrading ZenML Server**: - The default space updates automatically. For manual updates, use 'Factory reboot' in the 'Settings' tab (note: this wipes data unless using a persistent MySQL database). To revert to an earlier version, adjust the `FROM` statement in the `Dockerfile`. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-helm.md === ### Summary: Deploying ZenML in a Kubernetes Cluster with Helm #### Overview ZenML can be deployed in a Kubernetes cluster using a Helm chart. The chart is available on the [ArtifactHub repository](https://artifacthub.io/packages/helm/zenml/zenml), which includes templates, default values, and installation instructions.
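As a starting point for the configuration described below, the chart's default values can be exported locally and used as a reference when writing customizations (assuming Helm 3.8+ for OCI registry support):

```bash
# Dump the chart's default values to a local file for reference
helm show values oci://public.ecr.aws/zenml/zenml > default-values.yaml
```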
#### Prerequisites - A Kubernetes cluster - Recommended: MySQL-compatible database (version 8.0+) - Installed and configured [Kubernetes client](https://kubernetes.io/docs/tasks/tools/#kubectl) - Installed [Helm](https://helm.sh/docs/intro/install/) - Optional: External Secrets Manager (e.g., AWS Secrets Manager, GCP Secrets Manager) #### ZenML Helm Configuration - Review the [`values.yaml` file](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) for customizable settings. - Prepare information for your database and secrets management service. ##### Database Configuration Using an external MySQL-compatible database is recommended for production: - Hostname, port, username, password, and database name are required. - SSL certificates may be needed for secure connections. ##### Secrets Management Configuration If using an external secrets management service, gather the following based on the provider: - **AWS**: Region, access key ID, secret access key. - **GCP**: Project ID, service account with access. - **Azure**: Key Vault name, tenant ID, client ID, client secret. - **HashiCorp Vault**: Server URL, access token. #### Optional Cluster Services - **Ingress**: Use `nginx-ingress` for HTTP exposure; required for HTTPS. - **cert-manager**: For managing TLS certificates. #### ZenML Helm Installation 1. **Configure the Helm Chart**: - Pull the chart: ```bash helm pull oci://public.ecr.aws/zenml/zenml --version <version> --untar ``` - Create a `custom-values.yaml` file based on `values.yaml` for your configuration (an illustrative sketch appears at the end of this section). 2. **Install the Helm Chart**: ```bash helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml ``` 3. **Activate the ZenML Server**: - Visit the ZenML server URL to create an admin account. - Connect your local client: ```bash zenml login https://zenml.example.com:8080 --no-verify-ssl ``` #### Deployment Scenarios - **Minimal Deployment**: Uses SQLite, not exposed to the internet. ```yaml zenml: ingress: enabled: false ``` Access via port-forwarding: ```bash kubectl -n zenml-server port-forward svc/zenml-server 8080:8080 zenml login http://localhost:8080 ``` - **Basic Deployment**: Uses a local database with Ingress and TLS. Install `cert-manager` and `nginx-ingress`: ```bash helm repo add jetstack https://charts.jetstack.io helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace ``` - **Shared Ingress Controller**: Solutions for URL path conflicts include using a dedicated hostname or path. #### Secrets Store Configuration - Default is the SQL database; configure for external services (AWS, GCP, Azure, HashiCorp). - A backup secrets store can be configured similarly. #### Database Backup and Recovery - Automated backups before upgrades; not a long-term solution. - Configure the backup strategy via `zenml.database.backupStrategy`. #### Custom CA Certificates and Proxy Configuration - Provide custom CA certificates directly or via Kubernetes secrets. - Configure HTTP proxy settings if needed. This summary captures the essential steps and configurations for deploying ZenML in a Kubernetes environment using Helm.
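Putting the database and ingress settings above together, an illustrative `custom-values.yaml` might look like the sketch below. The key names follow the `zenml.*` structure used in this section (`zenml.ingress`, `zenml.database.backupStrategy`), but the exact schema should always be verified against the chart's own `values.yaml`:

```yaml
# Illustrative sketch only -- verify every key against the chart's values.yaml
zenml:
  database:
    # External MySQL-compatible database (recommended for production)
    url: "mysql://zenml_user:password@mysql.example.com:3306/zenml"
    # Assumed to mirror the documented backup options (in-memory, database, dump-file)
    backupStrategy: dump-file
  ingress:
    enabled: true
```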
================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md === ### Summary: Deploying ZenML with Custom Docker Images **Overview**: Deploying ZenML typically uses the default `zenmldocker/zenml-server` Docker image, but custom images may be necessary for specific scenarios, such as enabling artifact visualizations or using a forked ZenML repository. **Important Note**: Custom Docker images can only be deployed using [Docker](deploy-with-docker.md) or [Helm](deploy-with-helm.md). ### Steps to Build and Push a Custom ZenML Server Docker Image 1. **Set Up a Container Registry**: Create a free account on a registry like [Docker Hub](https://hub.docker.com/). 2. **Clone ZenML Repository**: Check out the desired branch (e.g., version 0.41.0): ```bash git checkout release/0.41.0 ``` 3. **Copy the Base Dockerfile**: ```bash cp docker/base.Dockerfile docker/custom.Dockerfile ``` 4. **Modify the Dockerfile**: - Add dependencies: ```bash RUN pip install <package-name> ``` - For forks, install local files: ```bash RUN pip install -e .[server,secrets-aws,secrets-gcp,secrets-azure,secrets-hashicorp,s3fs,gcsfs,adlfs,connectors-aws,connectors-gcp,connectors-azure] ``` 5. **Build and Push the Image**: ```bash docker build -f docker/custom.Dockerfile . -t <registry>/<image-name>:<tag> --platform linux/amd64 docker push <registry>/<image-name>:<tag> ``` ### Deploy ZenML with Your Custom Image #### Via Docker 1. Refer to the [ZenML Docker Deployment Guide](deploy-with-docker.md). 2. Replace `zenmldocker/zenml-server` with your custom image: ```bash docker run -it -d -p 8080:8080 --name zenml <registry>/<image-name>:<tag> ``` 3. For `docker-compose`, modify `docker-compose.yml`: ```yaml services: zenml: image: <registry>/<image-name>:<tag> ``` #### Via Helm 1. Refer to the [ZenML Helm Deployment Guide](deploy-with-helm.md). 2. Modify the `image` section in `values.yaml`: ```yaml zenml: image: repository: <registry>/<image-name> tag: <tag> ``` This guide provides the necessary steps and commands for deploying ZenML using custom Docker images. ================================================== === File: docs/book/getting-started/deploying-zenml/README.md === # Deploying ZenML ## Overview Deploying ZenML to a production environment provides benefits such as: 1. **Scalability**: Handles large-scale workloads for faster results. 2. **Reliability**: Ensures high availability and fault tolerance. 3. **Collaboration**: Facilitates teamwork and model iteration. ## Components A ZenML deployment includes: - **FastAPI server** with SQLite or MySQL database - **Python Client** for server interaction - **ReactJS dashboard** (optional) - **ZenML Pro API + Database + Dashboard** (optional) ### ZenML Python Client The ZenML client is a Python package installed via `pip`. It provides a command-line interface (`zenml`) for managing stacks and pipelines. The Python SDK allows for custom automations and metadata access (a short usage sketch follows the deployment options below). Full API documentation is available by appending `/docs` to the server URL. ## Deployment Scenarios Initially, ZenML runs locally with an SQLite database, limiting access to cloud components. Use `zenml login --local` to start a local server. For production, deploy the ZenML server centrally for team access and cloud component integration. ## Deployment Options 1. **Managed Deployment**: Use ZenML Pro for a managed server, ensuring data security and reduced server management. 2. **Self-hosted Deployment**: Deploy ZenML using methods like Docker, Helm, or HuggingFace Spaces, maintaining control over your infrastructure.
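As mentioned in the Python client description above, a small sketch of using the SDK against a deployed server, assuming the client has already been connected via `zenml login`:

```python
from zenml.client import Client

client = Client()

# The stack the client currently targets on the connected server
print(client.active_stack_model.name)

# A few of the most recent pipeline runs recorded on the server
for run in client.list_pipeline_runs(size=5):
    print(run.name, run.status)
```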
### Deployment Guides - [Deploying ZenML using ZenML Pro](../zenml-pro/README.md) - [Deploy with Docker](./deploy-with-docker.md) - [Deploy with Helm](./deploy-with-helm.md) - [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md) This documentation provides essential information for deploying ZenML, enhancing machine learning workflows for production success. ================================================== === File: docs/book/getting-started/deploying-zenml/secret-management.md === ### Secret Store Configuration and Management #### Centralized Secrets Store ZenML offers a centralized secrets management system for secure registration and management of secrets. Metadata (e.g., name, ID, owner) is stored in the ZenML server database, while actual secret values are managed separately in the ZenML Secrets Store. In local deployments, secrets are stored in a local SQLite database, whereas remote deployments use the configured secrets management back-end. **Supported Secrets Store Back-ends:** - Default SQL database (same as ZenML server) - AWS Secrets Manager - GCP Secret Manager - Azure Key Vault - HashiCorp Vault - Custom implementations #### Configuration and Deployment Configuring the secrets store back-end occurs at deployment. This includes selecting a back-end, authentication mechanisms, and providing necessary credentials. The ZenML secrets store utilizes the ZenML Service Connector for authentication, promoting the principle of least privilege. The configuration can be updated and redeployed at any time, allowing for easy switching between back-ends. Following the documented migration strategy is recommended to minimize downtime and ensure existing secrets are migrated properly. #### Backup Secrets Store ZenML can connect to a secondary Secrets Store for high availability, backup, and disaster recovery. The primary store is accessed first, with the backup kept in sync. If the primary store is unreachable, the server will attempt to access the backup. **CLI Commands:** - `zenml secret backup`: Backs up secrets from the primary to the backup store. - `zenml secret restore`: Restores secrets from the backup to the primary store. #### Secrets Migration Strategy To change the provider or location of secrets, a migration strategy is required to ensure existing secrets are accessible. The migration process involves: 1. Configuring the ZenML server to use the new store as the secondary store. 2. Redeploying the server. 3. Using `zenml secret backup` to transfer secrets from the primary to the secondary store. 4. Setting the new store as the primary and removing the old one. 5. Redeploying the server. This strategy is unnecessary if only credentials or authentication methods change without altering the secrets' location. For further deployment details, refer to the ZenML deployment guide. ==================================================