=== File: docs/book/introduction.md === # ZenML Documentation Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines. It separates infrastructure from code, facilitating collaboration among developers. ## Key Features ### For MLOps Platform Engineers - **ZenML Pro**: Offers a managed control plane with features like CI/CD, Model Control Plane, and RBAC. - **Self-hosted Deployment**: Deploy ZenML on any cloud provider using Terraform utilities. ```bash zenml stack register --provider aws zenml stack deploy --provider gcp ``` - **Standardization**: Register environments as ZenML stacks for consistent MLOps tooling. ```bash zenml orchestrator register kfp_orchestrator -f kubeflow zenml stack register production --orchestrator kubeflow ... ``` - **No Vendor Lock-In**: Easily switch between cloud providers. ```bash zenml stack set gcp python run.py # Run in GCP zenml stack set aws python run.py # Run in AWS ``` ### For Data Scientists - **Local Development**: Develop ML models locally and switch to production seamlessly. ```bash python run.py # Local development zenml stack set production python run.py # Run on production ``` - **Pythonic SDK**: Use decorators to create pipelines. ```python from zenml import pipeline, step @step def step_1() -> str: return "world" @step def step_2(input_one: str, input_two: str) -> None: print(f"{input_one} {input_two}") @pipeline def my_pipeline(): step_2(input_one="hello", input_two=step_1()) my_pipeline() ``` - **Automatic Metadata Tracking**: Tracks metadata of runs and versions datasets/models. ### For ML Engineers - **ML Lifecycle Management**: Manage ML workflows and environments efficiently. ```bash zenml stack set staging python run.py # Test on staging zenml stack set production python run.py # Run in production ``` - **Reproducibility**: Automatically track and version stacks, pipelines, and artifacts. - **Automated Deployments**: Define workflows as ZenML pipelines for easy deployment. ```python from zenml.integrations.seldon.steps import seldon_model_deployer_step @pipeline def my_pipeline(): data = data_loader_step() model = model_trainer_step(data) seldon_model_deployer_step(model) ``` ## Additional Resources - **Learn More**: Explore guides on production setup, core concepts, and examples through the ZenML documentation. ZenML integrates with popular tools like Weights & Biases, MLflow, and Neptune for enhanced experiment tracking and reproducibility. ================================================== === File: docs/book/user-guide/starter-guide/track-ml-models.md === ### ZenML Model Control Plane Overview **ZenML Model Definition**: - A `Model` in ZenML is an entity that groups pipelines, artifacts, metadata, and business data, encapsulating the business logic of an ML product. It includes technical models (model files with weights and parameters), training data, and predictions. **Model Management**: - Models are first-class citizens in ZenML, accessible via the ZenML API and the ZenML Pro dashboard. - **CLI Commands**: - List models: `zenml model list` - List model versions: `zenml model version list ` - List associated pipeline runs and artifacts: - `zenml model version runs ` - `zenml model version data_artifacts ` - `zenml model version model_artifacts ` - `zenml model version deployment_artifacts ` ### Configuring a Model in a Pipeline - To link artifacts generated during a pipeline run to a model, pass a `Model` object in the pipeline or step configuration. 
This provides lineage tracking. **Example Code**: ```python from zenml import pipeline, step, Model model = Model(name="iris_classifier", version=None, license="Apache 2.0", description="A classification model for the iris dataset.") @step(model=model) def svc_trainer(...): ... @pipeline(model=model) def training_pipeline(gamma: float = 0.002): X_train, y_train = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ``` ### Fetching the Model in a Pipeline - Models can be accessed via `StepContext` or `PipelineContext`. **Example Code**: ```python from zenml import Model, get_step_context, get_pipeline_context, step, pipeline @step def svc_trainer(X_train, y_train, gamma=0.001): model = get_step_context().model @pipeline(model=Model(name="iris_classifier", version="production")) def training_pipeline(gamma=0.002): model = get_pipeline_context().model ``` ### Logging Metadata to the Model - Metadata can be logged to a model using `log_model_metadata`. **Example Code**: ```python from zenml import get_step_context, step, log_model_metadata @step def svc_trainer(X_train, y_train, gamma=0.001): model = get_step_context().model log_model_metadata(model_name="iris_classifier", metadata={"accuracy": float(accuracy)}) ``` ### Model Stages - Models can exist in various stages: `staging`, `production`, `latest`, and `archived`. **Example Code**: ```python from zenml import Model model = Model(name="iris_classifier", version="latest") model.set_stage(stage="production", force=True) ``` **CLI Commands**: ```shell zenml model version list --stage staging zenml model version update -s production ``` ### Conclusion ZenML's Model Control Plane provides robust features for managing ML models, including configuration, metadata logging, and versioning. For detailed exploration, refer to the [Model Management guide](../../how-to/model-management-metrics/model-control-plane/README.md).
```python from zenml import step, get_step_context, ArtifactConfig from typing_extensions import Annotated @step def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"metadata_key": "metadata_value"}, tags=["tag_name"])]: return "string" ``` #### Comparing Metadata Across Runs (Pro) - The ZenML Pro dashboard includes an Experiment Comparison tool for analyzing metadata across pipeline runs. - Two views available: **Table View** (structured comparison) and **Parallel Coordinates View** (relationship identification). #### Artifact Types - Specify artifact types for better filtering and visualization in the dashboard. ```python from zenml import ArtifactConfig, step from zenml.enums import ArtifactType @step def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactType.MODEL)]: return MyCustomModel(...) ``` #### Consuming External Artifacts - Use `ExternalArtifact` to initialize artifacts from external sources. ```python import numpy as np from zenml import ExternalArtifact, pipeline, step @step def print_data(data: np.ndarray): print(data) @pipeline def printing_pipeline(): data = ExternalArtifact(value=np.array([0])) print_data(data=data) ``` #### Managing Artifacts Not Produced by ZenML - Artifacts can be created externally and registered in ZenML. ```python from zenml import save_artifact from zenml.client import Client model = ... prediction = model.predict([[1, 1, 1, 1]]) save_artifact(prediction, name="iris_predictions") ``` #### Logging Metadata for Artifacts - Associate metadata with artifacts for better understanding and tracking. ```python from zenml import step, log_artifact_metadata @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model ``` ### Example Code A complete example demonstrating artifact management: ```python from typing import Optional, Tuple from typing_extensions import Annotated import numpy as np from sklearn.base import ClassifierMixin from sklearn.datasets import load_digits from sklearn.svm import SVC from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata, save_artifact, load_artifact from zenml.client import Client @step def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset", tags=["digits"])]: digits = load_digits() return (digits.images.reshape((len(digits.images), -1)), digits.target) @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline def model_finetuning_pipeline(dataset_version: Optional[str] = None, model_version: Optional[str] = None): client = Client() dataset = client.get_artifact_version(name_id_or_prefix="my_dataset", version=dataset_version) if dataset_version else versioned_data_loader_step() model = client.get_artifact_version(name_id_or_prefix="my_model", version=model_version) model_finetuner_step(model=model, dataset=dataset) def main(): untrained_model = SVC(gamma=0.001) save_artifact(untrained_model, name="my_model", version="1", tags=["SVC", "untrained"])
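    # Run the pipeline twice: first on freshly loaded data, then against dataset version "1"
    # re-fetched from the artifact store; finally load the latest trained model and the old
    # dataset back from ZenML for a prediction.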
model_finetuning_pipeline() model_finetuning_pipeline(dataset_version="1") latest_trained_model = load_artifact("my_model") old_dataset = load_artifact("my_dataset", version="1") latest_trained_model.predict(old_dataset[0]) if __name__ == "__main__": main() ``` This example illustrates the creation and management of datasets and models, including versioning and metadata logging. For more details, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md === # ZenML Documentation Summary ## Overview ZenML facilitates the creation and management of modular, scalable machine learning (ML) pipelines by decoupling stages like data ingestion, preprocessing, and model evaluation. Each stage is represented as a **Step**, which can be integrated into an end-to-end **Pipeline**. ## Installation To get started, install ZenML: ```shell pip install "zenml[server]" zenml login --local # Launches the dashboard locally ``` ## Simple ML Pipeline Example A basic ML pipeline can be set up using ZenML. Below is an example that demonstrates loading data and training a model. ### Code Example ```python from zenml import pipeline, step @step def load_data() -> dict: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) if __name__ == "__main__": run = simple_ml_pipeline() ``` ### Running the Pipeline Execute the script with: ```bash $ python run.py ``` This will initiate the pipeline and display execution details in the terminal. ## Dashboard After execution, view results in the ZenML Dashboard by running: ```bash zenml login --local ``` Access the dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/) and log in with the username **"default"**. ## Steps and Artifacts Each function in the pipeline is a `step`, and they are connected by `artifacts`, which are the outputs of one step used as inputs to another. ZenML automatically tracks these artifacts and their configurations for reproducibility. ## Full ML Workflow Example To expand to a complete ML workflow, use the Iris dataset and train a Support Vector Classifier (SVC). 
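Before moving on to the full workflow, here is a minimal sketch of how the artifacts tracked for `simple_ml_pipeline` above can be fetched back programmatically via the ZenML client. This assumes the pipeline has been run at least once; the exact response attributes can vary slightly between ZenML versions:

```python
from zenml.client import Client

# Fetch the most recent run of the pipeline registered above
run = Client().get_pipeline("simple_ml_pipeline").last_run

# Look up the step that produced the dataset and materialize its single output artifact
dataset_artifact = run.steps["load_data"].output
print(dataset_artifact.load())  # -> {'features': [...], 'labels': [...]}
```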
### Requirements Install necessary packages: ```bash pip install matplotlib zenml integration install sklearn -y ``` ### Data Loader with Multiple Outputs Define a data loader step: ```python from typing_extensions import Annotated, Tuple import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split import logging @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: logging.info("Loading iris...") iris = load_iris(as_frame=True) X_train, X_test, y_train, y_test = train_test_split( iris.data, iris.target, test_size=0.2, random_state=42 ) return X_train, X_test, y_train, y_test ``` ### Parameterized Training Step Create a training step for the SVC: ```python from sklearn.base import ClassifierMixin from sklearn.svm import SVC @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc ``` ### Pipeline Definition Combine steps into a pipeline: ```python @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline(gamma=0.0015) ``` ### YAML Configuration Configure pipeline runs using a YAML file: ```python training_pipeline = training_pipeline.with_options( config_path='/local/path/to/config.yaml' ) training_pipeline() ``` Example YAML file: ```yaml parameters: gamma: 0.01 ``` ### Full Code Example The complete code for the workflow is as follows: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ``` This summary captures the essential technical details and steps for creating and managing ML pipelines using ZenML, ensuring clarity and conciseness. ================================================== === File: docs/book/user-guide/starter-guide/cache-previous-executions.md === ### Summary of ZenML Caching Documentation **Overview**: ZenML enhances machine learning pipeline development through caching, allowing for quicker iterations by reusing outputs from previous runs when inputs, parameters, or code remain unchanged. 
**Key Points**: - **Caching Behavior**: - Caching is enabled by default in ZenML. - Outputs are stored in the artifact store, allowing steps to be skipped if they haven't changed. - If no changes occur, ZenML will use cached outputs, saving time and resources. - To disable client-side caching, set the environment variable `ZENML_PREVENT_CLIENT_SIDE_CACHING=True`. - **Manual Caching Control**: - Caching does not automatically detect external changes. Use `enable_cache=False` for steps dependent on external inputs: ```python @step(enable_cache=False) def load_data_from_external_system(...): # This step will always run ``` - **Configuring Caching**: - **Pipeline Level**: Set caching in the `@pipeline` decorator: ```python @pipeline(enable_cache=False) def first_pipeline(...): """Pipeline with cache disabled""" ``` - **Dynamic Configuration**: Override caching settings at runtime: ```python first_pipeline = first_pipeline.with_options(enable_cache=False) ``` - **Step Level**: Control caching for individual steps: ```python @step(enable_cache=False) def import_data_from_api(...): """Import most up-to-date data from public API""" ``` **Code Example**: The following script demonstrates caching behavior in a ZenML pipeline: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() logger.info("\n\nFirst step cached, second not due to parameter change") training_pipeline(gamma=0.0001) svc_trainer = svc_trainer.with_options(enable_cache=False) logger.info("\n\nFirst step cached, second not due to settings") training_pipeline() logger.info("\n\nCaching disabled for the entire pipeline") training_pipeline.with_options(enable_cache=False)() ``` This script illustrates how caching works in ZenML, including how to disable it at various levels. ================================================== === File: docs/book/user-guide/starter-guide/starter-project.md === ### Summary of ZenML Starter Project Documentation This documentation provides a guide for initiating a simple MLOps project using ZenML. Key components of an MLOps system covered include pipelines, artifacts, and models. #### Getting Started 1. **Create a Virtual Environment**: Start with a fresh environment without dependencies. 2. **Install Dependencies**: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 3. 
**Initialize Project with ZenML Templates**: ```bash mkdir zenml_starter cd zenml_starter zenml init --template starter --template-with-defaults pip install -r requirements.txt ``` **Alternative Method**: Clone the MLOps starter example if the above does not work: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/mlops_starter pip install -r requirements.txt zenml init ``` #### Learning Outcomes By following the guide or the accompanying Jupyter notebook, you will execute three pipelines: - **Feature Engineering Pipeline**: Loads and prepares data for training. - **Training Pipeline**: Trains a model using the preprocessed dataset. - **Batch Inference Pipeline**: Runs predictions on new data using the trained model. #### Conclusion and Next Steps This concludes the introductory chapter of your MLOps journey with ZenML. Experiment with ZenML to solidify your understanding, and when ready, proceed to the [production guide](../production-guide/) for advanced topics. ================================================== === File: docs/book/user-guide/starter-guide/README.md === # ZenML Starter Guide Summary The ZenML Starter Guide is designed for MLOps engineers and data scientists to build robust ML platforms using the ZenML framework. It provides foundational knowledge and tools for managing machine learning operations. ## Key Topics Covered: - **Creating Your First ML Pipeline**: Instructions on building a basic ML pipeline. - **Understanding Caching Between Pipeline Steps**: Techniques for optimizing pipeline execution. - **Managing Data and Data Versioning**: Best practices for handling datasets and their versions. - **Tracking Your Machine Learning Models**: Methods for monitoring and managing ML models. ## Prerequisites: - A Python environment set up. - `virtualenv` installed for project isolation. By the end of the guide, users will complete a starter project, marking the beginning of their MLOps journey with ZenML. This guide serves as both an introduction to ZenML and a foundational resource for MLOps practices. ================================================== === File: docs/book/user-guide/production-guide/ci-cd.md === # Managing ZenML Pipeline Lifecycle with CI/CD ## Overview This documentation outlines how to manage the lifecycle of a ZenML pipeline using Continuous Integration (CI) and Continuous Delivery (CD) through GitHub Actions. It emphasizes the transition from local execution to a centralized workflow engine for automated testing and deployment. ## Setting Up CI/CD To implement CI/CD, follow these steps: 1. **Create an API Key in ZenML**: Use the command below to generate an API key for machine-to-machine connections: ```bash zenml service-account create github_action_api_key ``` This will return an API key that must be stored securely. 2. **Configure GitHub Secrets**: Store the generated `ZENML_API_KEY` in your GitHub repository secrets. This allows secure access to the API key during CI/CD operations. 3. **(Optional) Set Up Staging and Production Stacks**: You can configure different stacks for staging and production. This may involve using different data sources or configuration files for each environment. 4. **Trigger Pipeline on Pull Requests**: Set up a GitHub Action to run your pipeline automatically on pull requests. Use the following YAML configuration: ```yaml on: pull_request: branches: [ staging, main ] ``` 5. 
**Define Job Steps**: Here’s a simplified version of the job configuration: ```yaml jobs: run-staging-workflow: runs-on: run-zenml-pipeline env: ZENML_STORE_URL: ${{ secrets.ZENML_HOST }} ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }} ZENML_STACK: stack_name ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }} ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }} ``` 6. **Install Requirements and Run Pipeline**: Include steps to check out code, set up Python, install dependencies, connect to the ZenML server, set the active stack, and run the pipeline: ```yaml steps: - name: Check out repository code uses: actions/checkout@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install requirements run: pip3 install -r requirements.txt - name: Confirm ZenML client is connected run: zenml status - name: Set stack run: zenml stack set ${{ env.ZENML_STACK }} - name: Run pipeline run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} ``` 7. **(Optional) Comment Metrics on PR**: Configure the workflow to leave a report based on the pipeline results on the pull request. ## Additional Resources For a practical example, refer to the [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/), which provides a template for automating CI/CD with ZenML. ================================================== === File: docs/book/user-guide/production-guide/remote-storage.md === ### Summary: Transitioning to Remote Artifact Storage in ZenML #### Overview ZenML allows users to transition from local artifact storage to remote storage, enhancing collaboration and scalability. Remote storage enables artifact accessibility from anywhere, crucial for team environments and managing larger datasets. #### Connecting Remote Storage When using remote storage, artifacts are stored centrally without changing the pipeline execution process. #### Provisioning Remote Artifact Stores ZenML supports various artifact store flavors. Below are instructions for major cloud providers: - **AWS (S3)** 1. Install AWS CLI. 2. Install ZenML S3 integration: ```shell zenml integration install s3 -y ``` 3. Register S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name ``` - **GCP (GCS)** 1. Install Google Cloud CLI. 2. Install ZenML GCP integration: ```shell zenml integration install gcp -y ``` 3. Register GCS Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name ``` - **Azure** 1. Install Azure CLI. 2. Install ZenML Azure integration: ```shell zenml integration install azure -y ``` 3. Register Azure Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name ``` - **Other Providers** Remote artifact stores can be created using cloud-agnostic solutions like Minio or by implementing custom stack components. #### Configuring Permissions with Service Connectors Service connectors manage credentials for accessing cloud infrastructure. They provide temporary permissions to stack components. 
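Once a connector has been registered with one of the provider-specific commands below, it can be listed and verified before being attached to any stack component. For illustration (output varies by provider and auth method):

```shell
# List all registered service connectors and the resources they can access
zenml service-connector list

# Verify that the connector's credentials actually work
zenml service-connector verify cloud_connector
```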
- **AWS Service Connector** ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` - **GCP Service Connector** ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= --generate_temporary_tokens=False ``` - **Azure Service Connector** ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` After creating a service connector, connect it to the artifact store: ```shell zenml artifact-store connect cloud_artifact_store --connector cloud_connector ``` #### Running a Pipeline on a Cloud Stack 1. Register a new stack: ```shell zenml stack register local_with_remote_storage -o default -a cloud_artifact_store ``` 2. Set the stack active: ```shell zenml stack set local_with_remote_storage ``` 3. Run the training pipeline: ```shell python run.py --training-pipeline ``` Artifacts will be stored in the remote location, accessible for future runs and by team members. Users can list artifact versions: ```shell zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')" ``` ### Conclusion Transitioning to remote storage in ZenML is a crucial step for building a collaborative MLOps workflow, allowing artifacts to be shared across teams and enhancing scalability. ================================================== === File: docs/book/user-guide/production-guide/understand-stacks.md === # Summary of ZenML Documentation on Switching Infrastructure Backend ## Overview of Stacks - **Stack**: A configuration of tools and infrastructure for running ZenML pipelines. By default, pipelines run on the `default` stack. - **Separation of Code and Infrastructure**: ZenML allows users to switch environments without modifying code, enabling domain experts to work independently on code or infrastructure. ## Stack Management - **Active Stack**: The stack currently in use for running pipelines. Use `zenml stack describe` to view details of the active stack and `zenml stack list` to see all registered stacks. ### Stack Components 1. **Orchestrator**: Executes pipeline code, often as a Python thread. View orchestrators with `zenml orchestrator list`. 2. **Artifact Store**: Persists step outputs, which are not passed in memory. View artifact stores with `zenml artifact-store list`. 3. **Additional Components**: Include experiment trackers, model deployers, and container registries. ## Registering a Stack ### Create an Artifact Store ```bash zenml artifact-store register my_artifact_store --flavor=local ``` - **Command Breakdown**: - `artifact-store`: Top-level group for artifact stores. - `register`: Register a new component. - `my_artifact_store`: Unique name for the store. - `--flavor=local`: Specifies the implementation type. ### Create a New Stack ```bash zenml stack register a_new_local_stack -o default -a my_artifact_store ``` - **Command Breakdown**: - `stack`: CLI group for stack interactions. - `register`: Register a new stack. - `a_new_local_stack`: Unique name for the stack. - `-o` or `--orchestrator`: Specify orchestrator. - `-a` or `--artifact-store`: Specify artifact store. ## Switching Stacks - Use the ZenML VS Code extension to view and switch stacks easily. ## Running a Pipeline on the New Stack 1. Set the new stack as active: ```bash zenml stack set a_new_local_stack ``` 2. 
Run the pipeline: ```bash python run.py --training-pipeline ``` ## Important Commands - Export stack requirements: `zenml stack export-requirements ` - Describe a stack: `zenml stack describe ` - Describe an artifact store: `zenml artifact-store describe my_artifact_store` This summary captures the essential aspects of switching the infrastructure backend in ZenML, including stack management, component details, and commands for creating and using stacks. ================================================== === File: docs/book/user-guide/production-guide/configure-pipeline.md === ### Summary of ZenML Pipeline Configuration Documentation This documentation outlines how to configure a ZenML pipeline to add compute resources and manage dependencies using a YAML configuration file. #### Key Points: 1. **Pipeline Configuration**: - The pipeline is configured using a YAML file (`training_rf.yaml`), which specifies settings for Docker and model parameters. - The configuration is applied using the `with_options` method in the pipeline script. ```python pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml") training_pipeline_configured = training_pipeline.with_options(**pipeline_args) training_pipeline_configured() ``` 2. **YAML Configuration Breakdown**: - **Docker Settings**: ```yaml settings: docker: required_integrations: - sklearn requirements: - pyarrow ``` This section specifies required libraries for the Docker image. - **Model Association**: ```yaml model: name: breast_cancer_classifier version: rf license: Apache 2.0 description: A breast cancer classifier tags: ["breast_cancer", "classifier"] ``` Defines the model's metadata. - **Parameters**: ```yaml parameters: model_type: "rf" # Choose between rf/sgd ``` Specifies parameters expected by the pipeline. 3. **Scaling Compute Resources**: - To scale resources, add settings for memory and CPU in the YAML file: ```yaml settings: orchestrator: memory: 32 # in GB steps: model_trainer: settings: orchestrator: cpus: 8 ``` - For Microsoft Azure users using Kubernetes, the configuration differs slightly: ```yaml settings: resources: memory: "32GB" steps: model_trainer: settings: resources: memory: "8GB" ``` 4. **Running the Pipeline**: - Execute the pipeline with: ```bash python run.py --training-pipeline ``` 5. **Documentation Links**: - Additional resources and settings can be found in the ZenML documentation, including details on `ResourceSettings` and GPU attachment. This concise overview captures the essential aspects of configuring a ZenML pipeline, focusing on YAML settings for Docker, model association, parameters, and scaling compute resources. ================================================== === File: docs/book/user-guide/production-guide/deploying-zenml.md === ### Summary of Deploying ZenML Documentation **Overview**: Deploying ZenML is essential for transitioning from local development to a production environment, allowing team collaboration and centralized metadata management. #### Architecture - **Local Setup**: Initially, ZenML uses an SQLite database to store metadata (pipelines, models, artifacts). - **Production Setup**: Requires deploying a ZenML server externally for team collaboration. #### Deployment Options 1. **ZenML Pro Trial**: - Managed SaaS solution with one-click deployment. - Connect using: ```bash zenml login --pro ``` - Free trial available [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). 
- Offers additional features and a dashboard. 2. **Self-hosting**: - Open-source option to deploy ZenML on a Kubernetes cluster. - Create a cluster using cloud provider documentation: - [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) - [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) - [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) #### Connecting to a Deployed ZenML - Use the CLI to connect your local ZenML client to the server: ```bash zenml login ``` - This command initiates a browser-based validation process. - Once connected, all metadata will be centrally tracked. - To revert to local use, execute: ```bash zenml logout ``` #### Additional Resources - For more deployment options and guides, visit: - [Deploying ZenML](../../getting-started/deploying-zenml/README.md) - Full how-to guides for various deployment methods (Docker, Hugging Face Spaces, Kubernetes). This summary retains critical technical information and key points while ensuring clarity and conciseness. ================================================== === File: docs/book/user-guide/production-guide/connect-code-repository.md === ### Summary of ZenML Git Repository Integration Documentation **Overview**: Connecting a Git repository to ZenML enhances MLOps project collaboration, optimizes Docker builds, and improves code management. **Pipeline Execution Flow**: 1. Trigger a pipeline run locally. 2. ZenML parses the `@pipeline` function. 3. Local client requests stack info from ZenML server. 4. If a Git repository is detected, it checks for existing Docker images based on the Git commit hash. 5. The orchestrator sets up the execution environment in the cloud. 6. Code is downloaded from the Git repository, using the existing Docker image. 7. Pipeline steps execute, storing artifacts in a cloud-based store. 8. Run status and metadata are reported back to the ZenML server. **Benefits**: Avoids redundant builds, enhances team collaboration, and ensures correct code versions are used for each run. ### Creating a GitHub Repository 1. Sign in to [GitHub](https://github.com/). 2. Click "+" and select "New repository." 3. Name the repository, set visibility, and optionally add a README or .gitignore. 4. Click "Create repository." **Push Local Code to GitHub**: ```sh git init git add . git commit -m "Initial commit" git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git git push -u origin master ``` *Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` accordingly.* ### Linking GitHub to ZenML 1. **Get a GitHub Personal Access Token (PAT)**: - Go to GitHub settings > Developer settings > Personal access tokens. - Generate a new token with `contents` read-only access for the specific repository. 2. 
**Install GitHub Integration and Register Repository**: ```sh zenml integration install github zenml code-repository register --type=github \ --url=https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git \ --owner=YOUR_USERNAME --repository=YOUR_REPOSITORY_NAME \ --token=YOUR_GITHUB_PERSONAL_ACCESS_TOKEN ``` *Fill in ``, `YOUR_USERNAME`, `YOUR_REPOSITORY_NAME`, and `YOUR_GITHUB_PERSONAL_ACCESS_TOKEN`.* ### Running the Training Pipeline ```python # First run builds the Docker image python run.py --training-pipeline # Subsequent runs skip Docker building python run.py --training-pipeline ``` For more details, refer to the [ZenML Git Integration documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/production-guide/end-to-end.md === # End-to-End MLOps Project with ZenML This documentation outlines the steps to create an end-to-end MLOps project using ZenML, integrating various advanced concepts. ## Key Concepts Covered - Deploying ZenML - Abstracting infrastructure with stacks - Connecting remote storage - Cloud orchestration - Configuring scalable pipelines - Connecting a Git repository ## Getting Started 1. **Set up a virtual environment** and install dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 2. **Create a project using ZenML templates**: ```bash mkdir zenml_batch_e2e cd zenml_batch_e2e zenml init --template e2e_batch --template-with-defaults pip install -r requirements.txt ``` **Alternative**: Clone the e2e template from ZenML examples: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/e2e pip install -r requirements.txt zenml init ``` ## Learning Outcomes The e2e project template demonstrates core ZenML concepts for supervised ML with batch predictions, building on the starter project. Users are encouraged to run pipelines on a remote cloud stack and a tracked Git repository to reinforce learned concepts. ## Conclusion This guide equips you with the knowledge to implement an end-to-end MLOps project using ZenML. For further learning, explore the advanced concepts in the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md). Good luck with your MLOps journey! ================================================== === File: docs/book/user-guide/production-guide/cloud-orchestration.md === # Orchestrate on the Cloud with ZenML This documentation covers transitioning MLOps pipelines from local execution to the cloud using ZenML, focusing on two key components: the orchestrator and the container registry. ## Key Components - **Orchestrator**: Manages workflow and execution of pipelines. - **Container Registry**: Stores Docker container images. These components, along with remote storage, form a basic cloud stack for running pipelines. ## Basic Cloud Stack Setup The recommended starting orchestrator is **Skypilot**, which provisions a VM on a public cloud. ZenML utilizes **Docker** to package code and dependencies into images that are pushed to the container registry. ### Sequence of Events When Running a Pipeline 1. User runs a pipeline on the client machine, executing `run.py`. 2. The client retrieves stack info from the server. 3. The client builds and pushes an image to the container registry. 4. The client creates a run in the orchestrator, provisioning a VM. 5. The orchestrator pulls the image from the container registry. 6. Artifacts are stored in the artifact store (cloud storage). 7. 
The pipeline reports status back to the ZenML server. ## Provisioning and Registering Components ### AWS Setup 1. Install integrations: ```shell zenml integration install aws skypilot_aws -y ``` 2. Register service connector: ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` 3. Register orchestrator: ```shell zenml orchestrator register cloud_orchestrator -f vm_aws zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register container registry: ```shell zenml container-registry register cloud_container_registry -f aws --uri=.dkr.ecr..amazonaws.com zenml container-registry connect cloud_container_registry --connector cloud_connector ``` ### GCP Setup 1. Install integrations: ```shell zenml integration install gcp skypilot_gcp -y ``` 2. Register service connector: ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= ``` 3. Register orchestrator: ```shell zenml orchestrator register cloud_orchestrator -f vm_gcp zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register container registry: ```shell zenml container-registry register cloud_container_registry -f gcp --uri=gcr.io/ zenml container-registry connect cloud_container_registry --connector cloud_connector ``` ### Azure Setup Due to compatibility issues, Azure users should use the Kubernetes orchestrator: 1. Install integrations: ```shell zenml integration install azure kubernetes -y ``` 2. Register service connector: ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` 3. Register orchestrator: ```shell zenml orchestrator register cloud_orchestrator --flavor kubernetes zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register container registry: ```shell zenml container-registry register cloud_container_registry -f azure --uri=.azurecr.io zenml container-registry connect cloud_container_registry --connector cloud_connector ``` ## Running a Pipeline After registering components, register a new stack: ```shell zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry ``` Set the stack active: ```shell zenml stack set minimal_cloud_stack ``` Run the training pipeline: ```shell python run.py --training-pipeline ``` The pipeline will build a Docker image, push it, and execute on the cloud VM, streaming logs back to the client. For further exploration, refer to the [Component Guide](../../component-guide/README.md) for various integrated components. ================================================== === File: docs/book/user-guide/production-guide/README.md === # Production Guide Summary The ZenML production guide is an advanced resource for MLOps Engineers, building on the Starter guide. It is designed for ML practitioners looking to implement proof of concepts in their workplaces. ## Key Focus Areas: - Transitioning from local pipeline execution to cloud production. - Topics covered include: - **Deploying ZenML**: Instructions for setting up ZenML in a production environment. - **Understanding Stacks**: Overview of ZenML stacks and their components. - **Connecting Remote Storage**: Guidelines for integrating cloud storage solutions. - **Orchestrating on the Cloud**: Best practices for managing cloud-based orchestration.
- **Configuring the Pipeline for Scalability**: Techniques for scaling compute resources. - **Code Repository Configuration**: Steps to connect a code repository for version control. ## Prerequisites: - A Python environment with `virtualenv` installed. - A major cloud provider (AWS, GCP, Azure) selected, with respective CLIs installed and authorized. By following this guide, users will complete an end-to-end MLOps project, serving as a model for future implementations. ================================================== === File: docs/book/user-guide/llmops-guide/README.md === # ZenML LLMOps Guide Summary The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps workflows, aimed at ML practitioners and MLOps engineers. Key topics include: - **RAG with ZenML**: Understanding and implementing Retrieval-Augmented Generation (RAG). - **Data Handling**: Ingestion, preprocessing, and generating embeddings. - **Vector Database**: Storing embeddings effectively. - **Inference Pipeline**: Building a basic RAG inference pipeline. - **Evaluation**: Metrics for retrieval and generation, including practical evaluation methods. - **Reranking**: Techniques for improving retrieval results and evaluating reranking performance. - **Finetuning**: Strategies for finetuning embeddings and LLMs, including synthetic data generation and using Sentence Transformers. - **Deployment**: Steps for deploying finetuned models. The guide emphasizes a practical application—a question answering system for ZenML—demonstrating the transition from a simple RAG pipeline to advanced techniques like embedding finetuning and document reranking. ### Prerequisites - Python environment with ZenML installed. - Familiarity with the concepts in the Starter and Production Guides. By the end of the guide, users will understand how to effectively leverage LLMs in MLOps workflows, enabling the creation of scalable and maintainable applications. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings-with-sentence-transformers.md === ### Summary: Finetuning Embeddings with Sentence Transformers This documentation outlines the process of finetuning embeddings using the Sentence Transformers library within a ZenML pipeline. #### Key Steps in the Pipeline: 1. **Data Loading**: - Load data from Hugging Face or Argilla by using the `--argilla` flag: ```bash python run.py --embeddings --argilla ``` 2. **Finetuning Process**: - **Model Loading**: Load the base model (`EMBEDDINGS_MODEL_ID_BASELINE`) using Sentence Transformers with efficient training via Flash Attention 2. - **Loss Function**: Use `MatryoshkaLoss`, a wrapper around `MultipleNegativesRankingLoss`, allowing simultaneous training on different embedding dimensions. - **Dataset Preparation**: Load the training dataset from a specified path using Hugging Face's `load_dataset` function. - **Evaluator**: Create an evaluator with `get_evaluator` to assess model performance during training. - **Training Arguments**: Set hyperparameters (epochs, batch size, learning rate, etc.) using `SentenceTransformerTrainingArguments`. - **Trainer Initialization**: Initialize `SentenceTransformerTrainer` with the model, training arguments, dataset, and loss function, then call `trainer.train()` to start training. - **Model Saving**: Save the finetuned model to Hugging Face Hub with `trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED)`. 
- **Metadata Logging**: Log training metadata including parameters and hardware details. - **Model Rehydration**: Save and reload the trained model to handle materialization errors. #### Simplified Code Snippet: ```python # Load the base model model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) # Define the loss function train_loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model)) # Prepare the training dataset train_dataset = load_dataset("json", data_files=train_dataset_path) # Set up the training arguments args = SentenceTransformerTrainingArguments(...) # Create the trainer trainer = SentenceTransformerTrainer(model, args, train_dataset, train_loss) # Start training trainer.train() # Save the finetuned model trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` The finetuning process enhances model performance across various embedding sizes and ensures the model is versioned and tracked within ZenML for observability. After training, the pipeline evaluates and visualizes the results of both base and finetuned embeddings. For further details, refer to the [latest ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-generation.md === ### Summary of Synthetic Data Generation with Distilabel This documentation outlines the process of generating synthetic data using the `distilabel` library to fine-tune embeddings for a dataset of technical documentation. It leverages a previously created dataset from Hugging Face and employs LLMs to automate question generation for each content chunk. #### Key Components: 1. **Dataset Overview**: - The dataset consists of `page_content` and source URLs. - The goal is to pair `page_content` with generated questions. 2. **Pipeline Overview**: - Load the Hugging Face dataset. - Use `distilabel` to generate synthetic data. - Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. 3. **Synthetic Data Generation**: - `distilabel` allows scalable knowledge distillation from LLMs. - The pipeline setup includes: - Loading data from Hugging Face. - Generating sentence pairs (queries) using `GenerateSentencePair`. - The LLM used is `gpt-4o`, but other models can be utilized. #### Code Snippet for Synthetic Query Generation: ```python import os from typing import Annotated, Tuple import distilabel from datasets import Dataset from distilabel.llms import OpenAILLM from distilabel.steps import LoadDataFromHub from distilabel.steps.tasks import GenerateSentencePair from zenml import step synthetic_generation_context = "The text is a chunk from technical documentation of ZenML." 
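# The step below wraps a distilabel pipeline: documentation chunks are loaded from the Hub
# (mapped to the "anchor" column) and the LLM generates query triplets for both the train
# and test splits.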
@step def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> Tuple[Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"]]: llm = OpenAILLM(model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY")) with distilabel.pipeline.Pipeline(name="generate_embedding_queries") as pipeline: load_dataset = LoadDataFromHub(output_mappings={"page_content": "anchor"}) generate_sentence_pair = GenerateSentencePair(triplet=True, action="query", llm=llm, input_batch_size=10, context=synthetic_generation_context) load_dataset >> generate_sentence_pair train_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "train"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) test_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "test"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) return train_distiset["default"]["train"], test_distiset["default"]["train"] ``` 4. **Data Annotation with Argilla**: - After generating synthetic data, it is pushed to Argilla for inspection. - Additional metadata includes: - `parent_section`, `token_count`, and cosine similarities between query types. - The embeddings for the anchor column are generated using a specified model. #### Code Snippet for Formatting Data: ```python def format_data(batch): model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") def get_embeddings(batch_column): return [vector.tolist() for vector in model.encode(batch_column)] batch["anchor-vector"] = get_embeddings(batch["anchor"]) batch["question-vector"] = get_embeddings(batch["anchor"]) batch["positive-vector"] = get_embeddings(batch["positive"]) batch["negative-vector"] = get_embeddings(batch["negative"]) def get_similarities(a, b): return [cosine_similarity([pos_vec], [neg_vec])[0][0] for pos_vec, neg_vec in zip(a, b)] batch["similarity-positive-negative"] = get_similarities(batch["positive-vector"], batch["negative-vector"]) batch["similarity-anchor-positive"] = get_similarities(batch["anchor-vector"], batch["positive-vector"]) batch["similarity-anchor-negative"] = get_similarities(batch["anchor-vector"], batch["negative-vector"]) return batch ``` 5. **Next Steps**: - After data inspection and potential cleaning, the focus will shift to fine-tuning the embeddings using the generated dataset, assuming quality is adequate. This summary encapsulates the essential steps and code snippets for generating synthetic data with `distilabel`, ensuring that critical information is retained for understanding and implementation. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetuned-embeddings.md === ### Summary of Documentation on Evaluating Finetuned Embeddings This documentation outlines the process of evaluating finetuned embeddings and comparing them to original base embeddings using ZenML. The evaluation utilizes the same MatryoshkaLoss function and involves the following key steps: 1. **Model Evaluation Function**: - The `evaluate_model` function takes a dataset and a model, returning evaluation results as a dictionary of metrics. - The `evaluate_base_model` function initializes the base model using `SentenceTransformer`, evaluates it on the dataset, and logs the results as model metadata. 
```python from zenml import log_model_metadata, step def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: evaluator = get_evaluator(dataset=dataset, model=model) return evaluator(model) @step def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") results = evaluate_model(dataset=dataset, model=model) base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} log_model_metadata(metadata={"base_model_eval": base_model_eval}) return results ``` 2. **Logging and Versioning**: - Evaluation results are logged in ZenML and versioned for tracking. The results can be inspected in the Model Control Plane. 3. **Visualization**: - Results can be visualized using `matplotlib`, allowing for easy comparison between base and finetuned model evaluations. The visualization shows improvements in recall across dimensions. 4. **Model Control Plane**: - The Model Control Plane serves as a unified interface to inspect artifacts, models, metadata, and pipeline runs. It provides insights into the latest versions and evaluation metrics. 5. **Next Steps**: - After evaluating the embeddings, users can integrate them into the original RAG pipeline and perform further evaluations. The documentation also references upcoming sections on LLM finetuning and deployment, with links to relevant projects and guides. This concise overview captures the essential technical details and processes involved in evaluating finetuned embeddings using ZenML, ensuring that critical information is retained for further exploration or implementation. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md === ### Summary of Documentation on Finetuning Embeddings **Objective**: Enhance retrieval performance by finetuning embeddings on custom synthetic data. **Context**: This documentation is part of an older version of ZenML. For the latest version, refer to [ZenML documentation](https://docs.zenml.io). **Overview**: The guide focuses on optimizing embedding models using synthetic data generation and human feedback. While off-the-shelf embeddings provide a baseline, finetuning on domain-specific data can significantly improve performance in retrieval-augmented generation (RAG) pipelines. **RAG Pipeline**: The process involves retrieving relevant documents from a vector database and generating responses using a language model. Finetuning embeddings on a dataset of technical documentation enhances the retrieval step and overall pipeline performance. **Steps Involved**: 1. **Generate Synthetic Data**: Use `distilabel` for synthetic data generation. 2. **Finetune Embeddings**: Utilize Sentence Transformers for embedding finetuning. 3. **Evaluate Finetuned Embeddings**: Leverage ZenML's model control plane for systematic evaluation. **Libraries Used**: - **`distilabel`**: Generates synthetic data and provides AI feedback, focusing on scalable knowledge distillation from LLMs. - **`argilla`**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration. Both libraries can function independently but are more effective when used together within ZenML pipelines. 
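At a high level, the three steps listed above (synthetic data generation, finetuning, evaluation) are meant to be composed into a single ZenML pipeline. The sketch below is purely illustrative: step names, signatures, and return values are placeholders rather than the ones used in the llm-complete-guide project:

```python
from typing import Tuple

from typing_extensions import Annotated
from zenml import pipeline, step


@step
def generate_synthetic_data() -> Tuple[
    Annotated[list, "train_queries"], Annotated[list, "test_queries"]
]:
    # Placeholder for the distilabel-based synthetic query generation step
    return ["train example"], ["test example"]


@step
def finetune_embeddings(train_queries: list) -> Annotated[str, "finetuned_model_id"]:
    # Placeholder for the Sentence Transformers finetuning step
    return "my-org/finetuned-embeddings"


@step
def evaluate_embeddings(model_id: str, test_queries: list) -> Annotated[dict, "evaluation_results"]:
    # Placeholder for the evaluation step tracked via the Model Control Plane
    return {"dim_384_cosine_ndcg@10": 0.0}


@pipeline
def finetune_embeddings_pipeline():
    train_queries, test_queries = generate_synthetic_data()
    model_id = finetune_embeddings(train_queries=train_queries)
    evaluate_embeddings(model_id=model_id, test_queries=test_queries)


if __name__ == "__main__":
    finetune_embeddings_pipeline()
```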
**Code and Resources**: For practical implementation, refer to the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) for complete code examples. The finetuning process can be executed locally or on cloud compute. **Note**: This section is designed to provide a comprehensive understanding of the finetuning process while maintaining technical accuracy and clarity. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === ### Summary of LLM Finetuning Documentation **Overview**: This documentation focuses on finetuning Large Language Models (LLMs) for specific tasks or to enhance performance and cost-effectiveness. It is part of the ZenML framework and discusses scenarios where finetuning is beneficial, especially in conjunction with Retrieval-Augmented Generation (RAG) systems. **Key Points**: - **Purpose of Finetuning**: - Improve response generation in specific formats. - Enhance understanding of domain-specific terminology. - Reduce prompt length for consistent outputs. - Follow specific patterns or protocols efficiently. - Optimize latency by minimizing context window size. - **Guide Structure**: The guide includes the following sections: - [Finetuning in 100 lines of code](finetuning-100-loc.md) - [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) - [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) - [Finetuning with 🤗 Accelerate](finetuning-with-accelerate.md) - [Evaluation for finetuning](evaluation-for-finetuning.md) - [Deploying finetuned models](deploying-finetuned-models.md) - [Next steps](next-steps.md) - **Finetuning Process**: The steps to finetune an LLM are straightforward. Understanding when to finetune, evaluating performance, and selecting appropriate data are crucial. - **Example Repository**: For practical implementation, refer to the [`llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning), which contains the full code. This code can be executed locally (with a GPU) or on cloud platforms. **Note**: This documentation is an older version; for the latest updates, visit the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md === ### Summary: Fine-tuning an LLM with ZenML This documentation outlines a concise implementation of a fine-tuning pipeline for a language model (LLM) in 100 lines of code, specifically using the TinyLlama model. Key components include: #### Key Steps: 1. **Dataset Preparation**: A small instruction-tuning dataset is created with input-output pairs: - Instructions and corresponding responses about "ZenML World" entities. 2. **Data Formatting and Tokenization**: - Each example is formatted into a structured prompt: ``` ### Instruction: [user query] ### Response: [desired response] ``` - Tokenization is performed with a maximum length of 128 tokens. 3. **Model Selection**: - The base model used is `TinyLlama/TinyLlama-1.1B-Chat-v1.0`, chosen for its small size and pre-training for chat tasks. 4. **Training Configuration**: - Training parameters include: - 3 epochs - Batch size of 1 with gradient accumulation of 4 - Learning rate of 2e-4 - Mixed precision (bfloat16) - Logging every 10 steps 5. 
**Response Generation**: - The fine-tuned model generates responses using a temperature of 0.7 and a maximum length of 128 tokens. #### Code Snippet: ```python import os from typing import List, Dict from datasets import Dataset from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer, DataCollatorForLanguageModeling import torch def prepare_dataset() -> Dataset: data = [ {"instruction": "Describe a Zenbot.", "response": "A Zenbot is a luminescent robotic entity..."}, {"instruction": "What are Cosmic Butterflies?", "response": "Cosmic Butterflies are ethereal creatures..."}, {"instruction": "Tell me about the Telepathic Treants.", "response": "Telepathic Treants are ancient, sentient trees..."} ] return Dataset.from_list(data) def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: formatted_text = f"### Instruction: {example['instruction']}\n### Response: {example['response']}" return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"): tokenizer = AutoTokenizer.from_pretrained(base_model) tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto") dataset = prepare_dataset() tokenized_dataset = dataset.map(lambda x: tokenize_data(x, tokenizer), remove_columns=dataset.column_names) training_args = TrainingArguments( output_dir="./zenml-world-model", num_train_epochs=3, per_device_train_batch_size=1, gradient_accumulation_steps=4, learning_rate=2e-4, bf16=True, logging_steps=10, save_total_limit=2 ) trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_dataset, data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)) trainer.train() return model, tokenizer def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer) -> str: inputs = tokenizer(f"### Instruction: {prompt}\n### Response:", return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=128, temperature=0.7) return tokenizer.decode(outputs[0], skip_special_tokens=True) if __name__ == "__main__": model, tokenizer = fine_tune_model() test_prompts = ["What is a Zenbot?", "Describe the Cosmic Butterflies.", "Tell me about an unknown creature."] for prompt in test_prompts: print(f"\nPrompt: {prompt}\nResponse: {generate_response(prompt, model, tokenizer)}") ``` #### Limitations: - The dataset is small, which may lead to poor response quality. - Larger models could yield better results but require more resources. - Minimal training epochs and simple learning rates are used for demonstration. #### Next Steps: The documentation suggests exploring more robust fine-tuning techniques, including larger datasets, evaluation metrics, and model deployment strategies. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune-llms.md === # Finetuning LLMs: When and Why ## Overview This guide provides a practical overview of finetuning large language models (LLMs) on custom data. It emphasizes that finetuning is not a universal solution and may introduce technical debt. Alternative uses for LLMs beyond chatbots are highlighted, and finetuning should be considered after exploring other approaches. ## When to Finetune an LLM Finetuning is beneficial in specific scenarios: 1. 
**Domain-Specific Knowledge**: Necessary for deep understanding in specialized fields (e.g., medical, legal). 2. **Consistent Style/Format**: Required for outputs in specific styles, such as code generation. 3. **Improved Task Accuracy**: Needed for tasks critical to your application. 4. **Handling Proprietary Information**: Essential for confidential data that cannot be sent externally. 5. **Custom Instructions**: Repeated prompts can be integrated into the model to save on latency and costs. 6. **Improved Efficiency**: Can enhance performance with shorter prompts. ### Decision Flowchart ```mermaid flowchart TD A[Should I finetune an LLM?] --> B{Is prompt engineering sufficient?} B -->|Yes| C[Use prompt engineering] B -->|No| D{Is it primarily a knowledge retrieval problem?} D -->|Yes| E{Is real-time data access needed?} E -->|Yes| F[Use RAG] E -->|No| G{Is data volume very large?} G -->|Yes| H[Consider hybrid: RAG + Finetuning] G -->|No| F D -->|No| I{Is it a narrow, specific task?} I -->|Yes| J{Can a smaller specialized model handle it?} J -->|Yes| K[Use smaller model] J -->|No| L[Consider finetuning] I -->|No| M{Do you need consistent style or format?} M -->|Yes| L M -->|No| N{Is deep domain expertise required?} N -->|Yes| O{Is the domain well-represented in base model?} O -->|Yes| P[Use base model] O -->|No| L N -->|No| Q{Is data proprietary/sensitive?} Q -->|Yes| R{Can you use API solutions?} R -->|Yes| S[Use API solutions] R -->|No| L Q -->|No| S ``` ## Alternatives to Finetuning Before opting for finetuning, consider: - **Prompt Engineering**: Often sufficient for good results. - **Retrieval-Augmented Generation (RAG)**: Effective for specific knowledge bases. - **Smaller Task-Specific Models**: May outperform finetuned LLMs for narrow tasks. - **API-Based Solutions**: Simpler and cost-effective if sensitive data handling is unnecessary. ## Conclusion Finetuning LLMs can be powerful but should be approached carefully. Start with simpler solutions and consider finetuning only after exhausting alternatives and identifying clear benefits. The next section will cover practical considerations for finetuning LLMs. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md === ### Summary: Getting Started with Finetuning LLMs This guide provides a high-level overview of the initial steps for finetuning large language models (LLMs), focusing on selecting a use case, gathering data, choosing a base model, and evaluating success. #### Quick Assessment Questions Before starting, consider: 1. **Define Success**: Use measurable metrics (e.g., "95% accuracy in extracting order IDs"). 2. **Data Readiness**: Ensure data is prepared (e.g., "1000 labeled support tickets"). 3. **Task Consistency**: Aim for specific tasks (e.g., "Convert email to 5 specific fields"). 4. **Human Verification**: Ensure correctness can be verified (e.g., "Check if extracted date matches document"). #### Picking a Use Case Choose a small, manageable task that cannot be easily solved by non-LLM methods. For example, "triage customer support queries" is more specific than "answer all customer support emails." Ensure you can quickly evaluate the effectiveness of the approach. #### Picking Data Select data that closely aligns with your use case to minimize the need for extensive annotation. Aim for hundreds to thousands of examples. 
**Good Use Cases**: - **Structured Data Extraction**: Extracting order details from emails (500-1000 annotated emails). - **Domain-Specific Classification**: Categorizing support tickets (1000+ labeled examples). - **Standardized Response Generation**: Generating responses from documentation (500+ pairs). **Challenging Use Cases**: - **Open-ended Chat**: Hard to measure success; consider alternative methods. - **Creative Writing**: Subjective quality; focus on specific formats. #### Success Indicators Evaluate your use case using indicators: - **Task Scope**: Specific tasks are better than vague ones. - **Output Format**: Structured outputs are preferable. - **Data Availability**: Ensure sufficient examples exist. - **Evaluation Method**: Use clear metrics rather than subjective feedback. - **Business Impact**: Define tangible benefits. #### Picking a Base Model Choose a model based on your task requirements: - **Llama 3.1 8B**: Best for structured data extraction and classification (16GB GPU RAM). - **Llama 3.1 70B**: Suitable for complex reasoning (80GB GPU RAM). - **Mistral 7B**: Good for general text generation (16GB GPU RAM). - **Phi-2**: Ideal for lightweight tasks and rapid prototyping (8GB GPU RAM). #### Evaluation of Success Define clear metrics for success, especially for structured data extraction. Metrics may include: - Accuracy of extracted fields. - Precision and recall for specific field types. - Processing time per document. #### Next Steps With a clear understanding of scoping, data selection, and evaluation, proceed to practical implementation in the next section, which covers finetuning using the Accelerate library. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md === # Summary of LLM Finetuning Evaluations Documentation ## Overview Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help ensure models behave as expected, catch issues early, and track progress over time. An incremental approach to building evaluation sets is recommended to avoid paralysis and facilitate early implementation. ## Motivation and Benefits Key motivations for thorough evals include: 1. **Prevent Regressions**: Ensure new changes do not harm existing functionality. 2. **Track Improvements**: Quantify and visualize model enhancements. 3. **Ensure Safety and Robustness**: Identify and mitigate risks, biases, or unexpected behaviors. A robust evaluation strategy leads to more reliable and performant finetuned LLMs. ## Types of Evaluations While generic evaluation frameworks are common, custom evaluations tailored to specific use cases are also important. Custom evals can be categorized into: 1. **Success Modes**: Focus on desired outputs (e.g., correct formatting, appropriate responses). 2. **Failure Modes**: Target undesired outputs (e.g., hallucinations, incorrect formats). 
### Example Code for Custom Evals ```python from my_library import query_llm good_responses = { "what are the best salads available at the food court?": ["caesar", "italian"], "how late is the shopping center open until?": ["10pm", "22:00", "ten"] } for question, answers in good_responses.items(): assert any(answer in query_llm(question) for answer in answers) bad_responses = { "who is the manager of the shopping center?": ["tom hanks", "spiderman"] } for question, answers in bad_responses.items(): assert not any(answer in query_llm(question) for answer in answers) ``` ## Generalized Evals and Frameworks Generalized evals provide structured evaluation approaches, including: - Organizing evals - Standardized metrics - Insights into model performance Examples of frameworks include: - [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate) - [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html) - [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html) - [langcheck](https://github.com/citadel-ai/langcheck) - [nervaluate](https://github.com/MantisAI/nervaluate) Integrating these frameworks into pipelines, such as in the `llm-lora-finetuning` project, is straightforward. ## Data and Tracking Regular analysis of inference data is crucial for identifying patterns and areas for improvement. Implement comprehensive logging early on to track model behavior and performance. Recommended tools for data collection and analysis include: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) - [langsmith](https://smith.langchain.com/) - [langfuse](https://langfuse.com/) - [braintrust](https://www.braintrust.dev/) Creating simple dashboards to visualize core metrics can help monitor progress and assess the impact of changes. Prioritize simplicity over perfection in initial implementations. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/next-steps.md === # Next Steps After iterating on your finetuned model, consider the following key areas: - **Model Improvement**: Identify what enhances or detracts from model performance. - **Model Size Limits**: Determine the smallest viable model size. - **Process Alignment**: Ensure iteration time aligns with company processes and hardware limitations. - **Business Use Case**: Confirm the model effectively addresses the intended business problem. Next steps may involve: - **Scaling**: Addressing increased user demand or real-time requirements. - **Accuracy**: Fine-tuning larger models to meet critical accuracy needs. - **Production Integration**: Incorporating monitoring, logging, and evaluation into your business systems. While it may be tempting to switch to larger models, focus on improving your data quality first, especially if starting with limited examples. Consider enhancing your dataset through a flywheel approach or synthetic data generation before upgrading your model. ## Resources Recommended resources for LLM finetuning: - **[Mastering LLMs Course](https://parlance-labs.com/education/)**: Video course by Hamel Husain and Dan Becker. - **[Phil Schmid's Blog](https://www.philschmid.de/)**: Offers worked examples of LLM finetuning. - **[Sam Witteveen's YouTube Channel](https://www.youtube.com/@samwitteveenai)**: Covers topics from finetuning to prompt engineering with practical examples. 
================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/deploying-finetuned-models.md === # Deployment Options for Finetuned LLMs Deploying a finetuned LLM is essential for integrating your model into real-world applications. This process requires careful planning to ensure performance, reliability, and cost-effectiveness. ## Deployment Considerations Key factors influencing deployment include: - **Resource Requirements**: LLMs demand significant RAM and processing power. Choose hardware that balances performance and cost based on your use case. - **Real-Time Needs**: Consider latency, failover scenarios, and load testing to prepare for user demand. - **Streaming vs. Non-Streaming**: Each approach has trade-offs regarding latency and resource usage. - **Optimization Techniques**: Methods like quantization can reduce resource usage but may affect performance, necessitating rigorous evaluation. ## Deployment Options and Trade-offs 1. **Roll Your Own**: Set up and manage your infrastructure for maximum control, typically using Docker (e.g., FastAPI). 2. **Serverless Options**: Scalable and cost-efficient, but may suffer from cold start latency. 3. **Always-On Options**: Constantly running models minimize latency but incur higher costs. 4. **Fully Managed Solutions**: Simplify deployment but may offer less flexibility and higher costs. Consider your team's expertise, budget, expected load, and specific requirements when selecting an option. ## Deployment with vLLM and ZenML [vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML provides a [vLLM integration](../../../component-guide/model-deployers/vllm.md) for easy deployment. ### Code Example ```python from zenml import pipeline from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> VLLMDeploymentService: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` The `model` argument can be a local path or a Hugging Face Hub ID, deploying the model locally for batch inference via an OpenAI-compatible API. ## Cloud-Specific Deployment Options - **AWS**: Use Amazon SageMaker for managed LLM deployment, AWS Lambda with API Gateway for serverless, or ECS/EKS with Fargate for more control. - **GCP**: Google Cloud AI Platform offers managed services similar to SageMaker, while Cloud Run provides a serverless option. GKE is suitable for containerized models. ## Architectures for Real-Time Engagement To engage customers in real-time, consider: - **Load Balancing**: Deploy multiple instances behind a load balancer with auto-scaling. - **Caching**: Use Redis to store frequent responses and reduce model load. - **Asynchronous Processing**: Implement message queues (e.g., SQS, Pub/Sub) for complex queries. - **Edge Computing**: Utilize services like AWS Lambda@Edge for reduced latency. ## Reducing Latency and Increasing Throughput Optimize for low latency and high throughput by: - **Model Optimization**: Use quantization and distillation to reduce model size and inference time. - **Hardware Acceleration**: Leverage GPU instances for faster processing. - **Request Batching**: Process multiple inputs in one forward pass. - **Monitoring and Profiling**: Continuously measure and optimize your inference pipeline. 
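As a hedged illustration of the request-batching point above, the sketch below packs several prompts into a single `generate` call with Hugging Face `transformers`; the model ID and generation settings are assumptions for demonstration only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_batched(prompts, model_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0", max_new_tokens=64):
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "left"  # left-pad so generation continues from each prompt's end
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # One forward pass over the whole batch instead of one call per prompt.
    inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```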
## Monitoring and Maintenance Post-deployment, focus on: 1. **Evaluation Failures**: Regularly assess model performance. 2. **Latency Metrics**: Monitor response times. 3. **Load Patterns**: Analyze user interactions for scaling and optimization. 4. **Data Analysis**: Review inputs/outputs for trends and biases. Ensure compliance with privacy regulations in your logging practices. By implementing these strategies, you can maintain optimal performance for your finetuned LLM. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md === # Finetuning an LLM with Accelerate and PEFT This documentation outlines the process of finetuning a language model (LLM) using the Viggo dataset, which contains over 5,000 pairs of structured meaning representations and their corresponding natural language descriptions for video game dialogues. ## Finetuning Pipeline The finetuning pipeline consists of the following steps: 1. **prepare_data**: Load and preprocess the Viggo dataset. 2. **finetune**: Finetune the model on the dataset. 3. **evaluate_base**: Evaluate the base model before finetuning. 4. **evaluate_finetuned**: Evaluate the finetuned model. 5. **promote**: Promote the best model to "staging" in the Model Control Plane. For initial experiments, it is recommended to start with smaller models (e.g., Llama 3.1 family at ~8B parameters) to facilitate quick iterations. ## Implementation Details The `prepare_data` step loads data from the Hugging Face hub and tokenizes it. Care should be taken with input data formatting, especially for instruction-tuned models. Logging inputs and outputs is advised. Finetuning utilizes the `accelerate` library for multi-GPU support. The core finetuning code is as follows: ```python model = load_base_model(base_model_id, use_accelerate=use_accelerate) trainer = transformers.Trainer( model=model, train_dataset=tokenized_train_dataset, eval_dataset=tokenized_val_dataset, args=transformers.TrainingArguments( output_dir=output_dir, per_device_train_batch_size=per_device_train_batch_size, learning_rate=lr, logging_dir="./logs", evaluation_strategy="steps", do_eval=True, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), callbacks=[ZenMLCallback(accelerator=accelerator)], ) ``` ### Evaluation Metrics The evaluation uses the `evaluate` library to compute ROUGE scores: - **ROUGE-N**: n-gram overlap. - **ROUGE-L**: Longest Common Subsequence. - **ROUGE-W**: Weighted Longest Common Subsequence. - **ROUGE-S**: Skip-bigram statistics. These metrics help assess the quality of generated text. ## Using the ZenML Accelerate Decorator ZenML provides the `@run_with_accelerate` decorator for easier distributed training setup: ```python from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True) @step def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id: str, output_dir: str): model = load_base_model(base_model_id, use_accelerate=True) trainer = transformers.Trainer( # ... 
trainer setup as shown above ) trainer.train() return trainer.model ``` ### Docker Configuration Ensure your Docker environment is configured with CUDA support and necessary dependencies: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) @pipeline(settings={"docker": docker_settings}) def finetuning_pipeline(...): # Your pipeline steps here ``` ## Dataset Iteration Careful attention to input data is crucial. Poorly formatted data can lead to degraded model performance. Regular inspection of data at all stages is recommended. Consider augmenting or synthetically generating data if needed. As you progress, focus on evaluations and optimal parameters to measure model performance. Consider how to effectively serve the model and integrate it into existing architectures. Strive for smaller models that meet your use case requirements, as they often yield better outcomes. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md === ### Summary: Ingesting and Preprocessing Data for RAG Pipelines with ZenML This documentation outlines the process of ingesting and preprocessing data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. #### Data Ingestion 1. **Purpose**: The initial step involves ingesting data (documents and metadata) for training retriever and generator models. 2. **Integration**: ZenML integrates with various tools for managing data ingestion, including downloading, preprocessing, and indexing documents. 3. **URL Scraping**: A ZenML step can be created to scrape relevant URLs from ZenML documentation: ```python from typing import List from typing_extensions import Annotated from zenml import log_artifact_metadata, step from steps.url_scraping_utils import get_all_pages @step def url_scraper(docs_url: str = "https://docs.zenml.io") -> Annotated[List[str], "urls"]: docs_urls = get_all_pages(docs_url) log_artifact_metadata({"count": len(docs_urls)}) return docs_urls ``` - The `get_all_pages` function retrieves unique URLs from the documentation, focusing on the latest releases. 4. **Document Loading**: The `unstructured` library is used to load and parse HTML pages: ```python from typing import List from unstructured.partition.html import partition_html from zenml import step @step def web_url_loader(urls: List[str]) -> List[str]: document_texts = [] for url in urls: elements = partition_html(url=url) document_texts.append("\n\n".join(map(str, elements))) return document_texts ``` #### Data Preprocessing 1. **Chunking Strategy**: After loading documents, they need to be split into smaller chunks for efficient processing. The chunk size is critical for balancing retrieval effectiveness and LLM processing speed. 
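The `split_documents` helper used in the step below comes from the project's `utils/llm_utils.py` and is not shown in this excerpt; purely as an illustration, a character-based chunker with overlap might look like this (the actual implementation may differ):

```python
from typing import List

def split_documents(documents: List[str], chunk_size: int = 500, chunk_overlap: int = 50) -> List[str]:
    chunks: List[str] = []
    stride = chunk_size - chunk_overlap
    for text in documents:
        for start in range(0, max(len(text), 1), stride):
            # Consecutive chunks share `chunk_overlap` characters of context.
            chunk = text[start : start + chunk_size]
            if chunk:
                chunks.append(chunk)
    return chunks
```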
```python import logging from typing import Annotated, List from utils.llm_utils import split_documents from zenml import ArtifactConfig, log_artifact_metadata, step logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) @step(enable_cache=False) def preprocess_documents(documents: List[str]) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]: try: log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50}) return split_documents(documents, chunk_size=500, chunk_overlap=50) except Exception as e: logger.error(f"Error in preprocess_documents: {e}") raise ``` - The example uses a chunk size of 500 with a 50-character overlap to ensure important information is retained across chunks. #### Additional Considerations - Depending on the data structure, chunk sizes may vary. Larger chunks may be necessary for complex concepts, while smaller chunks may suit conversational data. - Further preprocessing may include text cleaning, handling code snippets, and metadata extraction. For complete code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and the specific [steps code](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md === # Simple RAG Inference Summary This documentation outlines the process of using RAG (Retrieval-Augmented Generation) components to generate responses based on indexed documents without requiring external libraries beyond the LLM interface and index store. ## Inference Query To run a query against the index store, use the following command: ```bash python run.py --rag-query "how do I use a custom materializer inside my own zenml steps? i.e. how do I set it? inside the @step decorator?" --model=gpt4 ``` ## Inference Pipeline Code The inference pipeline consists of the following key function: ```python def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) system_message = """You are a friendly chatbot. You can answer questions about ZenML, its features, and its use cases. You respond in a concise, technically credible tone. You ONLY use the context from the ZenML documentation to provide relevant answers. If you are unsure or don't know, just say so.""" messages = [ {"role": "system", "content": system_message}, {"role": "user", "content": f"```{input}```"}, {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, ] return get_completion_from_messages(messages, model=model) ``` ### Document Retrieval The function `get_topn_similar_docs` retrieves the most similar documents based on the query embedding: ```python def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: embedding_array = np.array(query_embedding) register_vector(conn) cur = conn.cursor() cur.execute(f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,)) return cur.fetchall() ``` This function utilizes the `pgvector` PostgreSQL plugin to efficiently order documents by similarity. 
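The `get_embeddings` helper called above is also project code not shown in this excerpt; a minimal sketch, assuming the same `sentence-transformers` model used elsewhere in this guide, could look like this:

```python
from typing import List

from sentence_transformers import SentenceTransformer

_EMBED_MODEL = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

def get_embeddings(text: str) -> List[float]:
    # Encode the query with the same model used to embed the documents.
    return _EMBED_MODEL.encode(text).tolist()
```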
### Generating Responses The `get_completion_from_messages` function generates a response using the specified LLM:
```python
import litellm

def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000):
    # OPENAI_MODEL and MODEL_NAME_MAP are defined in the project code.
    model = MODEL_NAME_MAP.get(model, model)
    completion_response = litellm.completion(
        model=model, messages=messages, temperature=temperature, max_tokens=max_tokens
    )
    return completion_response.choices[0].message.content
```
`litellm` serves as a universal interface for various LLMs, allowing flexibility in model selection. ## Conclusion This basic RAG inference pipeline retrieves relevant text chunks based on a query and generates responses using the indexed documents. Future sections will discuss improving retrieval by fine-tuning embeddings for better performance with diverse document sets. For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [llm_utils.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database.md === ### Summary of Storing Embeddings in a Vector Database This documentation outlines the process of storing embeddings in a vector database, specifically PostgreSQL, for efficient retrieval based on similarity to queries. #### Key Points: - **Purpose**: Store embeddings to avoid regenerating them for every document retrieval. - **Database Choice**: PostgreSQL is recommended due to its scalability and efficiency for high-dimensional vector storage. Other vector databases can also be used. - **Setup**: Instructions for setting up PostgreSQL using Supabase are available in the ZenML repository. #### Code Overview: The following Python code demonstrates how to create and populate an embeddings table in PostgreSQL using the `psycopg2` package:
```python
import logging
import math
from typing import List

from zenml import step

logger = logging.getLogger(__name__)

@step
def index_generator(documents: List[Document]) -> None:
    # Document, get_db_conn and EMBEDDING_DIMENSIONALITY are defined in the project code.
    conn = None
    try:
        conn = get_db_conn()
        with conn.cursor() as cur:
            cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
            conn.commit()
            cur.execute(f"""
                CREATE TABLE IF NOT EXISTS embeddings (
                    id SERIAL PRIMARY KEY,
                    content TEXT,
                    token_count INTEGER,
                    embedding VECTOR({EMBEDDING_DIMENSIONALITY}),
                    filename TEXT,
                    parent_section TEXT,
                    url TEXT
                );
            """)
            conn.commit()

            for doc in documents:
                # Insert a document only if its content is not already indexed.
                cur.execute(
                    "SELECT COUNT(*) FROM embeddings WHERE content = %s",
                    (doc.page_content,),
                )
                if cur.fetchone()[0] == 0:
                    cur.execute(
                        """INSERT INTO embeddings
                           (content, token_count, embedding, filename, parent_section, url)
                           VALUES (%s, %s, %s, %s, %s, %s)""",
                        (doc.page_content, doc.token_count, doc.embedding.tolist(),
                         doc.filename, doc.parent_section, doc.url),
                    )
            conn.commit()

            cur.execute("SELECT COUNT(*) FROM embeddings;")
            num_records = cur.fetchone()[0]
            num_lists = (
                int(max(num_records / 1000, 10))
                if num_records <= 1_000_000
                else int(math.sqrt(num_records))
            )
            cur.execute(
                f"CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings "
                f"USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists});"
            )
            conn.commit()
    except Exception as e:
        logger.error(f"Error in index_generator: {e}")
        raise
    finally:
        if conn:
            conn.close()
```
#### Functionality: - Connects to the database and creates the `vector` extension. - Creates an `embeddings` table if it does not exist. - Inserts new embeddings only if they are not already present.
- Calculates index parameters and creates an index using the `ivfflat` method for cosine similarity. #### Considerations: - The decision to update embeddings depends on data change frequency. - Running this step on a GPU-enabled machine may improve performance for larger datasets. - The index is optimized for similarity search, allowing for efficient retrieval of relevant documents based on queries. For full code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md === ### Summary of RAG Pipeline Implementation This documentation outlines a simple implementation of a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code. The pipeline performs the following tasks: 1. **Data Loading**: Uses a fictional dataset about "ZenML World" as the corpus. 2. **Text Processing**: Splits text into chunks and tokenizes it (converts to words). 3. **Query Handling**: Accepts a user query and retrieves the most relevant text chunks. 4. **Answer Generation**: Utilizes OpenAI's GPT-3.5 model to generate answers based on the retrieved chunks. #### Key Functions - **`preprocess_text(text)`**: Normalizes the text by converting to lowercase, removing punctuation, and trimming whitespace. - **`tokenize(text)`**: Tokenizes the preprocessed text into words. - **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: - Tokenizes the query. - Computes Jaccard similarity between the query and each chunk in the corpus. - Returns the top N relevant chunks based on similarity. - **`answer_question(query, corpus, top_n=2)`**: - Retrieves relevant chunks using `retrieve_relevant_chunks`. - Constructs a context string from the relevant chunks. - Uses OpenAI's API to generate an answer based on the context. #### Example Corpus ```python corpus = [ "The luminescent forests of ZenML World are inhabited by glowing Zenbots...", "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully...", "Telepathic Treants, ancient sentient trees, communicate through the quantum neural network...", # Additional sentences... ] ``` #### Example Queries ```python question1 = "What are Plasma Phoenixes?" answer1 = answer_question(question1, corpus) question2 = "What kinds of creatures live on the prismatic shores of ZenML World?" answer2 = answer_question(question2, corpus) irrelevant_question_3 = "What is the capital of Panglossia?" answer3 = answer_question(irrelevant_question_3, corpus) ``` #### Output The output provides answers based on the relevant context retrieved from the corpus. If a question is not covered by the corpus, it returns a default response indicating insufficient information. #### Technical Notes - The similarity check uses the Jaccard coefficient, which is a basic method for measuring text similarity. - This implementation is not optimized for performance or scalability; it serves as an illustrative example for understanding the RAG pipeline's components. For more advanced implementations, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md === ### Summary: Generating Embeddings for Retrieval This documentation outlines the process of generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. 
Embeddings are vector representations that capture the semantic meaning of data, allowing for improved retrieval of relevant information based on similarity rather than just keyword matching. #### Key Points: - **Embeddings**: High-dimensional vector representations of data that facilitate semantic understanding. They are generated using models from the `sentence-transformers` library, which provides pre-trained models for encoding text. - **Purpose**: To quickly identify relevant data chunks during inference, improving the accuracy and relevance of responses to user queries. - **Model Used**: The `sentence-transformers/all-MiniLM-L12-v2` model is employed, producing embeddings with a dimensionality of 384. Smaller models can be used for speed, while larger models may enhance retrieval capabilities. - **Dimensionality Reduction**: Techniques like UMAP and t-SNE can visualize embeddings in 2D, helping to identify patterns and relationships in the data. #### Code Example for Generating Embeddings: ```python from typing import Annotated, List import numpy as np from sentence_transformers import SentenceTransformer from structures import Document from zenml import ArtifactConfig, log_artifact_metadata, step @step def generate_embeddings(split_documents: List[Document]) -> Annotated[List[Document], ArtifactConfig(name="documents_with_embeddings")]: model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2") log_artifact_metadata(artifact_name="embeddings", metadata={"embedding_type": "sentence-transformers/all-MiniLM-L12-v2", "embedding_dimensionality": 384}) document_texts = [doc.page_content for doc in split_documents] embeddings = model.encode(document_texts) for doc, embedding in zip(split_documents, embeddings): doc.embedding = embedding return split_documents ``` #### Visualization Code: Two functions are provided for visualizing the embeddings using t-SNE and UMAP: ```python from sklearn.manifold import TSNE import umap import matplotlib.pyplot as plt def tsne_visualization(embeddings, parent_sections): tsne = TSNE(n_components=2, random_state=42) embeddings_2d = tsne.fit_transform(embeddings) # Plotting code... def umap_visualization(embeddings, parent_sections): umap_2d = umap.UMAP(n_components=2, random_state=42) embeddings_2d = umap_2d.fit_transform(embeddings) # Plotting code... ``` #### Conclusion: This stage emphasizes the importance of embeddings in RAG pipelines, allowing for modular and flexible integration with vector databases for efficient retrieval. For further details, refer to the complete code in the ZenML GitHub repository. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/understanding-rag.md === ### Summary of Retrieval-Augmented Generation (RAG) **Overview of RAG** Retrieval-Augmented Generation (RAG) enhances the capabilities of Large Language Models (LLMs) by integrating a retrieval mechanism that fetches relevant documents from a large corpus to inform response generation. This method addresses LLM limitations, such as generating incorrect responses and handling extensive text inputs, by grounding outputs in relevant information. **RAG Pipeline Process** 1. **Retriever**: Identifies relevant documents from a corpus. 2. **Generator**: Produces a response based on the retrieved documents. This dual approach is effective for tasks requiring contextual understanding, such as question answering, summarization, and dialogue generation. 
It reduces the risk of inaccuracies and token limitations by focusing on a smaller, relevant document set, making it more cost-effective than pure generation-based methods. **When to Use RAG** RAG is ideal for: - Generating long-form responses needing contextual understanding. - Tasks like question answering, summarization, and dialogue generation. - Users new to LLMs, as it requires fewer resources and data compared to other methods. **Integration with ZenML** ZenML facilitates the creation of RAG pipelines, providing tools for: - Data ingestion and index management. - Tracking RAG artifacts (hyperparameters, model weights, etc.) in the Model Control Plane. - Scaling pipelines for larger document corpora and complex setups (e.g., finetuning embeddings, reranking documents). **Advantages of ZenML** - **Reproducibility**: Rerun pipelines to update documents or parameters while preserving previous versions. - **Scalability**: Deploy on cloud providers for larger document handling. - **Artifact Tracking**: Monitor and debug pipeline performance through metadata and visualizations in the ZenML dashboard. - **Maintainability**: Modular pipeline structure allows easy updates and experimentation. - **Collaboration**: Share pipelines and insights with team members. ZenML provides a structured approach to building RAG pipelines, setting the stage for more advanced functionalities in future sections. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md === ### RAG Pipelines with ZenML Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models to enhance the capabilities of Large Language Models (LLMs). This guide outlines the setup of RAG pipelines using ZenML, focusing on key components such as data ingestion, index store management, and tracking artifacts. #### Key Topics Covered: - **Purpose of RAG**: Addresses limitations of LLMs, which can generate incorrect responses, especially with ambiguous prompts, and have constraints on text length (most open-source LLMs handle fewer tokens than advanced models like Google's Gemini 1.5 Pro). - **Data Ingestion and Preprocessing**: Steps to prepare data for the RAG pipeline. - **Embeddings**: Utilizing embeddings to represent data, forming the basis for the retrieval mechanism. - **Vector Database**: Storing embeddings efficiently for retrieval. - **Artifact Tracking**: Using ZenML to track RAG-related artifacts. #### Conclusion: The guide culminates in demonstrating the integration of all components for basic RAG inference. For the latest documentation, refer to [ZenML's official site](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md === # Implementing Reranking in ZenML This documentation outlines how to implement reranking in ZenML within a RAG (Retrieval-Augmented Generation) pipeline. The reranker reorders retrieved documents based on their relevance to a given query. ## Adding Reranking The [`rerankers`](https://github.com/AnswerDotAI/rerankers/) package is used to integrate reranking into the pipeline. It provides a `Reranker` abstract class for custom implementations and supports various model types, including those from Hugging Face Hub and API-driven models. 
### Example Code ```python from rerankers import Reranker ranker = Reranker('cross-encoder') texts = [ "I like to play soccer", "I like to play football", "War and Peace is a great book", "I love dogs", "Ginger cats aren't very smart", "I like to play basketball", ] results = ranker.rank(query="What's your favorite sport?", docs=texts) ``` ### Sample Output ```python RankedResults( results=[ Result(doc_id=5, text='I like to play basketball', score=-0.465, rank=1), Result(doc_id=0, text='I like to play soccer', score=-0.735, rank=2), Result(doc_id=1, text='I like to play football', score=-0.968, rank=3), Result(doc_id=2, text='War and Peace is a great book', score=-5.402, rank=4), Result(doc_id=3, text='I love dogs', score=-5.586, rank=5), Result(doc_id=4, text="Ginger cats aren't very smart", score=-5.949, rank=6) ], query="What's your favorite sport?", has_scores=True ) ``` The reranker outputs documents ordered by relevance, with sports-related texts prioritized. ### Rerank Function A helper function can be added to rerank documents: ```python def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = "flashrank") -> List[Tuple[str, str]]: ranker = Reranker(reranker_model) docs_texts = [f"{doc[0]} PARENT SECTION: {doc[2]}" for doc in documents] results = ranker.rank(query=query, docs=docs_texts) return [(results.results[i].text, documents[results.results[i].doc_id][1]) for i in range(len(results.results))] ``` This function takes a query and a list of documents (content and URL) and returns reranked documents with their original URLs. ### Query Function The rerank function can be used in a querying function: ```python def query_similar_docs(question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5) -> Tuple[str, str, List[str]]: embedded_question = get_embeddings(question) db_conn = get_db_conn() num_docs = 20 if use_reranking else returned_sample_size top_similar_docs = get_topn_similar_docs(embedded_question, db_conn, n=num_docs, include_metadata=True) if use_reranking: reranked_docs_and_urls = rerank_documents(question, top_similar_docs)[:returned_sample_size] urls = [doc[1] for doc in reranked_docs_and_urls] else: urls = [doc[1] for doc in top_similar_docs] return (question, url_ending, urls) ``` This function retrieves similar documents based on a question and optionally reranks them, returning the top five URLs. ### Evaluation After integrating reranking, evaluate its performance to assess the quality of retrieved documents. For full code exploration, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md === ### Evaluating Reranking Performance with ZenML This documentation outlines how to evaluate the performance of a reranking model in ZenML. The evaluation process involves comparing retrieval performance before and after applying reranking using established metrics. #### Key Steps in Evaluation 1. **Retrieval Evaluation Function**: The `perform_retrieval_evaluation` function assesses retrieval performance based on a sample of generated questions and relevant documents. It checks if the expected URL ending is present in the retrieved URLs and calculates the failure rate. 
```python def perform_retrieval_evaluation(sample_size: int, use_reranking: bool) -> float: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) failures = sum(1 for item in sampled_dataset if not check_retrieval(item, use_reranking)) return round((failures / len(sampled_dataset)) * 100, 2) def check_retrieval(item, use_reranking): question = item["generated_questions"][0] url_ending = item["filename"].split("/")[-1] _, _, urls = query_similar_docs(question, url_ending, use_reranking) return url_ending in urls ``` 2. **Evaluation Steps**: Two steps are defined to evaluate retrieval performance with and without reranking: ```python @step def retrieval_evaluation_full(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=False) @step def retrieval_evaluation_full_with_reranking(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=True) ``` 3. **Logging and Analysis**: The results of the evaluations can be logged for analysis, allowing users to inspect specific failures. 4. **Visualization**: Visualization of evaluation results can be achieved using a bar chart to compare failure rates and other metrics: ```python @step(enable_cache=False) def visualize_evaluation_results(...): scores = normalize_scores([...]) fig, ax = plt.subplots(figsize=(10, 6)) ax.barh(y_pos, scores, align="center") ax.set_title(f"Evaluation Metrics for {pipeline_run_name}") plt.tight_layout() return save_plot_to_image(fig) ``` #### Running the Evaluation Pipeline To run the evaluation pipeline: 1. Clone the project repository: ```bash git clone https://github.com/zenml-io/zenml-projects.git ``` 2. Navigate to the `llm-complete-guide` directory and follow the `README.md` instructions. 3. Execute the evaluation pipeline: ```bash python run.py --evaluation ``` This will output results to the ZenML dashboard, allowing for further inspection of performance metrics and logs. ### Conclusion The documentation provides a clear framework for evaluating reranking models in ZenML, emphasizing the importance of comparing retrieval performance and visualizing results for better insights into model effectiveness. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/reranking.md === ### Summary: Adding Reranking to RAG Inference in ZenML Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section outlines how to integrate a reranker into your RAG inference pipeline in ZenML. **Key Points:** - Rerankers are optional but can significantly enhance the relevance and quality of retrieved documents, leading to better responses from LLMs. - The overall workflow includes data ingestion, preprocessing, embeddings generation, and retrieval, followed by evaluation metrics to assess performance. - Reranking is an additional step that can be added to the existing setup for improved performance. For more details and the latest updates, refer to the [ZenML documentation](https://docs.zenml.io). 
================================================== === File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md === ## Summary of Reranking in Retrieval-Augmented Generation (RAG) ### Definition Reranking refines the initial ranking of documents retrieved by a system, enhancing the relevance and quality of documents used for generating outputs in RAG. The initial retrieval typically employs sparse methods like BM25 or TF-IDF, which may not fully capture semantic meaning. Rerankers reorder documents based on features such as semantic similarity and relevance scores. ### Types of Rerankers 1. **Cross-Encoders**: - Input: Concatenated query and document. - Output: Relevance score. - Example: BERT-based models. - **Pros**: Effective interaction capture. - **Cons**: Computationally expensive. 2. **Bi-Encoders**: - Input: Separate encoders for query and document. - Output: Similarity score from independent embeddings. - **Pros**: More efficient. - **Cons**: Weaker interaction capture. 3. **Lightweight Models**: - Examples: Distilled models or small transformer variants. - **Pros**: Faster and smaller footprint for real-time use. ### Benefits of Reranking in RAG 1. **Improved Relevance**: Identifies the most relevant documents for accurate LLM responses. 2. **Semantic Understanding**: Captures semantic meaning, allowing retrieval of documents that may not match keywords exactly. 3. **Domain Adaptation**: Can be fine-tuned on specific data to enhance performance in particular industries. 4. **Personalization**: Tailors document retrieval based on user preferences and historical interactions. ### Next Steps The documentation will cover how to implement reranking in ZenML and integrate it into the RAG inference pipeline. For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/reranking/README.md === ### Summary: Adding Reranking to RAG Inference in ZenML **Overview**: Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. **Key Points**: - Rerankers are optional but can significantly improve the relevance and quality of retrieved documents, leading to better LLM responses. - The workflow includes data ingestion, preprocessing, embeddings generation, retrieval, and evaluation metrics. - Reranking is an additional step that optimizes the existing setup. **Visual Aid**: A workflow diagram illustrates the reranking process within the overall retrieval system. For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/generation.md === ### Summary of Generation Evaluation in RAG Pipeline **Overview**: This documentation outlines methods to evaluate the generation component of a Retrieval-Augmented Generation (RAG) pipeline, focusing on generating answers based on retrieved context. Evaluation is subjective and involves both handcrafted tests and automated assessments using another LLM. #### Handcrafted Evaluation Tests - Create examples to verify generated outputs include or exclude specific terms based on known correct or incorrect responses. 
- Example tests include checking if supported orchestrators like "Airflow" and "Kubeflow" are present while excluding unsupported ones like "Flyte" and "Prefect." - A starter set of tests is available [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py#L28-L55). **Test Tables**: - **Bad Answers**: Questions that should not include certain terms. - **Bad Immediate Responses**: Questions that should not yield certain immediate responses. - **Good Responses**: Questions that should include specific terms. **Example Code for Testing Bad Words**: ```python class TestResult(BaseModel): success: bool question: str keyword: str = "" response: str def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestResult: question = item["question"] bad_words = item["bad_words"] response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) for word in bad_words: if word in response: return TestResult(success=False, question=question, keyword=word, response=response) return TestResult(success=True, question=question, response=response) ``` **Running Tests**: ```python def run_tests(test_data: list, test_function: Callable) -> float: failures = 0 total_tests = len(test_data) for item in test_data: test_result = test_function(item) if not test_result.success: logging.error(f"Test failed for question: '{test_result.question}'. Found word: '{test_result.keyword}'. Response: '{test_result.response}'") failures += 1 failure_rate = (failures / total_tests) * 100 logging.info(f"Total tests: {total_tests}. Failures: {failures}. Failure rate: {failure_rate}%") return round(failure_rate, 2) ``` #### Automated Evaluation Using Another LLM - Use a separate LLM to assess the quality of generated outputs on a scale of 1 to 5 for categories like toxicity, faithfulness, helpfulness, and relevance. - A Pydantic model is set up to validate scores. **Pydantic Model**: ```python class LLMJudgedTestResult(BaseModel): toxicity: conint(ge=1, le=5) faithfulness: conint(ge=1, le=5) helpfulness: conint(ge=1, le=5) relevance: conint(ge=1, le=5) ``` **Example Code for LLM Judged Test**: ```python def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) prompt = f"Analyze the following text and provide scores for toxicity, faithfulness, helpfulness, and relevance. 
**Text:** {response} **Context:** {context} **Output format:** {{\"toxicity\": int, \"faithfulness\": int, \"helpfulness\": int, \"relevance\": int}}" response = completion(model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}]) return LLMJudgedTestResult(**json.loads(response["choices"][0]["message"]["content"].strip())) ``` **Running LLM Judged Tests**: ```python def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[float, float, float, float]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) total_scores = {'toxicity': 0, 'faithfulness': 0, 'helpfulness': 0, 'relevance': 0} total_tests = len(dataset) for item in dataset: question = item["generated_questions"][0] context = item["page_content"] result = test_function(question, context) total_scores['toxicity'] += result.toxicity total_scores['faithfulness'] += result.faithfulness total_scores['helpfulness'] += result.helpfulness total_scores['relevance'] += result.relevance return tuple(round(total_scores[key] / total_tests, 3) for key in total_scores) ``` #### Additional Notes - The evaluation process can be improved by implementing retries for JSON outputs, using OpenAI's JSON mode, batch processing, and increasing sample size. - Consider using frameworks like `ragas`, `trulens`, DeepEval, and UpTrain for more sophisticated evaluations. - The evaluation of both retrieval and generation components allows tracking improvements in the RAG pipeline. For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the `eval_e2e.py` file. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md === ### Summary of RAG Evaluation Implementation This documentation outlines how to implement evaluation for a Retrieval-Augmented Generation (RAG) pipeline in 65 lines of Python code. It builds upon a previous example of a basic RAG pipeline. The full code is available in the project repository. #### Key Components 1. **Evaluation Data**: A list of questions and their expected answers is defined for testing the RAG pipeline. ```python eval_data = [ {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."}, {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones..."}, {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."}, ] ``` 2. **Evaluation Functions**: - **Retrieval Evaluation**: Checks if any words from the expected answer appear in the retrieved chunks. ```python def evaluate_retrieval(question, expected_answer, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) ``` - **Generation Evaluation**: Uses OpenAI's API to assess the relevance and accuracy of the generated answer. 
```python def evaluate_generation(question, expected_answer, generated_answer): client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) chat_completion = client.chat.completions.create( messages=[{"role": "system", "content": "You are an evaluation judge..."}, {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}"}], model="gpt-3.5-turbo", ) return chat_completion.choices[0].message.content.strip().lower() == "yes" ``` 3. **Scoring**: The code iterates through the evaluation data, calculates retrieval and generation scores, and computes their accuracies. ```python retrieval_scores = [evaluate_retrieval(item["question"], item["expected_answer"], corpus) for item in eval_data] generation_scores = [evaluate_generation(item["question"], item["expected_answer"], answer_question(item["question"], corpus)) for item in eval_data] retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores) generation_accuracy = sum(generation_scores) / len(generation_scores) print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") print(f"Generation Accuracy: {generation_accuracy:.2f}") ``` #### Results The example demonstrates achieving 100% accuracy for both retrieval and generation components. Further sections will elaborate on more sophisticated implementations of RAG evaluation. For the complete code, refer to the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md === ### Summary of RAG System Evaluation Documentation This documentation provides guidance on evaluating the performance of a Retrieval-Augmented Generation (RAG) system. It emphasizes the importance of separating the evaluation process from the main pipeline that generates embeddings, allowing for better management of concerns. #### Key Points: 1. **Evaluation Pipeline**: - The evaluation is structured as a separate pipeline that runs after the main embedding generation. This separation is a best practice. - Depending on the use case, evaluations can also be integrated into the main pipeline to serve as a gating mechanism for production readiness. 2. **Local vs. Cloud LLM Judge**: - For development, consider using a local LLM judge for quicker iterations. - Use cloud LLMs (e.g., Anthropic's Claude, OpenAI's GPT-3.5/4) for comprehensive evaluations to manage costs effectively. 3. **Human Review**: - Automated evaluations save time but do not replace the need for human oversight. Results from the LLM judge require careful review to ensure the RAG system performs as expected. 4. **Evaluation Frequency**: - The frequency and depth of evaluations should balance cost and the need for rapid iteration. - Quick tests (e.g., retrieval system tests) can be run frequently, while more expensive tests (e.g., LLM judge) should be less frequent. 5. **Next Steps**: - The documentation suggests adding a reranker to enhance retrieval performance without retraining embeddings. #### Practical Implementation: To run the evaluation pipeline: 1. Clone the project repository: ```bash git clone https://github.com/zenml-io/zenml-projects.git ``` 2. Navigate to the `llm-complete-guide` directory and follow the `README.md` instructions. 3. Execute the evaluation pipeline: ```bash python run.py --evaluation ``` Results will be output to the console, and progress can be monitored via the dashboard. 
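As a hedged sketch of the gating idea mentioned above, an evaluation score can be turned into a step that fails the pipeline run when a threshold is exceeded; the threshold and names here are illustrative, not part of the project code:

```python
from zenml import step

@step
def gate_on_failure_rate(failure_rate: float, max_failure_rate: float = 10.0) -> bool:
    """Raise (and fail the run) if the retrieval failure rate is too high."""
    if failure_rate > max_failure_rate:
        raise RuntimeError(
            f"Retrieval failure rate {failure_rate}% exceeds the allowed {max_failure_rate}%."
        )
    return True
```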
This concise summary retains critical technical details while eliminating redundancy, ensuring clarity for further inquiries. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md === ### Summary of Retrieval Evaluation in RAG Pipeline The retrieval component of a Retrieval-Augmented Generation (RAG) pipeline is crucial for finding relevant documents based on incoming queries. This documentation outlines methods to evaluate the performance of this component, focusing on the accuracy of semantic search. #### Key Evaluation Methods 1. **Manual Evaluation with Handcrafted Queries**: - Create specific queries to check if the retrieval component can retrieve known relevant documents. - Example queries include: - "How do I get going with the Label Studio integration?" - "How can I write my own custom materializer?" - The retrieval process involves encoding the query into a vector and querying a PostgreSQL database for similar vectors. **Code Example**: ```python def query_similar_docs(question: str, url_ending: str) -> tuple: embedded_question = get_embeddings(question) top_similar_docs_urls = get_topn_similar_docs(embedded_question, db_conn, n=5, only_urls=True) urls = [url[0] for url in top_similar_docs_urls] return (question, url_ending, urls) def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: failures = sum(1 for pair in question_doc_pairs if all(pair["url_ending"] not in url for url in query_similar_docs(pair["question"], pair["url_ending"])[2])) return round((failures / len(question_doc_pairs)) * 100, 2) ``` 2. **Automated Evaluation with Synthetic Queries**: - Use a language model (LLM) to generate questions based on document chunks. - The generated questions are then evaluated against the retrieval component to check if the original document URLs appear in the top results. **Code Example**: ```python def generate_question(chunk: str, local: bool = False) -> str: model = LOCAL_MODEL if local else "gpt-3.5-turbo" response = completion(model=model, messages=[{"content": f"Generate a question about this text: `{chunk}`", "role": "user"}]) return response.choices[0].message.content @step def retrieval_evaluation_full(sample_size: int = 50) -> Annotated[float, "full_failure_rate_retrieval"]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) failures = sum(1 for item in dataset if all(item["filename"].split("/")[-1] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2])) return round((failures / len(dataset)) * 100, 2) ``` #### Results and Insights - Initial tests showed a 20% failure rate with handcrafted queries and 16% with synthetic queries, indicating room for improvement. - Suggested improvements include: - Generating more diverse questions. - Using semantic similarity metrics for nuanced evaluation. - Comparative evaluation of different retrieval techniques. - Conducting error analysis to identify patterns in failures. #### Conclusion The evaluation process for the retrieval component is vital for improving the RAG pipeline's performance. Both manual and automated methods provide insights into the effectiveness of the retrieval system, guiding iterative enhancements. Future evaluations will also focus on the generation component to ensure the overall quality of the system's outputs. 
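The snippets above call `get_embeddings` and `get_topn_similar_docs` without showing them; since the text describes retrieval as a similarity query against a PostgreSQL database, a rough sketch of what `get_topn_similar_docs` could look like with the pgvector extension follows. The table and column names, the embedding dimensionality, and the connection details are assumptions for illustration, not the project's actual schema.

```python
import psycopg2


def get_topn_similar_docs(query_embedding, conn, n: int = 5, only_urls: bool = False):
    """Return the top-n stored documents closest to the query embedding.

    Assumes a pgvector-backed table `embeddings(content TEXT, url TEXT, embedding VECTOR)`;
    `<=>` is pgvector's cosine-distance operator. Names here are illustrative.
    """
    columns = "url" if only_urls else "content, url"
    with conn.cursor() as cur:
        cur.execute(
            f"SELECT {columns} FROM embeddings ORDER BY embedding <=> %s::vector LIMIT %s",
            (str(list(query_embedding)), n),
        )
        return cur.fetchall()


if __name__ == "__main__":
    # Usage sketch (connection string and embedding size are placeholders):
    db_conn = psycopg2.connect("dbname=rag user=postgres")
    print(get_topn_similar_docs([0.1] * 384, db_conn, n=5, only_urls=True))
```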
For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/README.md === ### Summary of Evaluation and Metrics for RAG Pipeline This documentation discusses evaluating the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating RAG pipelines is essential for performance assessment and improvement identification. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to their subjective nature. A holistic evaluation approach is necessary since a RAG pipeline encompasses more than just a model. #### Key Evaluation Areas: 1. **Retrieval Evaluation**: Assessing the relevance of retrieved documents or document chunks to the query. 2. **Generation Evaluation**: Evaluating the coherence and helpfulness of the generated text for specific use cases. #### Evaluation Considerations: - The evaluation metrics depend on the specific use case and acceptable error levels. For example, a user-facing chatbot may require: - Relevance of retrieved documents. - Coherence and helpfulness of generated answers. - Absence of hate speech or toxic language. The generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the entire system output. #### Best Practices: - In production, establish a baseline by evaluating a raw LLM model (without RAG components) and compare it to the RAG pipeline performance to gauge the added value of retrieval and generation components. #### Code Example: A high-level code example demonstrates the two main evaluation areas, with further sections providing detailed guidance on practical evaluation methods and result interpretation. For the latest documentation, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/user-guide/cloud-guide/cloud-guide.md === ### ZenML Cloud Guide Summary This section provides guidance on connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is a configuration of tools and infrastructure for running pipelines. ZenML acts as a translation layer, enabling code execution across different stacks. **Key Points:** - **Stack Registration:** This guide focuses on registering a stack, assuming the necessary resources for running pipelines are already provisioned. - **Provisioning Infrastructure:** You can provision infrastructure manually or use: - **In-browser stack deployment wizard** - **Stack registration wizard** - **ZenML Terraform modules** For further details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/reference/community-and-content.md === ### Community & Content Overview ZenML offers various channels for community engagement and support, enhancing understanding of the framework. #### Slack Channel - **Link**: [ZenML Slack](https://zenml.io/slack) - Main hub for community interaction, support, and sharing projects. Many questions are often answered here. 
#### Social Media - **LinkedIn**: [ZenML LinkedIn](https://www.linkedin.com/company/zenml) - **Twitter**: [ZenML Twitter](https://twitter.com/zenml_io) - Follow for updates on releases and MLOps. Engagement through comments and shares is encouraged. #### YouTube Channel - **Link**: [ZenML YouTube](https://www.youtube.com/c/ZenML) - Offers video tutorials and workshops for visual learners. #### Public Roadmap - **Link**: [ZenML Roadmap](https://zenml.io/roadmap) - Community feedback shapes development. Users can suggest and prioritize features. #### Blog - **Link**: [ZenML Blog](https://zenml.io/blog/) - Articles on tool implementation, new features, and insights from the team. #### Podcast - **Link**: [ZenML Podcast](https://podcast.zenml.io/) - Features discussions with industry leaders on machine learning and MLOps. #### Newsletter - **Link**: [ZenML Newsletter](https://zenml.io/newsletter-signup) - Subscribe for updates on open-source tooling and ZenML news. ================================================== === File: docs/book/reference/how-do-i.md === # How do I...? **Last Updated**: December 13, 2023 ### Common Questions: - **Contribute to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small features/bug fixes, open a pull request. For larger contributions, consider [posting in Slack](https://zenml.io/slack/) or [creating an issue](https://github.com/zenml-io/zenml/issues/new/choose). - **Add Custom Components**: Start with the [general documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For specific types, refer to dedicated sections, e.g., [custom orchestrators](../component-guide/orchestrators/custom.md). - **Mitigate Dependency Clashes**: Visit our [handling dependencies documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md). - **Deploy Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for stack components covers deployment on popular cloud providers. - **Deploy ZenML on Internal Clusters**: Check the documentation on [self-hosted ZenML deployments](../getting-started/deploying-zenml/README.md). - **Hyperparameter Tuning**: Refer to our [hyperparameter tuning guide](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md). - **Reset ZenML Client**: Use `zenml clean` to reset your client (destructive action). Contact us on [Slack](https://zenml.io/slack/) for assistance. - **Dynamic Pipelines and Steps**: Read about composing steps and pipelines in our [starter guide](../user-guide/starter-guide/create-an-ml-pipeline.md) and check related code examples in the hyperparameter tuning guide. - **Use Project Templates**: Project templates help you start quickly. The `starter` template is recommended for most use cases. - **Upgrade ZenML Client/Server**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, see the [upgrade documentation](../how-to/manage-zenml-server/upgrade-zenml-server.md). - **Use Specific Stack Components**: Refer to the [component guide](../component-guide/README.md) for tips on using each integration and component with ZenML. 
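As a rough illustration of the hyperparameter tuning and dynamic pipelines entries above, the usual fan-out pattern is a plain Python loop inside the pipeline definition, with each iteration becoming its own step invocation. The step body, parameter values, and pipeline name below are illustrative, not taken from the tuning guide.

```python
from zenml import pipeline, step


@step
def train_with_gamma(gamma: float) -> float:
    """Toy stand-in for a real training step; returns a dummy score."""
    return 1.0 - abs(gamma - 0.01)


@pipeline
def tuning_pipeline():
    # Each loop iteration is recorded as a separate step invocation in the run.
    for gamma in (0.001, 0.01, 0.1):
        train_with_gamma(gamma=gamma)


if __name__ == "__main__":
    tuning_pipeline()
```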
![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/reference/faq.md === ### ZenML FAQ Summary **Overview**: ZenML was developed to address challenges in deploying machine-learning models in production, providing a simple, production-ready solution for large-scale ML pipelines. #### Key Points: - **Purpose**: ZenML is not just another orchestrator like Airflow or Kubeflow; it's a framework that enables running pipelines on various orchestrators while coordinating other ML system components. - **Integrations**: ZenML supports numerous tools and integrations. For details, refer to the [component guide](https://docs.zenml.io) and the [integration test code](https://github.com/zenml-io/zenml/tree/main/tests/integration/examples). Users can suggest features via the [roadmap](https://zenml.io/roadmap) and contribute to the project. - **Platform Support**: - **Windows**: Officially supported via WSL. Some features may not work outside WSL. - **Apple Silicon**: Supported with the environment variable: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is necessary for local server use but not for CLI operations connecting to a deployed server. - **Customization**: Users can extend ZenML for custom tools; a guide is available [here](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). - **Community Contribution**: To contribute, select issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). - **Community Engagement**: Join the [Slack group](https://zenml.io/slack/) for discussions and support. - **License**: ZenML is licensed under the Apache License Version 2.0. Full license details are in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). Contributions are also licensed under this agreement. For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/reference/environment-variables.md === # Environment Variables for ZenML ZenML allows control over its behavior through several pre-defined environment variables: ## Logging Verbosity Set the logging level: ```bash export ZENML_LOGGING_VERBOSITY=INFO # Options: INFO, WARN, ERROR, CRITICAL, DEBUG ``` ## Disable Step Logs To prevent storing step logs (which may impact performance): ```bash export ZENML_DISABLE_STEP_LOGS_STORAGE=true # Set to true to disable ``` ## ZenML Repository Path Specify the repository path: ```bash export ZENML_REPOSITORY_PATH=/path/to/somewhere ``` ## Analytics Opt-Out To opt out of usage analytics: ```bash export ZENML_ANALYTICS_OPT_IN=false ``` ## Debug Mode Enable developer mode: ```bash export ZENML_DEBUG=true ``` ## Active Stack Set the active stack by UUID: ```bash export ZENML_ACTIVE_STACK_ID= ``` ## Prevent Pipeline Execution To prevent pipeline execution: ```bash export ZENML_PREVENT_PIPELINE_EXECUTION=true # Set to true to prevent execution ``` ## Disable Rich Traceback Disable rich traceback: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` ## Disable Colorful Logging To disable colorful logging: ```bash export ZENML_LOGGING_COLORS_DISABLED=true ``` Note: Disabling on the client environment also affects remote orchestrators. 
## ZenML Global Config Path Set the global config file path: ```bash export ZENML_CONFIG_PATH=/path/to/somewhere ``` ## Client Configuration Connect the ZenML Client to a server using: ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY= ``` For more details on server configuration, refer to the ZenML Server documentation. ================================================== === File: docs/book/reference/api-reference.md === # ZenML API Reference Summary The ZenML server operates as a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local usage (via `zenml login --local`), the documentation is available at `http://127.0.0.1:8237/docs`. ## Accessing the API Programmatically with a Bearer Token To use the ZenML server API programmatically, follow these steps: 1. **Create a Service Account**: ```shell zenml service-account create myserviceaccount ``` This command generates a ``. 2. **Obtain an Access Token**: Use the `/api/v1/login` endpoint: ```shell curl -X 'POST' \ '/api/v1/login' \ -H 'accept: application/json' \ -H 'Content-Type: application/x-www-form-urlencoded' \ -d 'grant_type=zenml-api-key&username=&password=' ``` The response will include an `access_token`: ```json { "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...", "token_type": "bearer", "expires_in": 3600 } ``` 3. **Make API Requests**: Use the access token in subsequent commands: ```shell curl -X 'GET' \ '/api/v1/pipelines?hydrate=false&name=training' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` This summary provides essential steps and commands for accessing the ZenML API programmatically, ensuring critical information is retained while maintaining conciseness. ================================================== === File: docs/book/reference/python-client.md === ### ZenML Python Client Overview The ZenML Python `Client` enables programmatic interaction with ZenML resources, such as pipelines, runs, and stacks, stored in a database within your ZenML instance. For other programming languages, resources can be accessed via REST API endpoints. ### Usage Example To fetch the last 10 pipeline runs for the current stack: ```python from zenml.client import Client client = Client() my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, sort_by="desc:start_time", size=10, ) for pipeline_run in my_runs_on_current_stack: print(pipeline_run.name) ``` ### Main ZenML Resources 1. **Pipelines**: Tracked pipelines. 2. **Pipeline Runs**: Information on executed runs. 3. **Run Templates**: Templates for running pipelines. 4. **Step Runs**: Steps within pipeline runs. 5. **Artifacts**: Artifacts generated during runs. 6. **Schedules**: Metadata for scheduled runs. 7. **Builds**: Docker images for pipelines. 8. **Code Repositories**: Connected git repositories. 9. **Stacks**: Registered stacks. 10. **Stack Components**: Components like orchestrators and artifact stores. 11. **Flavors**: Available stack component flavors. 12. **User**: Registered users. 13. **Secrets**: Authentication secrets. 14. **Service Connectors**: Infrastructure connection setups. ### Client Methods #### Reading and Writing Resources - **List Methods**: Retrieve lists of resources. ```python client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, sort_by="desc:start_time", size=10, ) ``` Returns a `Page` of resources, defaulting to 50 results. 
Modify page size with `size` or fetch subsequent pages with `page`. - **Get Methods**: Fetch specific resources by ID, name, or prefix. ```python client.get_pipeline_run("413cfb42-a52c-4bf1-a2fd-78af2f7f0101") # By ID client.get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") # By Name ``` - **Create, Update, Delete Methods**: Available for select resources. Refer to the Client SDK documentation for specifics. #### Active User and Stack Access current user and stack information: ```python client.active_user client.active_stack_model ``` ### Resource Models ZenML Client methods return **Response Models**, which are Pydantic Models ensuring data validation. For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. **Request, Update, and Filter Models** are used for server API endpoints but not for Client methods. For details on model fields, refer to the ZenML Models SDK Documentation. ### Important Links - [Client SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/) - [ZenML Models SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-models/#zenml.models) ================================================== === File: docs/book/reference/global-settings.md === ### ZenML Global Settings Overview The **ZenML Global Config Directory** stores global settings for ZenML installations. Its location varies by operating system: - **Linux:** `~/.config/zenml` - **Mac:** `~/Library/Application Support/zenml` - **Windows:** `C:\Users\%USERNAME%\AppData\Local\zenml` You can override the default path using the `ZENML_CONFIG_PATH` environment variable. To retrieve the current config directory, use: ```shell zenml status python -c 'from zenml.utils.io_utils import get_global_config_directory; print(get_global_config_directory())' ``` **Warning:** Avoid manually altering files in the config directory. Use CLI commands for management: - `zenml analytics` - Manage analytics settings. - `zenml clean` - Reset configuration to default. - `zenml downgrade` - Downgrade ZenML version. Upon first run, ZenML initializes the config directory and creates a default stack: ``` Initializing the ZenML global configuration version to 0.13.2 Creating default user 'default' ... Creating default stack for user 'default'... ``` #### Global Config Directory Structure After initialization, the directory layout includes: ``` /home/stefan/.config/zenml ├── config.yaml # Global Configuration Settings └── local_stores # Local data storage for stack components ├── # Local Store for components └── default_zen_store └── zenml.db # SQLite database for ZenML data ``` **Key Files:** 1. **config.yaml:** Stores global settings like client ID, database config, and active stack. ```yaml active_stack_id: ... analytics_opt_in: true store: database: ... url: ... username: ... user_id: d980f13e-05d1-4765-92d2-1dc7eb7addb7 version: 0.13.2 ``` 2. **local_stores:** Contains subdirectories for local stack components. 3. **zenml.db:** Default SQLite database for storing stack information. #### Usage Analytics ZenML collects anonymized usage statistics to improve the tool. Users can opt out with: ```bash zenml analytics opt-out ``` Analytics are aggregated via [Segment](https://segment.com) and processed through a ZenML analytics server. 
#### Version Mismatch and Downgrading If you downgrade ZenML and encounter a version mismatch error: ```shell `The ZenML global configuration version (%s) is higher than the version of ZenML currently being used (%s).` ``` To align versions, run: ```shell zenml downgrade ``` **Warning:** Downgrading may lead to unexpected behavior. To reset the configuration, use: ```shell zenml clean ``` This command purges the local database and reinitializes the global configuration. ================================================== === File: docs/book/component-guide/integration-overview.md === # ZenML Third-Party Integrations Overview ZenML provides integrations with various MLOps tools to enhance ML workflows by categorizing the MLOps stack and offering concrete implementations. This allows users to orchestrate pipelines with tools like [Airflow](orchestrators/airflow.md) and [Kubeflow](orchestrators/kubeflow.md), track experiments using [MLflow](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md), and deploy models with [Seldon Core](model-deployers/seldon.md). ZenML enables flexibility without vendor lock-in, allowing easy switching of tools as requirements evolve. ## Available Integrations A comprehensive list of supported integrations can be found on the [ZenML integrations page](https://zenml.io/integrations) or in the [GitHub integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). ## Installing Integrations To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs the preferred versions via pip: ```bash pip install kubeflow== mlflow== seldon== ``` The `-y` flag auto-confirms installation prompts. For a complete list of CLI commands, run `zenml integration --help`. ### Using `uv` for Package Installation You can utilize [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: ```bash zenml integration install kubeflow --uv ``` Ensure `uv` is installed, as this is an experimental feature. ## Upgrading Integrations To upgrade integrations, use: ```bash zenml integration upgrade mlflow pytorch -y ``` The `-y` flag auto-confirms upgrades. If no integrations are specified, all installed integrations will be upgraded. ## Community Contributions ZenML prioritizes integrations based on community needs, visible on the [public roadmap](https://zenml.io/roadmap). Contributions are welcome; refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for details. ================================================== === File: docs/book/component-guide/component-guide.md === # Overview of MLOps Components in ZenML ZenML categorizes MLOps tools into distinct components to clarify their roles in the pipeline. These components are standardized abstractions that streamline workflows. Users can implement custom components or utilize built-in integrations. 
## Supported Stack Components | **Type of Stack Component** | **Description** | |------------------------------|-----------------| | [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | | [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | | [Container Registry](./container-registries/container-registries.md) | Stores container images | | [Step Operator](./step-operators/step-operators.md) | Executes steps in specific environments | | [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | | [Feature Store](./feature-stores/feature-stores.md) | Manages data/features | | [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | | [Alerter](./alerters/alerters.md) | Sends alerts through channels | | [Annotator](./annotators/annotators.md) | Labels and annotates data | | [Data Validator](./data-validators/data-validators.md) | Validates data and models | | [Image Builder](./image-builders/image-builders.md) | Builds container images | | [Model Registry](./model-registries/model-registries.md) | Manages ML models | Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional based on MLOps maturity. ## Custom Component Flavors Users can create custom component flavors to tailor ZenML's behavior. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific component type guides (e.g., [custom orchestrator guide](orchestrators/custom.md)). ================================================== === File: docs/book/component-guide/README.md === # Overview of ZenML MLOps Components and Integrations ZenML categorizes MLOps tools into stack components to simplify their integration into your workflow. Each stack component serves a specific function in the MLOps pipeline, allowing teams to standardize their processes. The main stack components include: | **Type of Stack Component** | **Description** | |-----------------------------|------------------| | [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | | [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts generated by pipelines | | [Container Registry](container-registries/container-registries.md) | Stores container images | | [Data Validator](data-validators/data-validators.md) | Validates data and models | | [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | | [Model Deployer](model-deployers/model-deployers.md) | Manages online model serving | | [Step Operator](step-operators/step-operators.md) | Executes pipeline steps in specific environments | | [Alerter](alerters/alerters.md) | Sends alerts through designated channels | | [Image Builder](image-builders/image-builders.md) | Builds container images | | [Annotator](annotators/annotators.md) | Labels and annotates data | | [Model Registry](model-registries/model-registries.md) | Manages ML models | | [Feature Store](feature-stores/feature-stores.md) | Manages data/features | Every ZenML pipeline requires at least an orchestrator and an artifact store, with other components being optional based on the pipeline's maturity. ## Custom Component Flavors Users can create custom component flavors to tailor ZenML's behavior. 
For guidance, refer to the [general guide](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific component type guides. ## Integrations ZenML enhances MLOps pipelines by integrating with various tools, allowing seamless transitions between local and deployed environments. Notable integrations include: - **Orchestrators**: [Airflow](orchestrators/airflow.md), [Kubeflow](orchestrators/kubeflow.md) - **Experiment Trackers**: [MLflow Tracking](experiment-trackers/mlflow.md), [Weights & Biases](experiment-trackers/wandb.md) - **Model Deployment**: [MLflow](model-deployers/mlflow.md), [Seldon Core](model-deployers/seldon.md) ZenML's architecture prevents vendor lock-in, enabling easy tool swaps as requirements evolve. ### Available Integrations A comprehensive list of ZenML integrations can be found on the [integrations webpage](https://zenml.io/integrations) and in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). ### Installing Integrations To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs preferred versions via pip. The `-y` flag auto-confirms installations. ### Upgrade Integrations To upgrade integrations, run: ```bash zenml integration upgrade mlflow pytorch -y ``` This command upgrades specified integrations or all if none are specified. ### Community Contributions ZenML welcomes community contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for details. ================================================== === File: docs/book/component-guide/data-validators/evidently.md === ### Summary of Evidently Integration with ZenML **Evidently Overview** Evidently is an open-source library for monitoring and debugging machine learning models through data profiling and visualization. It supports data quality, data drift, model drift, and model performance analysis, generating reports that can be used for automated corrective actions or visual interpretation. **Key Features** - **Data Quality Reports**: Analyze feature statistics and behavior for a single dataset or compare two datasets. - **Data Drift Reports**: Detect changes in feature distribution between two datasets with identical schemas. - **Target Drift Reports**: Explore changes in target functions or model predictions. - **Performance Reports**: Evaluate model performance using datasets with target and prediction columns. **Deployment** To use the Evidently Data Validator in ZenML, install the integration: ```shell zenml integration install evidently -y ``` Register the data validator: ```shell zenml data-validator register evidently_data_validator --flavor=evidently zenml stack register custom_stack -dv evidently_data_validator ... --set ``` **Usage** Evidently profiling functions accept `pandas.DataFrame` datasets and generate reports. Key usage methods include: 1. **Standard Report Step**: Recommended for ease of use. 2. **Custom Step Implementation**: Offers flexibility in pipeline steps. 3. **Direct Library Use**: Full control over Evidently features. 
**Example of Evidently Report Step**: ```python from zenml.integrations.evidently.steps import evidently_report_step text_data_report = evidently_report_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping( target="Rating", numerical_features=["Age", "Positive_Feedback_Count"], categorical_features=["Division_Name", "Department_Name", "Class_Name"], text_features=["Review_Text", "Title"], ), metrics=[ EvidentlyMetricConfig.metric("DataQualityPreset"), EvidentlyMetricConfig.metric("TextOverviewPreset", column_name="Review_Text"), ], download_nltk_data=True, ), ) ``` **Data Validation** Evidently can also run automated data validation tests. Similar to profiling, it can be integrated via: 1. **Standard Test Step**: Easiest method. 2. **Custom Implementation**: More flexibility. 3. **Direct Library Use**: Full control. **Example of Evidently Test Step**: ```python from zenml.integrations.evidently.steps import evidently_test_step text_data_test = evidently_test_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping( target="Rating", numerical_features=["Age", "Positive_Feedback_Count"], categorical_features=["Division_Name", "Department_Name", "Class_Name"], text_features=["Review_Text", "Title"], ), tests=[EvidentlyTestConfig.test("DataQualityTestPreset")], download_nltk_data=True, ), ) ``` **Direct Use of Evidently** You can call Evidently directly in your custom steps: ```python from evidently.report import Report @step def data_profiler(dataset: pd.DataFrame): report = Report(metrics=[metric_preset.DataQualityPreset()]) report.run(current_data=dataset, reference_data=dataset) return report.json(), HTMLString(report.show(mode="inline").data) ``` **Visualizing Reports** Evidently reports can be visualized in the ZenML dashboard or Jupyter notebooks using: ```python def visualize_results(pipeline_name: str, step_name: str): pipeline = Client().get_pipeline(pipeline=pipeline_name) evidently_step = pipeline.last_run.steps[step_name] evidently_step.visualize() ``` For further details, refer to the [Evidently documentation](https://docs.evidentlyai.com/reference/all-metrics) and [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-evidently). ================================================== === File: docs/book/component-guide/data-validators/deepchecks.md === ### Summary of Deepchecks Integration with ZenML **Overview** Deepchecks is an open-source library integrated with ZenML to perform data integrity, data drift, model drift, and model performance tests on datasets and models in ZenML pipelines. The results can trigger automated corrective actions or be visualized for evaluation. **Use Cases** Deepchecks is suitable for: - **Data Integrity Checks**: Identify issues like missing values and conflicting labels. - **Data Drift Checks**: Detect data skew by comparing target and reference datasets. - **Model Performance Checks**: Evaluate model performance using confusion matrices and error analysis. - **Multi-Model Performance Reports**: Summarize performance scores across multiple models. **Supported Formats** - **Tabular Data**: `pandas.DataFrame` for datasets and `sklearn.base.ClassifierMixin` for models. - **Computer Vision Data**: `torch.utils.data.dataloader.DataLoader` for datasets and `torch.nn.Module` for models. 
**Installation** To install the Deepchecks integration: ```shell zenml integration install deepchecks -y ``` **Registering the Data Validator** Add the Deepchecks Data Validator to your stack: ```shell zenml data-validator register deepchecks_data_validator --flavor=deepchecks zenml stack register custom_stack -dv deepchecks_data_validator ... --set ``` **Usage in Pipelines** Deepchecks validation checks are categorized into four types based on input requirements: 1. **Data Integrity Checks**: Single dataset input. 2. **Data Drift Checks**: Two datasets (target and reference). 3. **Model Validation Checks**: Single dataset and a model. 4. **Model Drift Checks**: Two datasets and a model. **Standard Steps** ZenML provides four standard steps for Deepchecks: - `deepchecks_data_integrity_check_step` - `deepchecks_data_drift_check_step` - `deepchecks_model_validation_check_step` - `deepchecks_model_drift_check_step` Example of a data integrity check step: ```python from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step data_validator = deepchecks_data_integrity_check_step.with_options( parameters=dict(dataset_kwargs=dict(label="target", cat_features=[])) ) ``` **Customizing Checks** You can specify custom checks and parameters: ```python deepchecks_data_integrity_check_step( check_list=[ DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES, DeepchecksDataIntegrityCheck.TABULAR_DATA_DUPLICATES, ], dataset=... ) ``` **Docker Configuration for Remote Orchestrators** For remote orchestrators, extend the Docker image to include `opencv2` dependencies: ```shell ARG ZENML_VERSION=0.20.0 FROM zenmldocker/zenml:${ZENML_VERSION} AS base RUN apt-get update RUN apt-get install ffmpeg libsm6 libxext6 -y ``` **Visualizing Results** Results can be visualized in the ZenML dashboard or Jupyter notebooks: ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline=pipeline_name) last_run = pipeline.last_run step = last_run.steps[step_name] step.visualize() ``` **Conclusion** Deepchecks provides a robust framework for validating data and models within ZenML pipelines, enabling users to maintain data integrity and model performance with minimal configuration. For further details, refer to the official Deepchecks documentation. ================================================== === File: docs/book/component-guide/data-validators/data-validators.md === # Data Validators Data Validators are essential tools in machine learning (ML) for ensuring data quality and monitoring model performance throughout the ML project lifecycle. They help prevent issues that can arise from poor data quality, which can lead to unreliable model outputs. ## Key Features - **Data Profiling**: Analyzes data characteristics. - **Data Integrity Testing**: Ensures data consistency and accuracy. - **Drift Detection**: Monitors for data and model drift during various pipeline stages (data ingestion, model training, evaluation, inference). - **Visualization**: Generates profiles and performance reports for analysis and corrective action. ## Usage Scenarios - **Early Development**: Log data quality and model performance. - **Regular Data Ingestion**: Conduct integrity checks to catch issues early. - **Continuous Training**: Compare new data and model performance against references. - **Batch and Online Inference**: Analyze data drift and detect discrepancies between training and serving data. 
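Before the flavor comparison that follows, here is a rough sketch of how a prebuilt validation step is typically wired between loading and training, using the Deepchecks data-integrity step configured above. The `data_loader` and `trainer` steps, their contents, and the pipeline name are illustrative assumptions.

```python
import pandas as pd
from zenml import pipeline, step
from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step


@step
def data_loader() -> pd.DataFrame:
    """Illustrative loader; replace with a real data source."""
    return pd.DataFrame({"feature": [1.0, 2.0, 3.0], "target": [0, 1, 0]})


@step
def trainer(df: pd.DataFrame) -> None:
    """Illustrative trainer; a real step would fit and return a model."""
    print(f"training on {len(df)} rows")


data_integrity_check = deepchecks_data_integrity_check_step.with_options(
    parameters=dict(dataset_kwargs=dict(label="target", cat_features=[]))
)


@pipeline
def validation_pipeline():
    df = data_loader()
    data_integrity_check(dataset=df)  # stores the check results as a pipeline artifact
    trainer(df)


if __name__ == "__main__":
    validation_pipeline()
```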
## Data Validator Flavors Different Data Validators are available, each with unique features: | Data Validator | Features | Data Types | Model Types | Notes | Flavor/Integration | |----------------|----------|------------|-------------|-------|--------------------| | [Deepchecks](deepchecks.md) | Data quality, drift, performance | Tabular: `pandas.DataFrame`, CV: `torch.utils.data.DataLoader` | Tabular: `sklearn.base.ClassifierMixin`, CV: `torch.nn.Module` | Validation tests for pipelines | `deepchecks` | | [Evidently](evidently.md) | Data quality, drift, performance | Tabular: `pandas.DataFrame` | N/A | Generates reports and visualizations | `evidently` | | [Great Expectations](great-expectations.md) | Profiling, quality | Tabular: `pandas.DataFrame` | N/A | Data testing and documentation | `great_expectations` | | [Whylogs/WhyLabs](whylogs.md) | Data drift | Tabular: `pandas.DataFrame` | N/A | Generates profiles for WhyLabs | `whylogs` | To view available Data Validator flavors, use: ```shell zenml data-validator flavor list ``` ## Implementation Steps 1. **Configuration**: Add a Data Validator to your ZenML stack. 2. **Integration**: Utilize built-in validation steps in your pipelines or use libraries directly in custom steps. 3. **Artifact Management**: Access and visualize validation artifacts in subsequent pipeline steps or retrieve them later. For detailed usage instructions, refer to the specific Data Validator documentation. ================================================== === File: docs/book/component-guide/data-validators/whylogs.md === ### Summary of Whylogs/WhyLabs Profiling with ZenML **Overview**: The whylogs/WhyLabs integration in ZenML allows for the collection and visualization of data statistics through data profiling. This integration uses the open-source library whylogs to create statistical summaries, known as whylogs profiles, which can be used for data validation, drift detection, and model performance monitoring. #### Key Features: - **Data Quality Validation**: Ensures model inputs meet quality standards. - **Data Drift Detection**: Identifies changes in model input features over time. - **Model Drift Detection**: Monitors training-serving skew and performance degradation. #### Installation: To use the whylogs Data Validator, install the integration: ```shell zenml integration install whylogs -y ``` #### Basic Setup: Register the whylogs Data Validator: ```shell zenml data-validator register whylogs_data_validator --flavor=whylogs zenml stack register custom_stack -dv whylogs_data_validator ... --set ``` For WhyLabs logging capabilities, create a ZenML Secret for authentication: ```shell zenml secret create whylabs_secret \ --whylabs_default_org_id= \ --whylabs_api_key= zenml data-validator register whylogs_data_validator --flavor=whylogs \ --authentication_secret=whylabs_secret ``` #### Custom Pipeline Steps: To enable WhyLabs logging in custom steps, set `upload_to_whylabs` to `True`: ```python @step( settings={ "data_validator": WhylogsDataValidatorSettings( enable_whylabs=True, dataset_id="model-1" ) } ) def data_loader() -> Tuple[Annotated[pd.DataFrame, "data"], Annotated[DatasetProfileView, "profile"]]: X, y = datasets.load_diabetes(return_X_y=True, as_frame=True) df = pd.merge(X, y, left_index=True, right_index=True) profile = why.log(pandas=df).profile().view() return df, profile ``` #### Using Whylogs: 1. **Standard Step**: Use `WhylogsProfilerStep` for basic profiling. 
```python from zenml.integrations.whylogs.steps import get_whylogs_profiler_step train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2") ``` 2. **Custom Data Validator**: Call methods from `WhylogsDataValidator` directly. ```python data_validator = WhylogsDataValidator.get_active_data_validator() profile = data_validator.data_profiling(dataset) ``` 3. **Direct Library Use**: Utilize whylogs directly in custom steps. ```python results = why.log(dataset) profile = results.profile() ``` #### Visualizing Profiles: Profiles can be visualized in the ZenML dashboard or in Jupyter notebooks using: ```python def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None) -> None: pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") whylogs_step = pipe.last_run.steps[step_name] whylogs_step.visualize() ``` ### Conclusion: The whylogs integration in ZenML provides a robust framework for data profiling, enabling users to monitor data quality, detect drift, and visualize statistics effectively. For detailed usage, refer to the official [whylogs documentation](https://whylogs.readthedocs.io/en/latest/index.html). ================================================== === File: docs/book/component-guide/data-validators/great-expectations.md === ### Summary of Great Expectations Integration with ZenML **Overview**: Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to run data validation in pipelines using `pandas.DataFrame`. **Key Features**: - **Data Profiling**: Automatically generates validation rules (Expectations) from input datasets. - **Data Quality**: Runs predefined or inferred validation rules against datasets. - **Data Docs**: Generates human-readable documentation of validation rules and results. **Deployment Options**: 1. **ZenML Managed Configuration**: ZenML initializes and manages Great Expectations configuration. Expectation Suites and Validation Results are stored in the ZenML Artifact Store. ```shell zenml integration install great_expectations -y zenml data-validator register ge_data_validator --flavor=great_expectations zenml stack register custom_stack -dv ge_data_validator ... --set ``` 2. **Use Existing Configuration**: Point to an existing `great_expectations.yaml` file. ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_root_dir=/path/to/my/great_expectations ``` 3. **Migrate Configuration to ZenML**: Load existing configuration using the `@` operator. ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml ``` **Advanced Configuration**: - `configure_zenml_stores`: Automatically updates configuration to use ZenML Artifact Store. - `configure_local_docs`: Configures a local Data Docs site for visualization. **Usage in Pipelines**: - **Data Profiler Step**: Automatically generates an Expectation Suite. ```python from zenml.integrations.great_expectations.steps import great_expectations_profiler_step ge_profiler_step = great_expectations_profiler_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` - **Data Validator Step**: Validates datasets against an Expectation Suite. 
```python from zenml.integrations.great_expectations.steps import great_expectations_validator_step ge_validator_step = great_expectations_validator_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` **Direct Usage of Great Expectations**: Users can directly interact with the Great Expectations library while leveraging ZenML's serialization and versioning features. ```python import great_expectations as ge from zenml.integrations.great_expectations.data_validators import GreatExpectationsDataValidator @step def create_custom_expectation_suite() -> ExpectationSuite: context = GreatExpectationsDataValidator.get_data_context() suite = context.create_expectation_suite("custom_suite") # Add expectations... context.save_expectation_suite(suite) context.build_data_docs() return suite ``` **Visualization**: Results can be visualized in the ZenML dashboard or via Jupyter notebooks using the `artifact.visualize()` method. ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline_name) last_run = pipeline.last_run validation_step = last_run.steps[step_name] validation_step.visualize() ``` This summary encapsulates the essential technical details and usage instructions for integrating Great Expectations with ZenML, enabling effective data quality checks in pipelines. ================================================== === File: docs/book/component-guide/data-validators/custom.md === ### Custom Data Validator Development in ZenML **Overview**: ZenML allows for the creation of custom Data Validators to integrate various data logging and validation libraries. However, the base abstraction for Data Validators is currently in progress, and extending them is not recommended until updates are complete. **Steps to Create a Custom Data Validator**: 1. **Class Inheritance**: Create a class that inherits from `BaseDataValidator` and override necessary abstract methods based on the library/service you want to integrate. 2. **Configuration Class**: If configuration is needed, create a class that inherits from `BaseDataValidatorConfig`. 3. **Combine Classes**: Inherit from `BaseDataValidatorFlavor` to combine the validator and configuration classes. 4. **Pipeline Integration**: Optionally, provide standard steps for easy integration into pipelines. **Registration**: Register the custom Data Validator flavor using the CLI with the following command: ```shell zenml data-validator flavor register ``` For example: ```shell zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor ``` **Best Practices**: Initialize ZenML at the root of your repository to ensure proper resolution of the flavor class. Use: ```shell zenml data-validator flavor list ``` to verify the registration. **Key Classes**: - **CustomDataValidatorFlavor**: Used upon creation of the custom flavor. - **CustomDataValidatorConfig**: Validates user-provided values during stack component registration. - **CustomDataValidator**: Engaged during the actual use of the component, allowing separation of configuration from implementation. This structure enables registration and component usage without requiring all dependencies to be installed locally. 
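As a companion to the steps listed above, a bare skeleton of the three classes might look roughly like the following. The import path, method signatures, and flavor properties should be checked against the current `BaseDataValidator` sources; treat this as a sketch rather than the canonical layout.

```python
from typing import Any, Optional, Sequence, Type

# Import path assumed; verify against your installed ZenML version.
from zenml.data_validators import (
    BaseDataValidator,
    BaseDataValidatorConfig,
    BaseDataValidatorFlavor,
)


class MyDataValidatorConfig(BaseDataValidatorConfig):
    """Values collected when registering the component (e.g. credentials)."""

    api_token: Optional[str] = None


class MyDataValidator(BaseDataValidator):
    """Implementation class; override only the checks your library supports."""

    def data_profiling(
        self,
        dataset: Any,
        comparison_dataset: Optional[Any] = None,
        profile_list: Optional[Sequence[str]] = None,
        **kwargs: Any,
    ) -> Any:
        ...  # call into your profiling/validation library here


class MyDataValidatorFlavor(BaseDataValidatorFlavor):
    """Glue class registered via `zenml data-validator flavor register`."""

    @property
    def name(self) -> str:
        return "my_flavor"

    @property
    def config_class(self) -> Type[MyDataValidatorConfig]:
        return MyDataValidatorConfig

    @property
    def implementation_class(self) -> Type[MyDataValidator]:
        return MyDataValidator
```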
================================================== === File: docs/book/component-guide/step-operators/sagemaker.md === # Summary of SageMaker Step Operator Documentation ## Overview Amazon SageMaker provides specialized compute instances for training jobs and a UI for model management. ZenML's SageMaker step operator enables the execution of individual pipeline steps on SageMaker compute instances. ## When to Use Use the SageMaker step operator if: - Your pipeline steps require resources (CPU, GPU, memory) not provided by your orchestrator. - You have access to SageMaker. ## Deployment Requirements 1. **IAM Role**: Create a role in the IAM console with `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. [Setup Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-create-execution-role). 2. **ZenML AWS Integration**: Install using: ```shell zenml integration install aws ``` 3. **Docker**: Must be installed and running. 4. **AWS Container Registry**: Required for your stack. [Setup Guide](../container-registries/aws.md#how-to-deploy-it). 5. **Remote Artifact Store**: Needed for reading/writing step artifacts. 6. **Instance Type**: Choose an instance type from the [available types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). 7. **Optional Experiment**: Group SageMaker runs. [Creation Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-create.html). ## Authentication Methods ### 1. Service Connector (Recommended) Register an AWS Service Connector and connect it to your SageMaker step operator: ```shell zenml service-connector register --type aws -i zenml step-operator register --flavor=sagemaker --role= --instance_type= zenml step-operator connect --connector zenml stack register -s ... --set ``` ### 2. Implicit Authentication - **Local Orchestrator**: Uses the `default` profile in `~/.aws/config`. - **Remote Orchestrator**: Must authenticate to AWS and assume the specified IAM role. Example for implicit authentication: ```shell zenml step-operator register --flavor=sagemaker --role= --instance_type= zenml stack register -s ... --set python run.py # Authenticates with `default` profile ``` ## Using the SageMaker Step Operator To execute a step in SageMaker, specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ## Additional Configuration You can customize the SageMaker step operator with `SagemakerStepOperatorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for settings. ## Enabling CUDA for GPU For GPU usage, follow the [GPU training instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration. ================================================== === File: docs/book/component-guide/step-operators/kubernetes.md === ### Kubernetes Step Operator Overview ZenML's Kubernetes step operator enables the execution of individual steps in Kubernetes pods, particularly useful when pipeline steps require additional computing resources not available from the orchestrator. #### When to Use - Steps require extra CPU, GPU, or memory resources. 
- Access to a Kubernetes cluster is available. #### Deployment Requirements 1. **Kubernetes Cluster**: Must be deployed using a cloud provider or custom infrastructure. 2. **ZenML Kubernetes Integration**: Install via: ```shell zenml integration install kubernetes ``` 3. **Docker or Remote Image Builder**: Required for building images. 4. **Remote Artifact Store**: Necessary for reading/writing artifacts. #### Recommended Setup - Set up a **Service Connector** for connecting to the Kubernetes cluster, especially for cloud-managed clusters (AWS, GCP, Azure). #### Registering the Step Operator 1. **Using Service Connector**: ```shell zenml step-operator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml step-operator connect --connector ``` 2. **Using `kubectl` Client**: ```shell zenml step-operator register --flavor=kubernetes --kubernetes_context= ``` #### Updating the Active Stack ```shell zenml stack update -s ``` #### Defining Steps Specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` #### Interacting with Pods Use `kubectl` for debugging. Pods are labeled with: - `run`: ZenML run name - `pipeline`: ZenML pipeline name To delete pods related to a specific pipeline: ```shell kubectl delete pod -n zenml -l pipeline=kubernetes_example_pipeline ``` #### Additional Configuration Use `KubernetesStepOperatorSettings` for advanced configurations: - **Pod Settings**: Node selectors, labels, affinity, tolerations, image pull secrets. - **Service Account**: Specify the service account for pods. Example Configuration: ```python from zenml.integrations.kubernetes.flavors import KubernetesStepOperatorSettings kubernetes_settings = KubernetesStepOperatorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": { "requests": {"cpu": "2", "memory": "4Gi"}, "limits": {"cpu": "4", "memory": "8Gi"}, }, "service_account_name": "zenml-pipeline-runner" } ) @step(settings={"step_operator": kubernetes_settings}) def my_kubernetes_step(): ... ``` #### GPU Configuration To run steps on GPU, follow specific instructions to enable CUDA for full acceleration. For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.flavors.kubernetes_step_operator_flavor.KubernetesStepOperatorSettings) and the [Kubernetes step operator documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.step_operators.kubernetes_step_operator.KubernetesStepOperator). ================================================== === File: docs/book/component-guide/step-operators/modal.md === ### Modal Step Operator Overview **Modal** is a cloud platform designed for efficient code execution, particularly for tasks involving Docker image building and hardware provisioning. The **ZenML Modal step operator** allows users to run individual steps on Modal compute instances. #### When to Use Utilize the Modal step operator when you need: - Fast execution for resource-intensive steps (CPU, GPU, memory). - Precise hardware specifications for each step. - Access to Modal. #### Deployment Steps 1. **Sign Up**: Create a Modal account [here](https://modal.com/signup). 2. **Install CLI**: Run: ```shell pip install modal modal setup ``` 3. 
**Requirements**: - ZenML `modal` integration: ```shell zenml integration install modal ``` - Docker installed and running. - A cloud artifact store and a cloud container registry in your stack. #### Registering the Step Operator To register the step operator, use: ```shell zenml step-operator register --flavor=modal zenml stack update -s ... ``` #### Using the Step Operator Specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML will create a Docker image for execution on Modal. #### Additional Configuration Define hardware requirements using `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml.integrations.modal.flavors import ModalStepOperatorSettings modal_settings = ModalStepOperatorSettings(gpu="A100") resource_settings = ResourceSettings(cpu=2, memory="32GB") @step( step_operator="modal", settings={ "step_operator": modal_settings, "resources": resource_settings } ) def my_modal_step(): ... ``` - The `cpu` parameter is a soft minimum limit; actual usage may exceed this. - Example cost calculation: 2 CPUs and 32GB memory would cost approximately $1.03/hour. This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu). **Note**: Some settings (region, cloud provider) are exclusive to Modal Enterprise and Team plans. It's advisable to use broader settings to prevent execution failures, with detailed error messages provided by Modal for troubleshooting. For more on region selection, see the [Modal docs](https://modal.com/docs/guide/region-selection). ================================================== === File: docs/book/component-guide/step-operators/spark-kubernetes.md === ### Summary of Spark Step Operators in ZenML #### Overview The `spark` integration in ZenML provides two key step operators for executing tasks on Spark: 1. **SparkStepOperator**: Base class for all Spark-related step operators. 2. **KubernetesSparkStepOperator**: Launches ZenML steps as Spark applications on a Kubernetes cluster. #### SparkStepOperator Configuration The configuration for `SparkStepOperator` includes: - **master**: URL for the Spark cluster (supports Kubernetes, Mesos, YARN). - **deploy_mode**: Can be 'cluster' (default) or 'client', determining where the driver node runs. - **submit_kwargs**: JSON string for additional Spark parameters. **Code Example:** ```python class SparkStepOperatorConfig(BaseStepOperatorConfig): master: str deploy_mode: str = "cluster" submit_kwargs: Optional[Dict[str, Any]] = None ``` #### Implementation The `SparkStepOperator` includes methods for configuring resources, backends, I/O, and launching Spark jobs: - `_resource_configuration`: Maps ZenML resource settings to Spark. - `_backend_configuration`: Configures Spark for specific cluster managers. - `_io_configuration`: Sets up input/output sources. - `_additional_configuration`: Appends user-defined parameters. - `_launch_spark_job`: Executes the Spark job using `spark-submit`. **Code Example:** ```python class SparkStepOperator(BaseStepOperator): def launch(self, info: "StepRunInfo", entrypoint_command: List[str]) -> None: """Launches the step on Spark.""" ``` #### KubernetesSparkStepOperator This operator extends `SparkStepOperator` for Kubernetes, adding: - **namespace**: Kubernetes namespace for driver and executor pods. 
- **service_account**: Service account for Spark components. **Code Example:** ```python class KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig): namespace: Optional[str] = None service_account: Optional[str] = None ``` The `_backend_configuration` method is tailored for Kubernetes, building and pushing Docker images. #### Usage Use the Spark step operator for large data processing and distributed computing. To deploy `KubernetesSparkStepOperator`, set up: - A remote ZenML server. - A Kubernetes cluster (e.g., AWS EKS). **EKS Setup Steps:** 1. Create IAM roles for EKS. 2. Set up the EKS cluster. 3. Create a Docker image for Spark drivers and executors using the `docker-image-tool`. **RBAC Configuration:** Create a `rbac.yaml` file for Kubernetes permissions and apply it. **Code Example:** ```yaml apiVersion: v1 kind: Namespace metadata: name: spark-namespace --- apiVersion: v1 kind: ServiceAccount metadata: name: spark-service-account namespace: spark-namespace ``` #### Registering the Step Operator To use the `KubernetesSparkStepOperator`, install the Spark integration and register the operator: ```bash zenml integration install spark zenml step-operator register spark_step_operator \ --flavor=spark-kubernetes \ --master=k8s://$EKS_API_SERVER_ENDPOINT \ --namespace= \ --service_account= ``` #### Running Steps Define steps using the `@step` decorator: ```python @step(step_operator=) def step_on_spark(...) -> ...: ... ``` #### Additional Configuration For more configurations, refer to the `SparkStepOperatorSettings` documentation. This summary encapsulates the essential details and code snippets necessary for understanding and utilizing the Spark step operators in ZenML. ================================================== === File: docs/book/component-guide/step-operators/azureml.md === ### Summary: Executing Individual Steps in AzureML with ZenML **Overview**: ZenML integrates with AzureML to run training jobs on specialized compute instances. The AzureML step operator allows submission of individual pipeline steps to AzureML. #### When to Use AzureML Step Operator - If pipeline steps require compute resources not provided by your orchestrator. - If you have access to AzureML (for other cloud providers, consider SageMaker or Vertex). #### Deployment Steps 1. Create an Azure Machine Learning workspace, including a container registry and storage account. 2. (Optional) Create a compute instance or cluster in AzureML. 3. (Optional) Set up a Service Principal for authentication if using a service connector. #### Requirements - Install ZenML Azure integration: ```shell zenml integration install azure ``` - Ensure Docker is installed and running. - Set up an Azure container registry and artifact store. - Create an AzureML workspace. #### Authentication Methods 1. **Service Connector** (recommended): - Register a service connector and connect it to the AzureML step operator. ```shell zenml service-connector register --type azure -i zenml step-operator register --flavor=azureml --subscription_id= --resource_group= --workspace_name= zenml step-operator connect --connector zenml stack register -s ... --set ``` 2. **Implicit Authentication**: - For local orchestrators, ZenML uses Azure CLI for authentication. - For remote orchestrators, ensure they can authenticate to Azure. #### Using the AzureML Step Operator To execute a step in AzureML, specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) 
-> ...: """Train a model.""" ``` ZenML builds a Docker image for the step execution. #### Configuration Use `AzureMLStepOperatorSettings` to configure compute resources: - **Serverless Compute**: Default mode. - **Compute Instance**: Requires `compute_name`. - **Compute Cluster**: Also requires `compute_name`. Example configuration for a compute instance: ```python from zenml.integrations.azure.flavors import AzureMLStepOperatorSettings azureml_settings = AzureMLStepOperatorSettings( mode="compute-instance", compute_name="MyComputeInstance", compute_size="Standard_NC6s_v3", ) @step(settings={"step_operator": azureml_settings}) def my_azureml_step(): # YOUR STEP CODE ... ``` #### GPU Support To run steps on GPU, follow additional customization instructions to enable CUDA for full acceleration. For more details, refer to the [AzureMLStepOperatorSettings SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.flavors.azureml_step_operator_flavor.AzureMLStepOperatorSettings). ================================================== === File: docs/book/component-guide/step-operators/step-operators.md === ### Step Operators Overview **Purpose**: The step operator allows execution of individual pipeline steps in specialized environments optimized for specific workloads, such as those requiring GPUs or distributed processing frameworks like Spark. **Comparison to Orchestrators**: While the orchestrator is essential for executing all pipeline steps in order and managing scheduling, the step operator is used for executing specific steps in environments that the orchestrator cannot provide. ### When to Use Step Operators Use a step operator when pipeline steps need resources unavailable in the orchestrator's runtime environment. For example, if a step requires GPU resources for training a computer vision model but the orchestrator runs on a non-GPU Kubernetes cluster, a step operator like SageMaker, Vertex, or AzureML should be used. ### Available Step Operator Flavors ZenML provides the following step operators for major cloud providers: | Step Operator | Flavor | Integration | Notes | |----------------|--------------|-------------|-------------------------------------| | AzureML | `azureml` | `azure` | Executes steps using AzureML | | Kubernetes | `kubernetes`| `kubernetes`| Executes steps using Kubernetes Pods| | Modal | `modal` | `modal` | Executes steps using Modal | | SageMaker | `sagemaker` | `aws` | Executes steps using SageMaker | | Spark | `spark` | `spark` | Executes steps in a distributed manner using Spark on Kubernetes | | Vertex | `vertex` | `gcp` | Executes steps using Vertex AI | | Custom | _custom_ | | Allows for custom implementation | To view available flavors, use: ```shell zenml step-operator flavor list ``` ### How to Use Step Operators You do not need to interact directly with ZenML step operators in your code. Simply specify the desired step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def my_step(...) -> ...: ... ``` #### Specifying Resources For steps requiring additional hardware resources, specify them accordingly. For GPU usage, follow the instructions to enable CUDA for full acceleration. ### Important Notes - Ensure to follow specific guidelines for GPU-backed hardware to enable CUDA. - Refer to the documentation for details on specifying resources and configurations. 
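As a concrete illustration of the pattern above, here is a minimal sketch of a step that targets a registered step operator together with explicit resource requests. The `"vertex"` name and the resource values are illustrative, not prescriptive:

```python
from zenml import step
from zenml.config import ResourceSettings


@step(
    step_operator="vertex",  # illustrative: name of a step operator registered in the active stack
    settings={"resources": ResourceSettings(cpu_count=4, gpu_count=1, memory="16GB")},
)
def train_model() -> None:
    """Runs on hardware provisioned by the step operator rather than by the orchestrator."""
    ...
```

How (and whether) these values are honored depends on the flavor; for example, the Modal section above notes that its CPU setting acts as a soft minimum rather than a hard limit.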
================================================== === File: docs/book/component-guide/step-operators/vertex.md === ### Summary: Executing Steps in Vertex AI with ZenML **Overview**: Vertex AI provides specialized compute instances for training jobs and a UI for model management. ZenML's Vertex AI step operator allows submission of individual steps to Vertex AI. #### When to Use - Use the Vertex step operator if: - Your pipeline steps require resources (CPU, GPU, memory) not available from your orchestrator. - You have access to Vertex AI. #### Deployment Steps 1. **Enable Vertex AI**: [Enable here](https://console.cloud.google.com/vertex-ai). 2. **Create a Service Account**: Assign `roles/aiplatform.admin` and `roles/storage.admin`. #### Usage Requirements - Install ZenML GCP integration: ```shell zenml integration install gcp ``` - Ensure Docker is installed and running. - Enable Vertex AI and have a service account file. - Set up a GCR container registry. - Optionally specify a machine type (default: `n1-standard-4`). - Configure a remote artifact store for shared access to step artifacts. #### Authentication Methods 1. **Using gcloud CLI**: ```shell gcloud auth login zenml step-operator register --flavor=vertex --project= --region= ``` 2. **Service Account Key File**: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account_path= ``` 3. **GCP Service Connector (recommended)**: ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@ zenml step-operator register --flavor=vertex --region= zenml step-operator connect --connector ``` #### Update Active Stack Add the step operator to your active stack: ```shell zenml stack update -s ``` #### Define Steps Use the registered step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` #### Additional Configuration Specify service account, network, and reserved IP ranges: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account= --network= --reserved_ip_ranges= ``` #### Custom Settings Pass `VertexStepOperatorSettings` for further customization: ```python from zenml import step from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings @step(step_operator=, settings={"step_operator": VertexStepOperatorSettings( accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1, machine_type="n1-standard-2", disk_type="pd-ssd", disk_size_gb=100, )}) def trainer(...) -> ...: """Train a model.""" ``` #### GPU Configuration For GPU usage, follow the instructions to enable CUDA for full acceleration. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.flavors.vertex_step_operator_flavor.VertexStepOperatorSettings). ================================================== === File: docs/book/component-guide/step-operators/custom.md === # Custom Step Operator Development in ZenML ## Overview This documentation provides guidance on developing a custom step operator in ZenML. It is recommended to first review the general guide on writing custom component flavors in ZenML for foundational knowledge. ## Base Abstraction The `BaseStepOperator` is the abstract class for implementing step operators, which run pipeline steps in separate environments. 
Key components include: ```python from abc import ABC, abstractmethod from typing import List, Type from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig, Flavor from zenml.config.step_run_info import StepRunInfo class BaseStepOperatorConfig(StackComponentConfig): """Base config for step operators.""" class BaseStepOperator(StackComponent, ABC): """Base class for all ZenML step operators.""" @abstractmethod def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None: """Executes a step with the given command.""" ``` ## Creating a Custom Step Operator To create a custom flavor for a step operator, follow these steps: 1. **Subclass `BaseStepOperator`**: Implement the `launch` method, which prepares the execution environment and runs the entrypoint command. 2. **Handle Resources**: If applicable, manage resources defined in `info.config.resource_settings`. 3. **Configuration Class**: Create a class inheriting from `BaseStepOperatorConfig` for any custom parameters. 4. **Flavor Class**: Inherit from `BaseStepOperatorFlavor`, providing a name for the flavor. ### Registering the Flavor Register your custom flavor using the CLI: ```shell zenml step-operator flavor register ``` For example: ```shell zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor ``` ### Listing Available Flavors After registration, verify the new flavor: ```shell zenml step-operator flavor list ``` ## Important Considerations - The `CustomStepOperatorFlavor` is used during flavor creation. - The `CustomStepOperatorConfig` is utilized for validating user input during registration. - The `CustomStepOperator` is engaged when the component is executed, allowing for separation of configuration and implementation. ## Enabling GPU Support To run steps on GPU, follow the instructions to enable CUDA for full acceleration. This involves additional settings customization. For further details, refer to the complete SDK documentation and relevant guides. ================================================== === File: docs/book/component-guide/alerters/slack.md === ### Slack Alerter Documentation Summary The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. #### Setup Instructions 1. **Create a Slack App**: - Set up a Slack workspace and create a Slack App with a bot. - Grant the following permissions in the `OAuth & Permissions` tab: - `chat:write` - `channels:read` - `channels:history` - Invite the app to your desired channel using `/invite` or channel settings. 2. 
**Register Slack Alerter in ZenML**: - Install the Slack integration: ```shell zenml integration install slack -y ``` - Create a secret and register the alerter: ```shell zenml secret create slack_token --oauth_token= zenml alerter register slack_alerter \ --flavor=slack \ --slack_token={{slack_token.oauth_token}} \ --slack_channel_id= ``` #### Usage - **Direct Methods**: Use `post()` and `ask()` methods from the active alerter: ```python from zenml import pipeline, step from zenml.client import Client @step def post_statement() -> None: Client().active_stack.alerter.post("Step finished!") @step def ask_question() -> bool: return Client().active_stack.alerter.ask("Should I continue?") @pipeline(enable_cache=False) def my_pipeline(): post_statement() ask_question() if __name__ == "__main__": my_pipeline() ``` - **Custom Settings**: You can specify a different channel ID during runtime: ```python @step(settings={"alerter": {"slack_channel_id": }}) def post_statement() -> None: Client().active_stack.alerter.post("Posting to another channel!") ``` - **Using `SlackAlerterParameters` and `SlackAlerterPayload`**: Customize messages with additional information: ```python from zenml import pipeline, step, get_step_context from zenml.client import Client from zenml.integrations.slack.alerters.slack_alerter import ( SlackAlerterParameters, SlackAlerterPayload ) @step def post_statement() -> None: params = SlackAlerterParameters( payload=SlackAlerterPayload( pipeline_name=get_step_context().pipeline.name, step_name=get_step_context().step_run.name, stack_name=Client().active_stack.name, ), ) Client().active_stack.alerter.post( message="This is a message with additional information about your pipeline.", params=params ) ``` - **Predefined Steps**: Use built-in steps for simplicity: ```python from zenml import pipeline from zenml.integrations.slack.steps.slack_alerter_post_step import slack_alerter_post_step from zenml.integrations.slack.steps.slack_alerter_ask_step import slack_alerter_ask_step @pipeline(enable_cache=False) def my_pipeline(): slack_alerter_post_step("Posting a statement.") slack_alerter_ask_step("Asking a question. Should I continue?") if __name__ == "__main__": my_pipeline() ``` For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). ================================================== === File: docs/book/component-guide/alerters/alerters.md === # Alerters in ZenML **Alerters** enable automated message sending to chat services (e.g., Slack, Discord) from ZenML pipelines, facilitating immediate notifications for failures and monitoring. ## Available Alerter Flavors Currently supported alerters: - **SlackAlerter**: Integrates with Slack channels. - **DiscordAlerter**: Integrates with Discord channels. - **Custom Implementation**: Allows building custom alerters for other chat services. | Alerter | Flavor | Integration | Notes | |---------|---------|-------------|-------------------------------------------| | Slack | `slack` | `slack` | Interacts with a Slack channel | | Discord | `discord`| `discord` | Interacts with a Discord channel | | Custom | _custom_| | Extend the alerter abstraction | To view available alerter flavors, use: ```shell zenml alerter flavor list ``` ## Usage 1. **Register an Alerter**: ```shell zenml alerter register ... ``` 2. **Add to Stack**: ```shell zenml stack register ... -al ``` 3. 
**Import and Use**: Import standard steps from the alerter integration and utilize them in your pipelines. For more details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/component-guide/alerters/discord.md === ### Discord Alerter Documentation Summary The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. It includes two main steps: 1. **`discord_alerter_post_step`**: Posts a message and returns success status. 2. **`discord_alerter_ask_step`**: Posts a message and waits for user feedback, returning `True` only if the user approves. #### Use Cases - Immediate notifications for failures (e.g., model performance issues). - Human-in-the-loop integration for critical steps (e.g., model deployments). ### Requirements To use the `DiscordAlerter`, install the Discord integration: ```shell zenml integration install discord -y ``` ### Setting Up a Discord Bot 1. Create a Discord workspace and channel. 2. Create a Discord App with a bot and obtain the `<DISCORD_TOKEN>`. 3. Ensure the bot has permissions to send and receive messages. ### Registering a Discord Alerter Register the `discord` alerter with the following command: ```shell zenml alerter register discord_alerter \ --flavor=discord \ --discord_token=<DISCORD_TOKEN> \ --default_discord_channel_id=<DISCORD_CHANNEL_ID> ``` Add it to your stack: ```shell zenml stack register ... -al discord_alerter ``` ### Using the Discord Alerter Import the steps and use them in your pipeline. A formatter step is typically needed to generate the message. Example usage: ```python from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step from zenml import step, pipeline @step def my_formatter_step(artifact) -> str: return f"Here is my artifact {artifact}!" @pipeline def my_pipeline(...): ... message = my_formatter_step(artifact_to_be_communicated) approved = discord_alerter_ask_step(message) ... # Conditional behavior based on `approved` if __name__ == "__main__": my_pipeline() ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). ================================================== === File: docs/book/component-guide/alerters/custom.md === ### Custom Alerter Development in ZenML This documentation outlines the process for creating a custom alerter in ZenML, which involves implementing specific methods and configuring the alerter. #### Base Abstraction The base class for alerters, `BaseAlerter`, defines two abstract methods: - `post(message: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message to a chat service, returning `True` if successful. - `ask(question: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message and waits for approval, returning `True` only if approved. ```python class BaseAlerter(StackComponent, ABC): def post(self, message: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True def ask(self, question: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True ``` #### Steps to Create a Custom Alerter 1. **Inherit from BaseAlerter**: Implement the `post()` and `ask()` methods in your custom class. ```python class MyAlerter(BaseAlerter): def post(self, message: str, config: Optional[BaseAlerterStepParameters]) -> bool: ... return True
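        # ask() should post the question, wait for a user's reply in the chat service, and return True only if the reply is an approval.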
def ask(self, question: str, config: Optional[BaseAlerterStepParameters]) -> bool: ... return True ``` 2. **Create a Configuration Class** (optional): Define parameters for your alerter. ```python class MyAlerterConfig(BaseAlerterConfig): my_param: str ``` 3. **Define a Flavor Class**: Combine the implementation and configuration. ```python class MyAlerterFlavor(BaseAlerterFlavor): @property def name(self) -> str: return "my_alerter" @property def config_class(self) -> Type[StackComponentConfig]: from my_alerter_config import MyAlerterConfig return MyAlerterConfig @property def implementation_class(self) -> Type[StackComponent]: from my_alerter import MyAlerter return MyAlerter ``` #### Registering the Custom Alerter Register your new flavor using the CLI: ```shell zenml alerter flavor register ``` For example: ```shell zenml alerter flavor register flavors.my_flavor.MyAlerterFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - After registration, list available alerter flavors: ```shell zenml alerter flavor list ``` #### Workflow Integration - The `MyAlerterFlavor` is used during flavor creation. - The `MyAlerterConfig` is utilized for validating user input during stack component registration. - The `MyAlerter` class is invoked when the component is in use, allowing for separation of configuration and implementation. This structure supports modular development and enables the registration of flavors and components independently of their dependencies. ================================================== === File: docs/book/component-guide/artifact-stores/azure.md === # Azure Blob Storage with ZenML The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store artifacts. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. ## When to Use Azure Artifact Store - **Team Collaboration**: Share pipeline results with team members or stakeholders. - **Remote Components**: Integrate with remote orchestrators (e.g., Kubeflow, Kubernetes). - **Storage Limitations**: Overcome local storage constraints. - **Production Needs**: Handle large-scale pipelines. ## Deployment Steps 1. **Install Azure Integration**: ```shell zenml integration install azure -y ``` 2. **Register Azure Artifact Store**: - The root path URI must point to an Azure Blob Storage container in the format `az://container-name` or `abfs://container-name`. - Example registration: ```shell zenml artifact-store register az_store -f azure --path=az://container-name zenml stack register custom_stack -a az_store ... --set ``` ## Authentication Methods - **Implicit Authentication**: Quick local setup using environment variables. - **Azure Service Connector**: Recommended for better security and integration with other Azure components. 
### Implicit Authentication Setup Set environment variables: - For account key: ```shell export AZURE_STORAGE_ACCOUNT_NAME= export AZURE_STORAGE_ACCOUNT_KEY= ``` - For service principal: ```shell export AZURE_STORAGE_CLIENT_ID= export AZURE_STORAGE_CLIENT_SECRET= export AZURE_STORAGE_TENANT_ID= ``` ### Azure Service Connector Setup Register a service connector: ```shell zenml service-connector register --type azure -i ``` Non-interactive example: ```shell zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type blob-container --resource-id ``` ### Connect Artifact Store to Service Connector ```shell zenml artifact-store connect -i ``` Non-interactive version: ```shell zenml artifact-store connect --connector ``` ## Using ZenML Secrets for Authentication Create a ZenML secret to store Azure credentials: ```shell zenml secret create az_secret --account_name='' --account_key='' ``` Register the artifact store using the secret: ```shell zenml artifact-store register az_store -f azure --path='az://your-container' --authentication_secret=az_secret ``` ## Usage Once set up, the Azure Artifact Store functions like any other ZenML artifact store. For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). ================================================== === File: docs/book/component-guide/artifact-stores/s3.md === ### Summary of Storing Artifacts in AWS S3 Bucket with ZenML #### Overview The S3 Artifact Store is an integration in ZenML that utilizes AWS S3 or compatible services (like MinIO or Ceph RGW) for artifact storage. It is ideal for projects requiring shared access, remote components, or scalable storage solutions. #### Use Cases Consider the S3 Artifact Store when: - Sharing pipeline results with team members. - Integrating with remote orchestration tools (e.g., Kubeflow). - Needing more storage than local machines can provide. - Running production-grade MLOps pipelines. #### Deployment Steps 1. **Install S3 Integration**: ```shell zenml integration install s3 -y ``` 2. **Register S3 Artifact Store**: - The mandatory configuration is the S3 bucket URI: `s3://bucket-name`. - Example registration: ```shell zenml artifact-store register s3_store -f s3 --path=s3://bucket-name zenml stack register custom_stack -a s3_store ... --set ``` 3. **Authentication**: - **Implicit Authentication**: Quick setup using local AWS CLI credentials. Requires AWS CLI installed. - **AWS Service Connector** (recommended): Provides better security and access management. ```shell zenml service-connector register --type aws -i zenml service-connector register --type aws --resource-type s3-bucket --resource-name --auto-configure ``` 4. **Connect Artifact Store to AWS Service Connector**: ```shell zenml artifact-store connect -i ``` 5. **Using ZenML Secrets**: - Store AWS credentials in a ZenML secret for better management: ```shell zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret ``` #### Advanced Configuration You can customize the S3 Artifact Store with advanced options: - `client_kwargs`: Pass parameters like `endpoint_url` and `region_name`. - `config_kwargs`: Advanced parameters for client configuration. - `s3_additional_kwargs`: Parameters for S3 API calls (e.g., `ServerSideEncryption`). 
Example of advanced registration: ```shell zenml artifact-store register minio_store -f s3 --path='s3://minio_bucket' --authentication_secret=s3_secret --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}' ``` #### Usage Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. For further details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/local.md === # Local Artifact Store in ZenML The Local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) that utilizes a folder on your local filesystem for storing artifacts. ### Use Cases - Ideal for beginners or evaluations, as it requires no additional resources or managed services (e.g., Amazon S3, Google Cloud Storage). - Suitable for experimental phases where sharing artifacts is unnecessary. ### Limitations - Not intended for production use; artifacts cannot be shared across teams or accessed from other machines. - Lacks features like high availability, scalability, and backup. - Compatible only with local components: - **Orchestrators**: Local Orchestrator, Local Kubeflow, Local Kubernetes. - **Model Deployers**: Local Model Deployers (e.g., MLflow). - **Step Operators**: Not compatible due to their remote execution nature. Transitioning to a team or production setting requires replacing the Local Artifact Store with a more suitable flavor without code changes. ### Deployment The default ZenML stack includes a Local Artifact Store: ```shell $ zenml stack list $ zenml artifact-store describe ``` Artifacts are stored in a local folder, as indicated by the `PATH` in the output. You can create additional Local Artifact Stores: ```shell # Register the local artifact store zenml artifact-store register custom_local --flavor local # Register and set a stack with the new artifact store zenml stack register custom_stack -o default -a custom_local --set ``` **Note**: The Local Artifact Store accepts a `path` parameter during registration, but using the default path is recommended to avoid issues with local stack components. For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). ### Usage Using the Local Artifact Store is similar to other Artifact Store flavors, with the main difference being local storage. ================================================== === File: docs/book/component-guide/artifact-stores/gcp.md === ### Google Cloud Storage (GCS) Artifact Store in ZenML The GCS Artifact Store is a component of the GCP ZenML integration that uses Google Cloud Storage to store ZenML artifacts. It is suitable for projects needing shared storage, remote components, or production-grade MLOps. #### When to Use GCS Artifact Store - **Team Collaboration**: Share pipeline results with team members or stakeholders. - **Remote Components**: Integrate with remote orchestrators like Kubeflow or Kubernetes. - **Storage Limitations**: Overcome local storage constraints. - **Scalability**: Handle production-scale pipeline demands. #### Deployment Steps 1. **Install GCP Integration**: ```shell zenml integration install gcp -y ``` 2. 
**Register GCS Artifact Store**: - **URI Format**: `gs://bucket-name` - **Command**: ```shell zenml artifact-store register gs_store -f gcp --path=gs://bucket-name zenml stack register custom_stack -a gs_store ... --set ``` #### Authentication Authentication is necessary for GCS Artifact Store integration: - **Implicit Authentication**: Quick setup using local GCP CLI credentials. Requires Google Cloud CLI installed. - **GCP Service Connector (Recommended)**: Provides better security and configuration management. Register using: ```shell zenml service-connector register --type gcp -i ``` Or for a specific bucket: ```shell zenml service-connector register --type gcp --resource-type gcs-bucket --resource-name --auto-configure ``` #### Connecting GCS Artifact Store After setting up authentication, connect the GCS Artifact Store: ```shell zenml artifact-store connect -i ``` Or non-interactively: ```shell zenml artifact-store connect --connector ``` #### Using GCS Artifact Store Once registered and connected, use the GCS Artifact Store in your ZenML Stack: ```shell zenml stack register -a ... --set ``` #### GCP Credentials For enhanced security, create a GCP Service Account Key and store it in a ZenML Secret: ```shell zenml secret create gcp_secret --token=@path/to/service_account_key.json zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret ``` #### Additional Resources For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store). Using the GCS Artifact Store is similar to other Artifact Store flavors, enabling seamless integration into ZenML pipelines. ================================================== === File: docs/book/component-guide/artifact-stores/artifact-stores.md === # Artifact Stores in ZenML ## Overview The Artifact Store is a critical component of the ZenML MLOps stack, serving as a data persistence layer for artifacts generated by machine learning pipelines, such as datasets and models. ZenML automatically serializes and saves these artifacts, enabling features like caching, provenance tracking, and reproducibility. ## Key Points - **Artifact Storage**: Artifacts are stored based on the implementation of the associated **Materializer**, which handles serialization and deserialization. - **Custom Storage**: Users can create custom Materializers or extend the Artifact Store abstraction to support different storage backends. - **Stack Component**: The Artifact Store must be registered as part of your ZenML stack. ## Artifact Store Flavors ZenML provides several built-in Artifact Store flavors: | Artifact Store | Flavor | Integration | URI Schema(s) | Notes | |----------------|--------|-------------|----------------|-------| | Local | `local`| _built-in_ | None | Default store for local filesystem. | | Amazon S3 | `s3` | `s3` | `s3://` | Uses AWS S3 for storage. | | Google Cloud | `gcp` | `gcp` | `gs://` | Uses Google Cloud Storage. | | Azure | `azure`| `azure` | `abfs://`, `az://` | Uses Azure Blob Storage. | | Custom | _custom_| | _custom_ | User-defined implementation. | To list available flavors: ```shell zenml artifact-store flavor list ``` ## Configuration Each Artifact Store requires a `path` attribute, which is a URI pointing to the root storage location. 
For example, to register an S3 store: ```shell zenml artifact-store register s3_store -f s3 --path s3://my_bucket ``` ## Usage The Artifact Store provides low-level object storage services but can often be used indirectly through higher-level APIs. Key functionalities include: - Automatically saving pipeline artifacts by returning objects from pipeline steps. - Retrieving artifacts after pipeline runs. ### Low-Level API The Artifact Store API mimics standard file system operations. Access can be done through: - `zenml.io.fileio`: For operations like `open`, `copy`, `rename`, etc. - `zenml.utils.io_utils`: For higher-level utilities to transfer objects between the Artifact Store and local storage. ### Example Code **Writing to the Artifact Store:** ```python import os from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_contents = "example artifact" artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") fileio.makedirs(os.path.dirname(artifact_uri)) with fileio.open(artifact_uri, "w") as f: f.write(artifact_contents) ``` **Reading from the Artifact Store:** ```python from zenml.client import Client from zenml.utils import io_utils root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") artifact_contents = io_utils.read_file_contents_as_string(artifact_uri) ``` **Using Temporary Files:** ```python import os import tempfile from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json") with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=True) as f: # Save to temporary file and copy to artifact store fileio.copy(f.name, artifact_uri) ``` ## Conclusion The Artifact Store is essential for managing artifacts in ZenML, providing flexibility for various storage solutions and seamless integration with pipeline operations. ================================================== === File: docs/book/component-guide/artifact-stores/custom.md === ### Summary: Developing a Custom Artifact Store in ZenML ZenML provides built-in Artifact Store implementations for local and cloud storage. To create a custom Artifact Store, follow these steps: 1. **Familiarize with ZenML Components**: Review the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) to understand component flavor concepts. 2. **Base Abstraction**: The `BaseArtifactStore` class is central to the ZenML stack. Key points include: - **Configuration**: Requires a `path` parameter for the artifact store's root directory. - **Supported Schemes**: Each subclass must define `SUPPORTED_SCHEMES` for file path schemes (e.g., `{"abfs://", "az://"}` for Azure). - **Abstract Methods**: Implement the following methods in subclasses: `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`. 
Example implementation: ```python from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig from typing import Any, List, Set, Type, Union PathType = Union[bytes, str] class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: Set[str] class BaseArtifactStore(StackComponent): @abstractmethod def open(self, name: PathType, mode: str = "r") -> Any: pass @abstractmethod def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: pass # Other abstract methods... ``` 3. **Custom Implementation**: - Inherit from `BaseArtifactStore` and implement the abstract methods. - Inherit from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`. - Combine both by inheriting from `BaseArtifactStoreFlavor`. 4. **Registering the Custom Store**: Use the CLI to register your custom flavor: ```shell zenml artifact-store flavor register ``` Example: ```shell zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor ``` 5. **Using the Custom Store**: Once registered, it will be available in the list of flavors: ```shell zenml artifact-store flavor list ``` 6. **Workflow Integration**: - The `CustomArtifactStoreFlavor` is used during flavor creation. - The `CustomArtifactStoreConfig` validates user inputs during stack registration. - The `CustomArtifactStore` is utilized when the component is in use. 7. **Artifact Visualizations**: Ensure your custom store can authenticate to the backend without local dependencies. Install necessary package dependencies in the deployment environment for visualization support. For complete implementation details and additional documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). ================================================== === File: docs/book/component-guide/feature-stores/feature-stores.md === ### Feature Stores Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between the two. They provide a centralized registry for features and feature schemas, catering to different access needs for batch and real-time data, thereby addressing the issue of train-serve skew. #### When to Use Feature stores are optional in the ZenML Stack and are used to: - Productionalize new features - Reuse existing features across pipelines and models - Ensure consistency between training and serving data - Maintain a central registry of features and schemas #### Available Feature Stores ZenML integrates with various feature stores, notably: | Feature Store | Flavor | Integration | Notes | |-----------------------------|---------|-------------|--------------------------------------------| | [FeastFeatureStore](feast.md) | `feast` | `feast` | Connects ZenML with existing Feast | | [Custom Implementation](custom.md) | _custom_ | | Allows for custom feature store implementations | To view available feature store flavors, use: ```shell zenml feature-store flavor list ``` #### How to Use The feature store implementation is based on the Feast integration. For detailed usage, refer to the [Feast documentation](feast.md#how-do-you-use-it). 
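Whichever flavor is registered, the feature store is reached through the active stack inside a step. A minimal access sketch (the guard mirrors the fuller Feast example in the next section):

```python
from zenml import step
from zenml.client import Client


@step
def check_feature_store() -> str:
    """Looks up the feature store component registered in the active stack."""
    feature_store = Client().active_stack.feature_store
    if feature_store is None:
        raise RuntimeError("No feature store is registered in the active stack.")
    return feature_store.name
```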
================================================== === File: docs/book/component-guide/feature-stores/feast.md === ### Summary of Managing Data in Feast Feature Stores **Feast Overview** Feast (Feature Store) is designed for managing and serving machine learning features to production models, supporting both low-latency online and offline batch data access. **Use Cases** - **Training:** Access offline/batch data for model training. - **Inference:** Access online data for real-time predictions. **Deployment** To integrate Feast with ZenML, ensure you have a Feast feature store set up. Install the Feast integration with: ```shell zenml integration install feast ``` Register the feature store as a ZenML stack component: ```shell zenml feature-store register feast_store --flavor=feast --feast_repo="" zenml stack register ... -f feast_store ``` **Usage** Currently, online data retrieval is supported in local settings but not in production deployments. To get historical features from a registered feature store, create a step as follows: ```python from datetime import datetime import pandas as pd from zenml import step from zenml.client import Client @step def get_historical_features(entity_dict, features, full_feature_names=False) -> pd.DataFrame: feature_store = Client().active_stack.feature_store if not feature_store: raise DoesNotExistException("Feast feature store component is not available.") entity_dict["event_timestamp"] = [datetime.fromisoformat(val) for val in entity_dict["event_timestamp"]] entity_df = pd.DataFrame.from_dict(entity_dict) return feature_store.get_historical_features(entity_df=entity_df, features=features, full_feature_names=full_feature_names) entity_dict = { "driver_id": [1001, 1002, 1003], "event_timestamp": [ datetime(2021, 4, 12, 10, 59, 42).isoformat(), datetime(2021, 4, 12, 8, 12, 10).isoformat(), datetime(2021, 4, 12, 16, 40, 26).isoformat(), ], } features = [ "driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", ] @pipeline def my_pipeline(): my_features = get_historical_features(entity_dict, features) ... ``` **Important Notes** - ZenML uses Pydantic for input serialization, limiting it to basic data types. DataFrames and datetime values require conversion. - For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). ================================================== === File: docs/book/component-guide/feature-stores/custom.md === ### Summary: Developing a Custom Feature Store in ZenML **Overview**: Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between them. They also provide a centralized registry for features and feature schemas for team or organizational use. **Important Notes**: - Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge. - The base abstraction for feature stores is currently under development, limiting the ability to extend them. Check the list of available feature stores for immediate use. **Warning**: This documentation is based on an older version of ZenML. For the latest information, refer to the [up-to-date URL](https://docs.zenml.io). 
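Complementing the batch retrieval example in the Feast section above, online lookup (which, as noted there, currently works in local settings only) might look like the following sketch. It assumes the Feast feature store component exposes a `get_online_features` method mirroring Feast's own API; verify the exact signature against your ZenML and Feast versions:

```python
from typing import Any, Dict, List

from zenml import step
from zenml.client import Client


@step
def fetch_online_features(entity_rows: List[Dict[str, Any]], features: List[str]) -> Dict[str, Any]:
    """Fetches low-latency online features for the given entities (local settings only)."""
    feature_store = Client().active_stack.feature_store
    # Assumed signature, mirroring Feast's get_online_features.
    return feature_store.get_online_features(
        entity_rows=entity_rows,
        features=features,
        full_feature_names=False,
    )


# Example inputs, matching the driver_hourly_stats example above:
# entity_rows = [{"driver_id": 1001}, {"driver_id": 1002}]
# features = ["driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate"]
```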
================================================== === File: docs/book/component-guide/annotators/annotators.md === # Annotators in ZenML ## Overview Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They enable users to launch annotation tasks, configure datasets, and track labeled tasks via CLI commands. Data annotation is essential in MLOps, and ZenML aims to support iterative workflows that integrate annotators into the ML process. ## Annotation Lifecycle Data annotation can occur at various stages in the ML lifecycle: - **At the Start**: Begin labeling data to bootstrap models, iterating by using model predictions to suggest labels. - **As New Data Arrives**: Regularly check and label new data to maintain model accuracy and address data drift. - **Inference Samples**: Store and label predictions from the model for comparison and potential retraining. - **Ad Hoc Interventions**: Identify and correct bad labels or address class imbalances through targeted annotation. ## Usage The annotator is an optional component in the ZenML stack, designed to integrate with training and deployment phases. Key features include: - Seamless integration of labels in training steps. - Versioning of annotation data. - Conversion of annotation data to/from custom formats. - Generation of UI config files for annotation interfaces. ## Available Annotators ZenML supports various annotators through integrations: | Annotator | Flavor | Integration | Notes | |---------------------------|----------------|-------------------|--------------------------------------------| | [ArgillaAnnotator](argilla.md) | `argilla` | `argilla` | Connect ZenML with Argilla | | [LabelStudioAnnotator](label-studio.md) | `label_studio` | `label_studio` | Connect ZenML with Label Studio | | [PigeonAnnotator](pigeon.md) | `pigeon` | `pigeon` | Notebook only; for image/text classification | | [ProdigyAnnotator](prodigy.md) | `prodigy` | `prodigy` | Connect ZenML with [Prodigy](https://prodi.gy/) | | [Custom Implementation](custom.md) | _custom_ | | Extend the annotator abstraction | To view available annotator flavors, use: ```shell zenml annotator flavor list ``` ## Implementation The annotator implementation is primarily based on the Label Studio integration. For usage details, refer to the [Label Studio page](label-studio.md#how-do-you-use-it). Note that Pigeon is limited to Jupyter notebooks. ## Naming Conventions ZenML standardizes terminology for its components: - **Project vs. Dataset**: Label Studio uses 'Project'; ZenML uses 'Dataset'. - **Tasks**: The combination of an annotation and source data is referred to as 'tasks' in ZenML. This documentation provides a concise overview of the annotators in ZenML, their lifecycle, usage, available integrations, and naming conventions. ================================================== === File: docs/book/component-guide/annotators/prodigy.md === ### Prodigy Integration with ZenML **Prodigy** is a paid annotation tool for creating training and evaluation data for machine learning models. It allows for data inspection, cleaning, error analysis, and developing rule-based systems. The Prodigy Python library offers pre-built workflows and customizable scripts for data loading, annotation interface questions, and front-end behavior. #### When to Use Prodigy Consider using Prodigy when you need to label data as part of your ML workflow by adding it as an optional annotator stack component in ZenML. #### Deployment Steps 1. 
**Install Prodigy**: Requires a license. Follow the [Prodigy installation guide](https://prodi.gy/docs/install). Ensure `urllib3<2` is installed. 2. **Register Prodigy with ZenML**: ```shell zenml integration export-requirements --output-file prodigy-requirements.txt prodigy zenml annotator register prodigy --flavor prodigy ``` Optionally, use `--custom_config_path=""` to override default settings. 3. **Set Up the Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -an prodigy zenml stack set annotation ``` Verify with: ```shell zenml annotator dataset list ``` #### Usage Prodigy does not require pre-starting the annotator. Use it as per the [Prodigy documentation](https://prodi.gy). Access and annotate datasets with: ```shell zenml annotator dataset annotate ``` Example command: ```shell zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" ``` #### Importing Annotations in ZenML To import annotations within a ZenML step: ```python from typing import List, Dict, Any from zenml import step from zenml.client import Client @step def import_annotations() -> List[Dict[str, Any]]: zenml_client = Client() annotations = zenml_client.active_stack.annotator.get_labeled_data(dataset_name="my_dataset") return annotations ``` For cloud environments, manually export annotations and store them for later use in ZenML. #### Prodigy Annotator Stack Component The Prodigy annotator component extends the `BaseAnnotator` class, implementing core methods for dataset registration and annotation export. It includes additional methods specific to Prodigy for enhanced functionality. ================================================== === File: docs/book/component-guide/annotators/label-studio.md === ### Summary of Label Studio Integration with ZenML **Label Studio Overview** Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types including: - **Computer Vision**: image classification, object detection, semantic segmentation - **Audio & Speech**: classification, speaker diarization, emotion recognition, transcription - **Text/NLP**: classification, NER, question answering, sentiment analysis - **Time Series**: classification, segmentation, event recognition - **Multi-Modal**: dialogue processing, OCR, time series with reference **Usage Context** Integrate Label Studio into your ZenML stack for data labeling during ML workflows. It supports AWS S3, GCP/GCS, and Azure Blob Storage, but not purely local stacks. **Deployment Steps** 1. **Install the Integration**: ```shell zenml integration install label_studio ``` 2. **Set Up Label Studio**: - Clone and run Label Studio locally: ```shell git clone https://github.com/HumanSignal/label-studio.git cd label-studio docker-compose up -d ``` - Access the web interface at [http://localhost:8080/](http://localhost:8080/) to obtain your API key. 3. **Register API Key**: ```shell zenml secret create label_studio_secrets --api_key="" ``` 4. **Register Annotator**: ```shell zenml annotator register label_studio --flavor label_studio --authentication_secret="label_studio_secrets" --port=8080 ``` 5. 
**Configure Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -a zenml stack update annotation -an zenml stack set annotation ``` **Usage** Use CLI commands to interact with datasets: - List datasets: ```shell zenml annotator dataset list ``` - Annotate a dataset: ```shell zenml annotator dataset annotate ``` **Key Components** - **Label Studio Annotator**: Inherits from `BaseAnnotator`, includes methods for dataset registration, annotation export, and daemon process management. - **Standard Steps**: - `LabelStudioDatasetRegistrationConfig`: For dataset registration. - `LabelStudioDatasetSyncConfig`: For syncing new data. - `get_or_create_dataset`: Registers or retrieves a dataset. - `get_labeled_data`: Retrieves labeled data. - `sync_new_data_to_label_studio`: Ensures data synchronization. **Helper Functions** ZenML provides functions to generate 'label config' strings for object detection, image classification, and OCR. Refer to the `label_config_generators` module for implementation details. For more information, refer to the [Hugging Face deployment documentation](https://huggingface.co/docs/hub/spaces-sdks-docker-label-studio) and the [Label Studio guide](https://labelstud.io/guide/tasks.html). ================================================== === File: docs/book/component-guide/annotators/argilla.md === ### Summary: Annotating Data Using Argilla **Argilla Overview** Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets for machine learning projects. It facilitates robust language model development through efficient data curation, leveraging both human and machine feedback throughout the MLOps cycle, from data labeling to model monitoring. **Use Cases** Argilla is beneficial when labeling textual data in your ML workflow. It can be integrated into a ZenML stack for annotation at various stages. **Deployment** To deploy Argilla, install the ZenML Argilla integration: ```shell zenml integration install argilla ``` You can register the API key directly or as a secret for security. To register as a secret: ```shell zenml secret create argilla_secrets --api_key="" ``` Then, register the annotator: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --port=6900 ``` For a deployed instance, specify the instance URL without a trailing `/`. If using a private Hugging Face Spaces instance, include the `headers` parameter with your token: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --headers='{"Authorization": "Bearer {[your_hugging_face_token]}"}' ``` Add components to a stack and set it as active: ```shell zenml stack copy default annotation zenml stack update annotation -an zenml stack set annotation ``` Verify the setup with: ```shell zenml annotator dataset list ``` **Usage** Access data and annotations via the CLI: - List datasets: `zenml annotator dataset list` - Annotate a dataset: `zenml annotator dataset annotate ` **Argilla Annotator Component** The Argilla annotator inherits from `BaseAnnotator`, requiring core methods for dataset registration and retrieval. It supports dataset registration, annotation export, and starting the annotator daemon process. 
**Argilla Annotator SDK** To use the SDK in Python: ```python from zenml.client import Client client = Client() annotator = client.active_stack.annotator # List dataset names dataset_names = annotator.get_dataset_names() # Get a specific dataset dataset = annotator.get_dataset("dataset_name") # Get annotations for a dataset annotations = annotator.get_labeled_data(dataset_name="dataset_name") ``` For more details, refer to the [Argilla documentation](https://docs.argilla.io/en/latest/). ================================================== === File: docs/book/component-guide/annotators/pigeon.md === # Pigeon: Data Annotation Tool Pigeon is an open-source annotation tool for labeling data within Jupyter notebooks, supporting: - Text Classification - Image Classification - Text Captioning ## Use Cases Pigeon is ideal for: - Labeling small to medium datasets in ML workflows. - Quick labeling tasks without a full annotation platform. - Iterative and collaborative labeling during the exploratory phase. ## Deployment Steps 1. **Install Pigeon Integration**: ```shell zenml integration install pigeon ``` 2. **Register the Annotator**: ```shell zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir" ``` 3. **Update Your Stack**: ```shell zenml stack update --annotator pigeon ``` ## Usage Access the Pigeon annotator in your Jupyter notebook: ### For Text Classification: ```python from zenml.client import Client annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['I love this movie', 'I was really disappointed by the book'], options=['positive', 'negative'] ) ``` ### For Image Classification: ```python from zenml.client import Client from IPython.display import display, Image annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['/path/to/image1.png', '/path/to/image2.png'], options=['cat', 'dog'], display_fn=lambda filename: display(Image(filename)) ) ``` ### Dataset Management Commands: - List datasets: `zenml annotator dataset list` - Delete a dataset: `zenml annotator dataset delete ` - Get dataset stats: `zenml annotator dataset stats ` Annotations are saved as JSON files in the specified output directory, with filenames as dataset names. ## Acknowledgements Pigeon was developed by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License and has been updated for compatibility with recent `ipywidgets` versions. ================================================== === File: docs/book/component-guide/annotators/custom.md === # Develop a Custom Annotator **Warning:** This is an older version of the ZenML documentation. For the latest version, visit [this up-to-date URL](https://docs.zenml.io). ## Overview Custom annotators are stack components in ZenML that facilitate data annotation within your pipelines. You can use the CLI to launch annotation, configure datasets, and retrieve statistics on labeled tasks. **Note:** The base abstraction for annotators is currently in development, and extension is not yet possible. For immediate use, refer to the list of available feature stores. ## Additional Resources Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts. 
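Across flavors, labeled data is typically pulled back into pipelines through the active stack's annotator. A minimal sketch, mirroring the Prodigy import step shown earlier (the dataset name is illustrative, and the return structure may differ between annotator flavors):

```python
from typing import Any, Dict, List

from zenml import step
from zenml.client import Client


@step
def import_labels(dataset_name: str = "my_dataset") -> List[Dict[str, Any]]:
    """Loads labeled tasks from whichever annotator flavor is registered in the active stack."""
    annotator = Client().active_stack.annotator
    if annotator is None:
        raise RuntimeError("No annotator is registered in the active stack.")
    # Return type follows the Prodigy example above; other flavors may return different structures.
    return annotator.get_labeled_data(dataset_name=dataset_name)
```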
================================================== === File: docs/book/component-guide/model-deployers/vllm.md === ### Deploying LLM Locally with vLLM **vLLM Overview** [vLLM](https://docs.vllm.ai/en/latest/) is a library designed for efficient LLM inference and serving, offering features such as: - High throughput with OpenAI-compatible API server - Continuous request batching - Quantization options: GPTQ, AWQ, INT4, INT8, FP8 - Advanced features: PagedAttention, Speculative decoding, Chunked pre-fill **Deployment Steps** 1. **Install vLLM Integration** Run the following command to install the vLLM integration for ZenML: ```bash zenml integration install vllm -y ``` 2. **Register the Model Deployer** Register the vLLM model deployer with ZenML: ```bash zenml model-deployer register vllm_deployer --flavor=vllm ``` This sets up a local vLLM server as a daemon process for serving models. **Usage Example** To see vLLM in action, refer to the [deployment pipeline example](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25). **Deploying an LLM** Use the `vllm_model_deployer_step` to deploy a model in your pipeline. Here’s a concise example: ```python from zenml import pipeline from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "GPT2"]: return vllm_model_deployer_step(model=model, timeout=timeout) ``` **Configuration Options** Within the `VLLMDeploymentService`, you can configure: - `model`: Hugging Face model name or path - `tokenizer`: Hugging Face tokenizer name or path (defaults to model name) - `served_model_name`: API model name (defaults to model name) - `trust_remote_code`: Trust remote code from Hugging Face - `tokenizer_mode`: Options: ['auto', 'slow', 'mistral'] - `dtype`: Data type for model weights (options: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']) - `revision`: Specific model version (branch name, tag, or commit ID; defaults to latest) For further details, refer to the [vLLM GitHub repository](https://github.com/zenml-io/zenml-projects/tree/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer). ================================================== === File: docs/book/component-guide/model-deployers/huggingface.md === ### Summary of Hugging Face Inference Endpoints Deployment Documentation **Overview**: Hugging Face Inference Endpoints allow for secure, production-ready deployment of `transformers`, `sentence-transformers`, and `diffusers` models on managed infrastructure, eliminating the need for containers and GPUs. **When to Use**: - Deploy models on dedicated, secure infrastructure. - Require a fully-managed production solution with minimal MLOps involvement. - Need cost-effective deployment, paying only for raw compute resources. - Prioritize enterprise security with offline endpoints connected to Virtual Private Clouds (VPCs). **Installation**: To deploy models, install the Hugging Face ZenML integration: ```bash zenml integration install huggingface -y ``` **Registering the Model Deployer**: Register the Hugging Face model deployer: ```bash zenml model-deployer register --flavor=huggingface --token= --namespace= ``` - `token`: Hugging Face authentication token. - `namespace`: User or organization name for inference endpoints. 
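As with other components in this guide (for example the Databricks deployer and the Weights & Biases tracker), the token can also be stored as a ZenML secret and referenced at registration time instead of being passed in plain text. A minimal sketch; the secret name, key name, and deployer name below are illustrative:

```bash
# Store the Hugging Face token in a ZenML secret
zenml secret create huggingface_creds --token=<YOUR_HF_TOKEN>

# Reference the secret value when registering the model deployer
zenml model-deployer register hf_endpoints_deployer \
    --flavor=huggingface \
    --token={{huggingface_creds.token}} \
    --namespace=<YOUR_NAMESPACE>
```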
**Updating the Stack**: Integrate the model deployer into your ZenML stack: ```bash zenml stack update --model-deployer= ``` **Usage**: Two main methods to utilize the Hugging Face model deployer: 1. **Deploying a Model**: Use the `huggingface_model_deployer_step` in your pipeline. 2. **Running Inference**: Utilize `HuggingFaceDeploymentService` for batch inference. **Example of Model Deployment**: ```python from zenml import pipeline from zenml.config import DockerSettings from zenml.integrations.huggingface.services import HuggingFaceServiceConfig from zenml.integrations.huggingface.steps import huggingface_model_deployer_step docker_settings = DockerSettings(required_integrations=[HUGGINGFACE]) @pipeline(enable_cache=True, settings={"docker": docker_settings}) def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): service_config = HuggingFaceServiceConfig(model_name=model_name) huggingface_model_deployer_step(service_config=service_config, timeout=timeout) ``` **Configurable Attributes**: - `model_name`, `endpoint_name`, `repository`, `framework`, `accelerator`, `instance_size`, `instance_type`, `region`, `vendor`, `token`, `account_id`, `min_replica`, `max_replica`, `revision`, `task`, `custom_image`, `namespace`, `endpoint_type`. **Running Inference Example**: ```python from zenml import step, pipeline from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer from zenml.integrations.huggingface.services import HuggingFaceDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> HuggingFaceDeploymentService: model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running) if not existing_services: raise RuntimeError(f"No inference endpoint found.") return existing_services[0] @step def predictor(service: HuggingFaceDeploymentService, data: str) -> str: return service.predict(data) @pipeline def huggingface_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "huggingface_model_deployer_step"): inference_data = ... model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957). ================================================== === File: docs/book/component-guide/model-deployers/databricks.md === ### Summary: Deploying Models to Databricks Inference Endpoints with ZenML **Overview:** Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs, without managing containers or GPUs. It offers dedicated, autoscaling infrastructure managed by Databricks. **When to Use Databricks Model Deployer:** - You are using Databricks for data and ML workloads. - You want to deploy models without container management. - You need enterprise security for offline endpoints. - You aim to create production-ready APIs with minimal MLOps involvement. 
**Installation:** To use the Databricks Model Deployer, install the ZenML Databricks integration: ```bash zenml integration install databricks -y ``` **Registering the Model Deployer:** Register the Databricks model deployer: ```bash zenml model-deployer register --flavor=databricks --host= --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` *Note: Create a Databricks service account for permissions and generate `client_id` and `client_secret` for authentication.* **Update Stack:** Update your ZenML stack to include the model deployer: ```bash zenml stack update --model-deployer= ``` **Configuration Options:** In `DatabricksServiceConfig`, configure: - `model_name`: Name of the model in Databricks Model Registry. - `model_version`: Version of the model. - `workload_size`: Size options: `Small`, `Medium`, `Large`. - `scale_to_zero_enabled`: Enable/disable scale to zero. - `env_vars`: Environment variables for the model. - `workload_type`: Options: `CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, `MULTIGPU_MEDIUM`. - `endpoint_secret_name`: Secret for securing the endpoint. **Running Inference:** Example code to run inference on a provisioned endpoint: ```python from zenml import step, pipeline from zenml.integrations.databricks.model_deployers import DatabricksModelDeployer from zenml.integrations.databricks.services import DatabricksDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> DatabricksDeploymentService: model_deployer = DatabricksModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name, pipeline_step_name, model_name, running) if not existing_services: raise RuntimeError(f"No running inference endpoint found for '{model_name}'.") return existing_services[0] @step def predictor(service: DatabricksDeploymentService, data: str) -> str: return service.predict(data) @pipeline def databricks_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "databricks_model_deployer_step"): inference_data = ... model_deployment_service = prediction_service_loader(pipeline_name, pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers). ================================================== === File: docs/book/component-guide/model-deployers/model-deployers.md === # Model Deployers Model deployment involves making machine learning models available for predictions on real-world data. Predictions can be made in two ways: batch (for large datasets) and real-time (for single data points). Model deployers serve models either in real-time or batch mode through managed web services accessible via APIs (HTTP or GRPC). ## Use Cases Model deployers are optional components in the ZenML stack, primarily used for deploying models in development or production environments (local, Kubernetes, or cloud). They facilitate continuous training and deployment pipelines. ## Architecture Model deployers fit into the ZenML stack, enabling efficient model management across various environments. ### Available Model Deployer Flavors ZenML provides several model deployers: - **MLflow**: Local deployment. - **BentoML**: Local or production-grade deployment. 
- **Seldon Core**: Kubernetes-based production deployment. - **Hugging Face**: Deployment on Hugging Face Inference Endpoints. - **Databricks**: Deployment on Databricks Inference Endpoints. - **vLLM**: Local LLM deployment. - **Custom Implementation**: Extendable for custom deployments. ### Configuration Example Model deployers require specific attributes for configuration. Here’s how to configure MLflow and Seldon Core: ```shell # Configure MLflow model deployer zenml model-deployer register mlflow --flavor=mlflow # Configure Seldon Core model deployer zenml model-deployer register seldon --flavor=seldon \ --kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \ --base_url=http:// ``` ### Role in ZenML Stack - **Seamless Deployment**: Deploy models to various environments while managing configuration attributes. - **Lifecycle Management**: Manage model servers (start, stop, delete, update) efficiently. ### Core Methods - `deploy_model`: Deploys a model and returns a Service object. - `find_model_server`: Lists deployed model servers. - `stop_model_server`, `start_model_server`, `delete_model_server`: Manage server states. ### Service Object Represents a deployed model server, containing: - `config`: Deployment configuration. - `status`: Operational status (e.g., prediction URL). ### Interaction Example To interact with the model deployer: ```python from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() services = model_deployer.find_model_server(pipeline_name="LLM_pipeline", pipeline_step_name="huggingface_model_deployer_step", model_name="LLAMA-7B") if services: if services[0].is_running: print(f"Model server {services[0].config['model_name']} is running at {services[0].status['prediction_url']}") else: model_deployer.start_model_server(services[0]) else: service = model_deployer.deploy_model(pipeline_name="LLM_pipeline", pipeline_step_name="huggingface_model_deployer_step", model_name="LLAMA-7B", model_uri="s3://", ...) print(f"Model server {service.config['model_name']} is deployed at {service.status['prediction_url']}") ``` ### CLI Interaction Use the CLI to manage model servers: ```shell $ zenml model-deployer models list $ zenml model-deployer models describe $ zenml model-deployer models get-url $ zenml model-deployer models delete ``` ### Python Metadata Access Access the prediction URL via Python: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") deployer_step = pipeline_run.steps[""] deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value ``` ZenML integrations also include standard pipeline steps for continuous model deployment, managing the deployment workflow and storing Service configurations in the Artifact Store for later use. ================================================== === File: docs/book/component-guide/model-deployers/bentoml.md === ### Summary of Deploying Models Locally with BentoML **BentoML Overview** BentoML is an open-source framework for serving machine learning models, enabling deployment locally, in the cloud, or on Kubernetes. The BentoML Model Deployer, part of the ZenML stack, allows for the management of BentoML models on a local HTTP server. **Deployment Paths** 1. **Local HTTP Server**: For development and production use. 2. **Containerized Service**: For more complex production settings. **Tools** - **Yatai**: For deploying Bentos to Kubernetes and cloud platforms. 
- **bentoctl**: Deprecated, previously used for cloud deployments. **When to Use BentoML Model Deployer** - To standardize model deployment within an organization. - For simple model deployment that can evolve into a production-ready solution. **Deployment Steps** 1. **Install BentoML Integration**: ```bash zenml integration install bentoml -y ``` 2. **Register the Model Deployer**: ```bash zenml model-deployer register bentoml_deployer --flavor=bentoml ``` This sets up a local HTTP server to serve models. **Creating a BentoML Service** Define a BentoML service to serve your model. Example for a PyTorch model: ```python import bentoml from bentoml.validators import DType, Shape import numpy as np import torch @bentoml.service(name="MNISTService") class MNISTService: def __init__(self): self.model = bentoml.pytorch.load_model("MODEL_NAME") self.model.eval() @bentoml.api() async def predict_ndarray(self, inp: np.ndarray) -> np.ndarray: inp = np.expand_dims(inp, (0, 1)) output_tensor = await self.model(torch.tensor(inp)) return to_numpy(output_tensor) ``` **Building a Bento** You can build a Bento manually or use the `bento_builder_step`. Example of a custom bento builder: ```python from zenml import step @step def my_bento_builder(model) -> bento.Bento: model = load_artifact_from_response(model) bentoml.pytorch.save_model("model_name", model) bento = bentos.build(service=service, models=["model_name"]) return bento ``` **Using the Bento Builder Step** Integrate the built-in bento builder step in a ZenML pipeline: ```python from zenml import pipeline from zenml.integrations.bentoml.steps import bento_builder_step @pipeline def bento_builder_pipeline(): bento = bento_builder_step(model=model, model_name="pytorch_mnist", service="service.py:CLASS_NAME") ``` **Deploying the Bento** Use the `bentoml_model_deployer_step` to deploy the bento bundle: - **Local Deployment**: ```python @pipeline def bento_deployer_pipeline(): deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001) ``` - **Containerized Deployment**: ```python @pipeline def bento_deployer_pipeline(): deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", deployment_type="container", image="my-custom-image") ``` **Predicting with Deployed Model** Use the BentoML client to send requests to the deployed model: ```python @step def predictor(inference_data: Dict[str, List], service: BentoMLDeploymentService) -> None: service.start(timeout=10) for img, data in inference_data.items(): prediction = service.predict("predict_ndarray", np.array(data)) ``` **From Local to Cloud with `bentoctl`** Though deprecated, `bentoctl` was used for deploying models to cloud environments like AWS, Google Cloud, and Azure. For more details, refer to the [BentoML documentation](https://docs.bentoml.org). This summary encapsulates the essential steps and code snippets for deploying models locally using BentoML while maintaining critical technical details. ================================================== === File: docs/book/component-guide/model-deployers/custom.md === # Custom Model Deployer in ZenML ZenML provides a `Model Deployer` component for deploying and managing machine learning models, allowing interaction with various deployment tools, frameworks, or platforms. It serves as a registry for models and supports operations like listing, suspending, resuming, or deleting models. ## Base Abstraction The model deployer is built on three main criteria: 1. 
**Deployment Efficiency**: Manages model deployment according to the serving infrastructure's requirements, holding necessary configuration attributes. 2. **Continuous Deployment**: Implements logic to update existing model servers instead of creating new ones for each model version (via `deploy_model` method). 3. **BaseService Registry**: Acts as a registry for remote model servers, enabling recreation of `BaseService` instances from persisted configurations, such as Kubernetes resource annotations. The model deployer also includes lifecycle management methods for remote servers: `stop_model_server`, `start_model_server`, and `delete_model_server`. ### Interface ```python from abc import ABC, abstractmethod from typing import Dict, Optional, Type from uuid import UUID from zenml.enums import StackComponentType from zenml.services import BaseService, ServiceConfig from zenml.stack import StackComponent, StackComponentConfig, Flavor DEFAULT_TIMEOUT = 300 class BaseModelDeployerConfig(StackComponentConfig): """Base class for model deployer configurations.""" class BaseModelDeployer(StackComponent, ABC): @abstractmethod def perform_deploy_model(self, id: UUID, config: ServiceConfig, timeout: int = DEFAULT_TIMEOUT) -> BaseService: """Deploy a model.""" @staticmethod @abstractmethod def get_model_server_info(service: BaseService) -> Dict[str, Optional[str]]: """Extract model server properties.""" @abstractmethod def perform_stop_model(self, service: BaseService, timeout: int = DEFAULT_TIMEOUT, force: bool = False) -> BaseService: """Stop a model server.""" @abstractmethod def perform_start_model(self, service: BaseService, timeout: int = DEFAULT_TIMEOUT) -> BaseService: """Start a model server.""" @abstractmethod def perform_delete_model(self, service: BaseService, timeout: int = DEFAULT_TIMEOUT, force: bool = False) -> None: """Delete a model server.""" class BaseModelDeployerFlavor(Flavor): @property @abstractmethod def name(self): """Flavor name.""" @property def type(self) -> StackComponentType: return StackComponentType.MODEL_DEPLOYER @property def config_class(self) -> Type[BaseModelDeployerConfig]: return BaseModelDeployerConfig @property @abstractmethod def implementation_class(self) -> Type[BaseModelDeployer]: """Implementing class.""" ``` ### Building Custom Model Deployers To create a custom model deployer flavor: 1. Inherit from `BaseModelDeployer` and implement the abstract methods. 2. Create a configuration class inheriting from `BaseModelDeployerConfig`. 3. Combine both by inheriting from `BaseModelDeployerFlavor`, providing a `name`. 4. Implement a service class inheriting from `BaseService`. Register the flavor via CLI: ```shell zenml model-deployer flavor register ``` Example registration: ```shell zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor ``` ### Important Notes - The custom flavor is utilized upon creation via CLI. - The configuration class is used during stack component registration for validation. - The implementation class is used when the component is in operation, allowing separation of configuration and implementation. Ensure ZenML is initialized at the root of your repository for proper flavor resolution. After registration, list available flavors: ```shell zenml model-deployer flavor list ``` This documentation provides a concise overview of developing custom model deployers in ZenML, emphasizing key technical details and code structure. 
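To make the four steps above concrete, here is a minimal skeleton of a custom flavor. This is a sketch, not a working deployer: the class names, flavor name, and config attribute are illustrative, the method bodies are stubs, and the base classes are assumed to be importable from `zenml.model_deployers`.

```python
from typing import Dict, Optional, Type
from uuid import UUID

from zenml.model_deployers import (  # assumed import path for the base classes
    BaseModelDeployer,
    BaseModelDeployerConfig,
    BaseModelDeployerFlavor,
)
from zenml.services import BaseService, ServiceConfig


class MyModelDeployerConfig(BaseModelDeployerConfig):
    """Configuration attributes for the (hypothetical) serving platform."""

    endpoint_url: str


class MyModelDeployer(BaseModelDeployer):
    """Stub implementation of the abstract interface shown above."""

    def perform_deploy_model(
        self, id: UUID, config: ServiceConfig, timeout: int = 300
    ) -> BaseService:
        # Provision a model server on the target infrastructure and return its service.
        raise NotImplementedError

    @staticmethod
    def get_model_server_info(service: BaseService) -> Dict[str, Optional[str]]:
        # Expose server properties such as the prediction URL.
        return {"PREDICTION_URL": None}

    def perform_stop_model(
        self, service: BaseService, timeout: int = 300, force: bool = False
    ) -> BaseService:
        raise NotImplementedError

    def perform_start_model(self, service: BaseService, timeout: int = 300) -> BaseService:
        raise NotImplementedError

    def perform_delete_model(
        self, service: BaseService, timeout: int = 300, force: bool = False
    ) -> None:
        raise NotImplementedError


class MyModelDeployerFlavor(BaseModelDeployerFlavor):
    """Ties the config and implementation together under a flavor name."""

    @property
    def name(self) -> str:
        return "my_deployer"

    @property
    def config_class(self) -> Type[MyModelDeployerConfig]:
        return MyModelDeployerConfig

    @property
    def implementation_class(self) -> Type[MyModelDeployer]:
        return MyModelDeployer
```

Such a flavor would then be registered with the `zenml model-deployer flavor register` command shown above.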
================================================== === File: docs/book/component-guide/model-deployers/seldon.md === ### Summary: Deploying Models to Kubernetes with Seldon Core **Seldon Core Overview** Seldon Core is a production-grade model serving platform that facilitates deploying machine learning models as REST/GRPC microservices. It includes features such as monitoring, logging, model explainers, outlier detectors, and advanced deployment strategies like A/B testing and canary deployments. It supports standard formats for packaging ML models, simplifying real-time inference. **Usage Scenarios** Use Seldon Core when: - Deploying on advanced infrastructures like Kubernetes. - Managing model lifecycle with no downtime. - Requiring advanced API endpoints (REST/GRPC). - Needing complex deployment processes with custom transformers and routers. For simpler local deployments, consider using the MLflow Model Deployer. **Deployment Steps** 1. **Install Seldon Core Integration**: ```bash zenml integration install seldon -y ``` 2. **Prerequisites**: - Access to a Kubernetes cluster (configured via `kubernetes_context`). - Seldon Core pre-installed in the cluster. - Models stored in persistent shared storage accessible from the Kubernetes cluster (e.g., AWS S3, GCS). 3. **Configuration Parameters**: - `kubernetes_context`: Context for contacting the Seldon Core installation. - `kubernetes_namespace`: Namespace for Seldon Core deployment. - `base_url`: Base URL for the Kubernetes ingress. **Installation Example on EKS**: 1. Configure EKS access: ```bash aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks ``` 2. Install Istio: ```bash curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh - cd istio-1.5.0/ bin/istioctl manifest apply --set profile=demo ``` 3. Set up Istio gateway: ```bash curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f - ``` 4. Install Seldon Core: ```bash helm install seldon-core seldon-core-operator \ --repo https://storage.googleapis.com/seldon-charts \ --set usageMetrics.enabled=true \ --set istio.enabled=true \ --namespace seldon-system ``` 5. Test installation: ```bash kubectl apply -f iris.yaml ``` Example `iris.yaml`: ```yaml apiVersion: machinelearning.seldon.io/v1 kind: SeldonDeployment metadata: name: iris-model namespace: default spec: name: iris predictors: - graph: implementation: SKLEARN_SERVER modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris name: classifier name: default replicas: 1 ``` 6. Extract prediction API URL: ```bash export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') ``` 7. Send a test prediction request: ```bash curl -X POST http://$INGRESS_HOST/seldon/default/iris-model/api/v1.0/predictions \ -H 'Content-Type: application/json' \ -d '{ "data": { "ndarray": [[1,2,3,4]] } }' ``` **Service Connector Setup** To authenticate to a remote Kubernetes cluster, use Service Connectors for auto-configuration and security. Register a Service Connector: ```bash zenml service-connector register --type aws --resource-type kubernetes-cluster --resource-name --auto-configure ``` **Model Deployer Registration**: ```bash zenml model-deployer register --flavor=seldon \ --kubernetes_namespace= \ --base_url=http://$INGRESS_HOST ``` **Managing Authentication** Ensure the Seldon Core Model Deployer has access to the persistent storage where models are located. 
Explicit credentials may be necessary if Seldon Core runs in a different cloud or if implicit authentication is not enabled. **Custom Code Deployment** Define a custom prediction function and use `seldon_custom_model_deployer_step` to deploy it: ```python @pipeline def seldon_deployment_pipeline(): model = ... seldon_custom_model_deployer_step( model=model, predict_function="", service_config=SeldonDeploymentConfig( model_name="", replicas=1, implementation="custom", resources=SeldonResourceRequirements( limits={"cpu": "200m", "memory": "250Mi"} ), serviceAccountName="kubernetes-service-account", ), ) ``` This summary captures the essential steps and configurations for deploying models using Seldon Core on Kubernetes, along with examples and key considerations for authentication and custom deployments. ================================================== === File: docs/book/component-guide/model-deployers/mlflow.md === ### Summary of MLflow Model Deployer Documentation **Overview:** The MLflow Model Deployer, part of ZenML's stack components, allows for local deployment and management of MLflow models on a local MLflow server. It is currently intended for development environments and is not yet production-ready. **When to Use:** - For easy local model deployment and real-time predictions. - When a simple deployment setup is preferred over complex environments like Kubernetes. **Installation and Setup:** To use the MLflow Model Deployer, install the MLflow integration with: ```bash zenml integration install mlflow -y ``` Register the model deployer: ```bash zenml model-deployer register mlflow_deployer --flavor=mlflow ``` This sets up a local MLflow server to serve the latest model. **Deployment Process:** 1. **Deploying a Logged Model:** Use the model URI from the MLflow experiment tracker: ```python from zenml import step, get_step_context from zenml.client import Client @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", description="An example of deploying a model using the MLflow Model Deployer", pipeline_name=get_step_context().pipeline_name, pipeline_step_name=get_step_context().step_name, model_uri="runs://model" or "models://", model_name="model", workers=1, mlserver=False, timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` 2. 
**Deploying a Model Without Known URI:** Retrieve the model URI from the current run: ```python from zenml import step, get_step_context from zenml.client import Client from mlflow.tracking import MlflowClient, artifact_utils @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer experiment_tracker = zenml_client.active_stack.experiment_tracker mlflow_run_id = experiment_tracker.get_run_id( experiment_name=get_step_context().pipeline_name, run_name=get_step_context().run_name, ) experiment_tracker.configure_mlflow() client = MlflowClient() model_uri = artifact_utils.get_artifact_uri( run_id=mlflow_run_id, artifact_path="model" ) mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", description="An example of deploying a model using the MLflow Model Deployer", pipeline_name=get_step_context().pipeline_name, pipeline_step_name=get_step_context().step_name, model_uri=model_uri, model_name="model", workers=1, mlserver=False, timeout=300, ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` **Configuration Options:** - `name`, `description`, `pipeline_name`, `pipeline_step_name`: Metadata for the deployment. - `model_uri`: URI of the model (local path, run ID, or model name/version). - `workers`: Number of workers for the MLflow server. - `mlserver`: If True, starts the server as a MLServer instance. - `timeout`: Time to wait for the server to start/stop. **Running Inference:** 1. **Load a Deployed Service:** ```python import json import requests from zenml import step from zenml.integrations.mlflow.services import MLFlowDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str) -> None: model_deployer = MLFlowModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name) if not existing_services: raise RuntimeError("No running service found.") service = existing_services[0] payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}}) response = requests.post(url=service.get_prediction_url(), data=payload, headers={"Content-Type": "application/json"}) return response.json() ``` 2. **Use Service for Inference:** ```python from typing_extensions import Annotated import numpy as np from zenml import step from zenml.integrations.mlflow.services import MLFlowDeploymentService @step def predictor(service: MLFlowDeploymentService, data: np.ndarray) -> Annotated[np.ndarray, "predictions"]: prediction = service.predict(data) return prediction.argmax(axis=-1) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_deployers). ================================================== === File: docs/book/component-guide/container-registries/azure.md === ### Azure Container Registry with ZenML **Overview**: The Azure Container Registry (ACR) is integrated with ZenML for storing container images. It's suitable for users with Azure access who need to pull or push container images. #### Deployment Steps 1. **Create ACR**: - Go to [Azure Portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry). - Select subscription, resource group, location, and registry name, then click `Review + Create`. 2. 
**Find Registry URI**: - Format: `.azurecr.io` - Access via Azure Portal: Search for `container registries`, select your registry, and derive the URI. #### Usage Requirements - **Docker**: Must be installed and running. - **Registry URI**: Obtain from the previous section. #### Registering the Container Registry ```shell zenml container-registry register --flavor=azure --uri= zenml stack update -c ``` #### Authentication Methods 1. **Local Authentication**: - Quick setup using local Docker client credentials. - Requires Azure CLI installed. - Login command: ```shell az acr login --name= ``` - **Note**: Not portable across environments. 2. **Azure Service Connector (Recommended)**: - Provides auto-configuration and better security. - Register using: ```sh zenml service-connector register --type azure -i ``` - Non-interactive example: ```sh zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type docker-registry --resource-id ``` #### Connecting to ACR - Register and connect the Azure Container Registry: ```sh zenml container-registry register -f azure --uri= zenml container-registry connect -i ``` - Non-interactive connection: ```sh zenml container-registry connect --connector ``` #### Using ACR in ZenML Stack ```sh zenml stack register -c ... --set ``` #### Local Login for Docker CLI To temporarily authenticate your local Docker client: ```sh zenml service-connector login --resource-type docker-registry --resource-id ``` For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/github.md === ### GitHub Container Registry Overview The GitHub Container Registry, integrated with ZenML, allows for the storage of container images. It is suitable for projects using GitHub, especially when components need to pull or push images. #### When to Use - If your stack components require image interactions. - If you are using GitHub for your projects. #### Deployment - The registry is enabled by default upon creating a GitHub account. #### Registry URI Format The URI follows this format: ```shell ghcr.io/ ``` **Examples:** - `ghcr.io/zenml` - `ghcr.io/my-username` - `ghcr.io/my-organization` #### Usage Requirements 1. **Docker**: Must be installed and running. 2. **Registry URI**: Obtainable using the format above. 3. **Docker Client Configuration**: Follow the guide to create a personal access token and authenticate. #### Registering the Container Registry To register and use the GitHub container registry in your active stack: ```shell zenml container-registry register \ --flavor=github \ --uri= zenml stack update -c ``` For further details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/gcp.md === ### Summary: Storing Container Images in GCP #### Google Cloud Container Registry - GCP's container registry is integrated with ZenML and utilizes the Google Artifact Registry. - **Important Notice**: Google Container Registry is being replaced by Artifact Registry. 
Transition to Artifact Registry is required by May 15, 2024, with shutdown scheduled for March 18, 2025. #### When to Use Use the GCP container registry if: - Your stack components need to pull/push container images. - You have access to GCP. #### Deployment Steps 1. **Enable Google Artifact Registry**: [Enable here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com). 2. **Create a Docker Repository**: [Create here](https://console.cloud.google.com/artifacts). #### Registry URI Format The URI format is: ```shell -docker.pkg.dev// ``` Examples: ```shell europe-west1-docker.pkg.dev/zenml/my-repo southamerica-east1-docker.pkg.dev/zenml/zenml-test ``` #### Using the GCP Container Registry Prerequisites: - Install and run Docker. - Obtain the registry URI. Register the container registry: ```shell zenml container-registry register --flavor=gcp --uri= zenml stack update -c ``` #### Authentication Methods Authentication is necessary to use the GCP Container Registry: - **Local Authentication**: Quick setup using local Docker client credentials. - Configure Docker for Google Container Registry: ```shell gcloud auth configure-docker ``` - For Google Artifact Registry: ```shell gcloud auth configure-docker -docker.pkg.dev ``` - **GCP Service Connector (Recommended)**: Provides better security and management for credentials. Register a connector: ```shell zenml service-connector register --type gcp -i ``` Connect to a GCR registry: ```shell zenml container-registry connect -i ``` #### Final Steps To use the GCP Container Registry in a ZenML Stack: ```shell zenml stack register -c ... --set ``` For detailed configuration attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/dockerhub.md === ### DockerHub Container Registry in ZenML **Overview**: The DockerHub container registry is integrated with ZenML for storing container images. **When to Use**: - If your stack components need to pull/push container images. - If you have a DockerHub account. **Deployment**: 1. Create a DockerHub account. 2. By default, images are published in a **public** repository. For a **private** repository, create one on DockerHub before running the pipeline. 3. The repository name depends on the orchestrator or step operator used in your stack. **Registry URI Format**: The DockerHub registry URI can be in one of these formats: ```shell # or docker.io/ ``` **Examples**: - `zenml` - `my-username` - `docker.io/zenml` - `docker.io/my-username` **Finding the Registry URI**: - Use your DockerHub account name to construct the URI using the format `docker.io/`. **Usage**: 1. Ensure Docker is installed and running. 2. Register the container registry in your active stack: ```shell zenml container-registry register \ --flavor=dockerhub \ --uri= zenml stack update -c ``` 3. Log in to DockerHub for image operations: ```shell docker login ``` You will need your DockerHub account name and either your password or a personal access token. For detailed configuration options, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry). 
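For non-interactive environments such as CI, the personal access token can be piped to the Docker CLI instead of typed at the prompt; the environment variable name below is illustrative:

```shell
echo "$DOCKERHUB_TOKEN" | docker login --username <ACCOUNT_NAME> --password-stdin
```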
================================================== === File: docs/book/component-guide/container-registries/container-registries.md === ### Container Registries Container registries are crucial for storing Docker images used in remote MLOps stacks, enabling the containerization of machine learning pipeline code for isolated execution. #### When to Use A container registry is necessary when components of your stack need to push or pull container images. This applies to most of ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation of the specific component to determine if a container registry is required. #### Container Registry Flavors ZenML supports several container registry flavors: - **Default Flavor**: Accepts any URI without validation; suitable for local or unsupported remote registries. - **Specific Flavors**: Validates URIs and ensures push capability. **Recommendation**: Use specific container registry flavors for additional URI validations. | Container Registry | Flavor | Integration | URI Example | |--------------------|---------|-------------|-----------------------------------------| | DefaultContainerRegistry | `default` | _built-in_ | - | | DockerHubContainerRegistry | `dockerhub` | _built-in_ | docker.io/zenml | | GCPContainerRegistry | `gcp` | _built-in_ | gcr.io/zenml | | AzureContainerRegistry | `azure` | _built-in_ | zenml.azurecr.io | | GitHubContainerRegistry | `github` | _built-in_ | ghcr.io/zenml | | AWSContainerRegistry | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com | To view available container registry flavors, use the command: ```shell zenml container-registry flavor list ``` ================================================== === File: docs/book/component-guide/container-registries/aws.md === ### Summary: Storing Container Images in Amazon ECR **Amazon Elastic Container Registry (ECR)** is integrated with ZenML for storing container images. Use it when your stack components require pulling or pushing images and you have AWS ECR access. #### Deployment Steps 1. **Create an AWS Account**: ECR is activated upon account creation. 2. **Create a Repository**: - Visit the [ECR website](https://console.aws.amazon.com/ecr). - Select the region. - Click on `Create repository` and create a private repository. #### URI Format The ECR URI format is: ``` .dkr.ecr..amazonaws.com ``` Example URIs: ``` 123456789.dkr.ecr.eu-west-2.amazonaws.com ``` To find your URI: - Get your `Account ID` from the AWS console. - Choose the region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints). #### Using AWS Container Registry 1. **Install ZenML AWS Integration**: ```shell zenml integration install aws ``` 2. **Install Docker**. 3. **Register the Container Registry**: ```shell zenml container-registry register --flavor=aws --uri= zenml stack update -c ``` #### Authentication Methods - **Local Authentication**: Quick setup using local AWS CLI credentials. ```shell aws ecr get-login-password --region | docker login --username AWS --password-stdin ``` - **AWS Service Connector** (recommended): Provides better security and management. 
```sh zenml service-connector register --type aws -i ``` Non-interactive version: ```sh zenml service-connector register --type aws --resource-type docker-registry --auto-configure ``` #### Connecting AWS Container Registry To connect the container registry to an ECR registry: ```sh zenml container-registry connect -i ``` Non-interactive: ```sh zenml container-registry connect --connector ``` #### Final Steps Register a stack with the new container registry: ```sh zenml stack register -c ... --set ``` For local Docker client access to the remote registry: ```sh zenml service-connector login --resource-type docker-registry ``` For detailed attributes of the AWS container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/custom.md === ### Developing a Custom Container Registry in ZenML #### Overview This documentation outlines how to create a custom container registry in ZenML, emphasizing the base abstractions and implementation steps. #### Base Abstraction ZenML's container registries have a simple base structure, requiring only a `uri`. The `BaseContainerRegistry` class includes a non-abstract `prepare_image_push` method for validation. **Key Classes:** - **BaseContainerRegistryConfig**: Holds the configuration with a `uri`. - **BaseContainerRegistry**: Implements methods for preparing and pushing images. - **BaseContainerRegistryFlavor**: Defines the flavor structure, including properties for name, type, and associated classes. **Code Snippet:** ```python from abc import abstractmethod from typing import Type from zenml.enums import StackComponentType from zenml.stack import Flavor from zenml.stack.authentication_mixin import AuthenticationConfigMixin, AuthenticationMixin from zenml.utils import docker_utils class BaseContainerRegistryConfig(AuthenticationConfigMixin): uri: str class BaseContainerRegistry(AuthenticationMixin): def prepare_image_push(self, image_name: str) -> None: pass def push_image(self, image_name: str) -> str: if not image_name.startswith(self.config.uri): raise ValueError(f"Docker image `{image_name}` does not belong to container registry `{self.config.uri}`.") self.prepare_image_push(image_name) return docker_utils.push_image(image_name) class BaseContainerRegistryFlavor(Flavor): @property @abstractmethod def name(self) -> str: pass @property def type(self) -> StackComponentType: return StackComponentType.CONTAINER_REGISTRY @property def config_class(self) -> Type[BaseContainerRegistryConfig]: return BaseContainerRegistryConfig @property def implementation_class(self) -> Type[BaseContainerRegistry]: return BaseContainerRegistry ``` #### Building Your Own Container Registry To create a custom flavor: 1. Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any pre-push validations. 2. Create a configuration class inheriting from `BaseContainerRegistryConfig`. 3. Combine both by inheriting from `BaseContainerRegistryFlavor`. **Registering the Flavor:** Use the CLI to register your flavor: ```shell zenml container-registry flavor register ``` For example: ```shell zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. 
- After registration, list available flavors with: ```shell zenml container-registry flavor list ``` #### Workflow Integration - **CustomContainerRegistryFlavor**: Used during flavor creation. - **CustomContainerRegistryConfig**: Validates user inputs during stack component registration. - **CustomContainerRegistry**: Engaged when the component is in use, allowing separation of configuration and implementation. This structure supports registering flavors even if their dependencies are not installed locally. ================================================== === File: docs/book/component-guide/container-registries/default.md === ### Summary: Storing Container Images Locally with ZenML **Default Container Registry**: ZenML provides a built-in Default container registry that supports various URI formats for local and remote registries not covered by other flavors. #### When to Use - Use for a **local container registry** or unsupported remote registries. #### Local Registry URI Format - Format: `localhost:` - Examples: `localhost:5000`, `localhost:8000`, `localhost:9999` #### Usage Steps 1. Ensure **Docker** is installed and running. 2. Register the container registry: ```shell zenml container-registry register --flavor=default --uri= ``` 3. Update the active stack: ```shell zenml stack update -c ``` #### Authentication Methods - **Private Registries**: Configure authentication; for local setups, use Local Authentication. - **Local Authentication**: Leverages Docker client credentials: ```shell docker login --username --password-stdin ``` *Note: Not portable across environments; use Docker Service Connector for portability.* - **Docker Service Connector**: Recommended for accessing private registries. Register via: ```sh zenml service-connector register --type docker -i ``` Non-interactive: ```sh zenml service-connector register --type docker --username= --password= ``` #### Connecting to a Container Registry 1. Register the container registry: ```sh zenml container-registry register -f default --uri= ``` 2. Connect via Docker Service Connector: ```sh zenml container-registry connect -i ``` Non-interactive: ```sh zenml container-registry connect --connector ``` #### Final Steps - Use the Default Container Registry in a ZenML Stack: ```sh zenml stack register -c ... --set ``` #### Local Client Authentication To temporarily authenticate your local Docker client: ```sh zenml service-connector login ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.default_container_registry.DefaultContainerRegistry). ================================================== === File: docs/book/component-guide/image-builders/local.md === ### Local Image Builder Overview The Local Image Builder in ZenML utilizes the local Docker installation on your machine to build container images. It employs the official Docker Python library, which accesses authentication credentials from `$HOME/.docker/config.json`. To specify a different configuration directory, set the `DOCKER_CONFIG` environment variable: ```shell export DOCKER_CONFIG=/path/to/config_dir ``` Ensure the specified directory contains a `config.json` file. ### When to Use Use the Local Image Builder if: - You can install and use Docker on your client machine. - You want to utilize remote components requiring containerization without additional infrastructure setup. 
### Deployment and Usage The Local Image Builder is built into ZenML and requires no extra setup. To use it, ensure: - Docker is installed and running. - The Docker client is authenticated to push to your chosen container registry. To register the image builder and create a new stack, use: ```shell zenml image-builder register --flavor=local zenml stack register -i ... --set ``` For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). ================================================== === File: docs/book/component-guide/image-builders/gcp.md === ### Google Cloud Image Builder with ZenML The Google Cloud Image Builder is a component of the ZenML `gcp` integration that utilizes [Google Cloud Build](https://cloud.google.com/build) for building container images. #### When to Use - If you cannot install or use [Docker](https://www.docker.com) locally. - If you are already using Google Cloud Platform (GCP). - If your stack includes other GCP components (e.g., [GCS Artifact Store](../artifact-stores/gcp.md), [Vertex Orchestrator](../orchestrators/vertex.md)). #### Deployment Requirements 1. Enable Google Cloud Build APIs in your GCP project. 2. Install the ZenML `gcp` integration: ```shell zenml integration install gcp ``` 3. Set up: - A [GCP Artifact Store](../artifact-stores/gcp.md) for build context. - A [GCP container registry](../container-registries/gcp.md) for the built image. - Optionally, specify a GCP project ID and service account with necessary permissions. #### Configuration Options - Change the Docker image used for building (default: `'gcr.io/cloud-builders/docker'`). - Specify the Docker network and build timeout. #### Registering the Image Builder ```shell zenml image-builder register \ --flavor=gcp \ --cloud_builder_image= \ --network= \ --build_timeout= zenml stack register -i ... --set ``` #### Authentication Methods 1. **Local Authentication**: Quick setup using local GCP CLI credentials. - Requires Google Cloud CLI installation. - Not portable across environments. 2. **GCP Service Connector (Recommended)**: - Provides auto-configuration and better security. - Register using: ```sh zenml service-connector register --type gcp -i ``` - For auto-configuration: ```sh zenml service-connector register --type gcp --resource-type gcp-generic --resource-name --auto-configure ``` 3. **GCP Credentials**: - Generate a GCP Service Account Key and reference it in the Image Builder configuration. - Example registration: ```shell zenml image-builder register \ --flavor=gcp \ --project= \ --service_account_path= \ --cloud_builder_image= \ --network= \ --build_timeout= ``` #### Caveats - Google Cloud Build uses a `cloudbuild` network for builds, allowing access to GCP services with Application Default Credentials (ADC). - For private dependencies in GCP Artifact Registry, use a custom base image with `keyrings.google-artifactregistry-auth`: ```dockerfile FROM zenmldocker/zenml:latest RUN pip install keyrings.google-artifactregistry-auth ``` - Specify the ZenML version in the base image tag for consistency. This summary provides essential details for using the Google Cloud Image Builder with ZenML, including setup, registration, authentication, and caveats. 
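If you registered a GCP Service Connector as recommended above, the image builder can be linked to it after registration. A sketch, assuming the image builder supports the same `connect` pattern shown for container registries earlier in this guide (all names are placeholders):

```sh
zenml image-builder connect <IMAGE_BUILDER_NAME> --connector <GCP_CONNECTOR_NAME>
zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
```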
================================================== === File: docs/book/component-guide/image-builders/kaniko.md === ### Kaniko Image Builder Overview The Kaniko image builder, part of ZenML's `kaniko` integration, utilizes [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images. It is ideal for users who cannot install Docker locally and are familiar with Kubernetes. ### Prerequisites 1. **Kubernetes Cluster**: A deployed Kubernetes cluster is required. 2. **ZenML Integration**: Install the Kaniko integration: ```shell zenml integration install kaniko ``` 3. **kubectl**: Must be installed for Kubernetes management. 4. **Container Registry**: A remote container registry must be part of your stack. ### Configuration - **Build Context**: By default, Kaniko uses the Kubernetes API to transfer the build context. To store it in an artifact store, set `store_context_in_artifact_store=True` and ensure a remote artifact store is configured. - **Pod Timeout**: Optionally adjust the timeout for the Kaniko pod using `pod_running_timeout`. ### Registering the Image Builder To register the Kaniko image builder: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= [ --pod_running_timeout= ] zenml stack register -i ... --set ``` ### Authentication The Kaniko build pod must authenticate to: - Push to the container registry. - Pull from private registries for parent images. - Read from the artifact store if configured. #### Cloud Provider Configurations 1. **AWS**: - Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to the EKS node IAM role. - Register the image builder with required environment variables: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]' ``` 2. **GCP**: - Enable workload identity and configure service accounts. - Register the image builder with the correct namespace and service account: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --kubernetes_namespace= \ --service_account_name= ``` 3. **Azure**: - Create a Kubernetes `configmap` for Docker config: ```shell kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }' ``` - Register the image builder to mount the configmap: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \ --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]' ``` ### Additional Parameters You can pass additional parameters to the Kaniko build using `executor_args`: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --executor_args='["--label", "key=value"]' ``` ### Common Flags - `--cache`: Disable caching (default: true). - `--cache-dir`: Directory for cached layers (default: `/cache`). - `--cache-repo`: Repository for cached layers (default: `gcr.io/kaniko-project/executor`). - `--cache-ttl`: Cache expiration time (default: `24h`). - `--cleanup`: Disable cleanup of the working directory (default: true). - `--compressed-caching`: Disable compressed caching (default: true). For a full list of flags, refer to the [Kaniko additional flags](https://github.com/GoogleContainerTools/kaniko#additional-flags). 
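These flags are passed through `executor_args`, in the same way as the `--label` example above. For instance, a sketch that disables layer caching and shortens the cache TTL (the context name is a placeholder):

```shell
zenml image-builder register <IMAGE_BUILDER_NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --executor_args='["--cache=false", "--cache-ttl=6h"]'
```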
For more details, consult the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kaniko/#zenml.integrations.kaniko.image_builders.kaniko_image_builder.KanikoImageBuilder). ================================================== === File: docs/book/component-guide/image-builders/aws.md === # AWS Image Builder with ZenML The AWS Image Builder is a component of the ZenML `aws` integration that utilizes [AWS CodeBuild](https://aws.amazon.com/codebuild) for building container images. ## When to Use Use the AWS Image Builder if: - You cannot install or use [Docker](https://www.docker.com) locally. - You are already using AWS. - Your stack includes AWS components like the [S3 Artifact Store](../artifact-stores/s3.md) or [SageMaker Orchestrator](../orchestrators/sagemaker.md). ## Deployment For a quick setup, consider using the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). ## Usage Requirements To use the AWS Image Builder, ensure you have: 1. ZenML `aws` integration installed: ```shell zenml integration install aws ``` 2. An [S3 Artifact Store](../artifact-stores/s3.md) for build context. 3. An optional [AWS container registry](../container-registries/aws.md) for pushing built images. 4. An [AWS CodeBuild project](https://aws.amazon.com/codebuild) set up in the appropriate region. ### CodeBuild Project Configuration Basic configuration values include: - **Source Type**: `Amazon S3` - **Bucket**: Same as the S3 Artifact Store. - **Environment Image**: `bentolor/docker-dind-awscli` - **Privileged Mode**: `false` Ensure the **Service Role** for CodeBuild has permissions for S3 and ECR (if applicable): ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "ecr:BatchGetImage", "ecr:PutImage" ], "Resource": "*" } ] } ``` ### Registering the Image Builder To register the image builder: ```shell zenml image-builder register \ --flavor=aws \ --code_build_project= zenml stack register -i ... --set ``` ## Authentication Methods Authentication is required to integrate the AWS Image Builder. Options include: ### Implicit Authentication Uses local AWS CLI credentials. Quick but not portable across environments. ### AWS Service Connector (Recommended) For better security and management, register an AWS Service Connector: ```shell zenml service-connector register --type aws -i ``` Or auto-configure: ```shell zenml service-connector register --type aws --resource-type aws-generic --auto-configure ``` Ensure the connector has permissions for CodeBuild: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "codebuild:StartBuild", "codebuild:BatchGetBuilds" ], "Resource": "arn:aws:codebuild:::project/" } ] } ``` After setting up the connector, register the image builder: ```shell zenml image-builder register \ --flavor=aws \ --code_build_project= \ --connector ``` ## Customizing AWS CodeBuild Builds You can customize the image builder with: - `build_image`: Default is `bentolor/docker-dind-awscli`. - `compute_type`: Default is `BUILD_GENERAL1_SMALL`. - `custom_env_vars`: Custom environment variables. - `implicit_container_registry_auth`: Controls authentication method for the container registry. For best practices, consider copying the default Docker image to your own registry to avoid rate limits. 
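For example, these customization options can be combined at registration time. A sketch, assuming you have mirrored the default build image into your own registry (the image tag, project, and connector names are illustrative):

```shell
zenml image-builder register <IMAGE_BUILDER_NAME> \
    --flavor=aws \
    --code_build_project=<CODEBUILD_PROJECT_NAME> \
    --build_image=<YOUR_REGISTRY>/docker-dind-awscli:latest \
    --compute_type=BUILD_GENERAL1_MEDIUM \
    --connector <CONNECTOR_NAME>
```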
## Final Steps Use the AWS Image Builder in a ZenML Stack: ```shell zenml stack register -i ... --set ``` This summary provides essential details for utilizing the AWS Image Builder with ZenML, including setup, registration, authentication, and customization options. ================================================== === File: docs/book/component-guide/image-builders/custom.md === # Custom Image Builder Development in ZenML ## Overview This documentation provides guidance on developing a custom image builder in ZenML, focusing on the `BaseImageBuilder` abstract class, which serves as the foundation for creating Docker image builders. ### Base Abstraction The `BaseImageBuilder` class must be subclassed to create custom image builders. It provides a basic interface for building Docker images. ```python from abc import ABC, abstractmethod from typing import Any, Dict, Optional, Type from zenml.container_registries import BaseContainerRegistry from zenml.image_builders import BuildContext from zenml.stack import StackComponent class BaseImageBuilder(StackComponent, ABC): """Base class for ZenML image builders.""" @property def build_context_class(self) -> Type["BuildContext"]: """Returns the build context class.""" return BuildContext @abstractmethod def build(self, image_name: str, build_context: "BuildContext", docker_build_options: Dict[str, Any], container_registry: Optional["BaseContainerRegistry"] = None) -> str: """Builds a Docker image and optionally pushes it to a registry.""" ``` ### Steps to Create a Custom Image Builder 1. **Subclass `BaseImageBuilder`:** Implement the `build` method to define how the Docker image is built. 2. **Configuration Class:** Create a class inheriting from `BaseImageBuilderConfig` to add configuration parameters. 3. **Flavor Registration:** Inherit from `BaseImageBuilderFlavor`, providing a `name` for the flavor. Register it via CLI: ```shell zenml image-builder flavor register ``` Example registration: ```shell zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor ``` ### Important Considerations - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - After registration, list available flavors: ```shell zenml image-builder flavor list ``` ### Workflow Integration - **Flavor Class:** Used during flavor creation via CLI. - **Config Class:** Validates user input during stack component registration. - **Image Builder Class:** Engaged when the component is in use, allowing separation of flavor configuration from implementation. ### Custom Build Context If a different build context is needed, subclass `BuildContext` and override the `build_context_class` property in your image builder. This documentation provides a concise guide to creating and integrating custom image builders in ZenML, ensuring that critical technical details are preserved for effective implementation. ================================================== === File: docs/book/component-guide/image-builders/image-builders.md === ### Image Builders in ZenML **Overview**: The image builder is crucial for building container images in remote MLOps stacks, enabling the execution of machine-learning pipelines in various environments. **When to Use**: The image builder is necessary when components of your stack require container images, particularly for ZenML's remote orchestrators, step operators, and some model deployers. 
**Image Builder Flavors**: ZenML provides several image builder options: | Image Builder | Flavor | Integration | Notes | |-----------------------|----------|-------------|-----------------------------------------| | [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. | | [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. | | [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build for images. | | [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS Code Build for images. | | [Custom Implementation](custom.md) | _custom_ | | Allows custom image builder implementations. | To view available image builder flavors, use: ```shell zenml image-builder flavor list ``` **Usage**: You do not need to interact directly with the image builder in your code. As long as the desired image builder is part of your active ZenML stack, it will be automatically utilized by any component that requires container image building. ================================================== === File: docs/book/component-guide/experiment-trackers/wandb.md === # Weights & Biases Integration with ZenML ## Overview The Weights & Biases (W&B) Experiment Tracker is a ZenML integration that allows logging and visualizing pipeline information (models, parameters, metrics) using the W&B platform. It is ideal for iterative ML experimentation and can also be used for automated pipeline runs. ## When to Use Use the W&B Experiment Tracker if: - You are already using W&B for tracking and want to integrate it into your ZenML MLOps workflows. - You prefer a visually interactive way to navigate results from ZenML pipelines. - You want to share logged artifacts and metrics with your team or stakeholders. Consider other Experiment Tracker flavors if you are unfamiliar with W&B. ## Deployment To deploy the W&B Experiment Tracker, install the integration: ```shell zenml integration install wandb -y ``` ### Authentication Methods Configure the following credentials for W&B: - `api_key`: Required API key for your W&B account. - `project_name`: Name of the project for the new run; defaults to "Uncategorized" if not specified. - `entity`: Username or team name for sending runs; defaults to your username if not specified. #### Basic Authentication (Not Recommended for Production) ```shell zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \ --entity= --project_name= --api_key= zenml stack register custom_stack -e wandb_experiment_tracker ... --set ``` #### ZenML Secret (Recommended) Create a ZenML secret to store credentials securely: ```shell zenml secret create wandb_secret \ --entity= \ --project_name= \ --api_key= ``` Then register the tracker: ```shell zenml experiment-tracker register wandb_tracker \ --flavor=wandb \ --entity={{wandb_secret.entity}} \ --project_name={{wandb_secret.project_name}} \ --api_key={{wandb_secret.api_key}} ``` ## Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator: ```python import wandb from wandb.integration.keras import WandbCallback @step(experiment_tracker="") def tf_trainer(...): ... model.fit(..., callbacks=[WandbCallback(log_evaluation=True)]) wandb.log({"": metric}) ``` Alternatively, use the Client to dynamically reference the active stack's experiment tracker: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def tf_trainer(...): ... 
```

### W&B UI

Each ZenML step using W&B creates a separate experiment run, viewable in the W&B UI. Access the tracking URL via the step's metadata:

```python
from zenml.client import Client

last_run = Client().get_pipeline("<PIPELINE_NAME>").last_run
tracking_url = last_run.get_step("<STEP_NAME>").run_metadata["experiment_tracker_url"].value
print(tracking_url)
```

### Additional Configuration

You can customize the W&B experiment tracker by passing `WandbExperimentTrackerSettings`:

```python
from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings

wandb_settings = WandbExperimentTrackerSettings(tags=["some_tag"])

@step(experiment_tracker="<experiment_tracker_name>", settings={"experiment_tracker": wandb_settings})
def my_step(...):
    ...
```

## Full Code Example

Here's a complete example using the W&B integration with ZenML:

```python
from zenml import pipeline, step
from zenml.client import Client
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
import wandb

experiment_tracker = Client().active_stack.experiment_tracker

@step
def prepare_data():
    dataset = load_dataset("imdb")
    ...
    return train_dataset, eval_dataset

@step(experiment_tracker=experiment_tracker.name)
def train_model(train_dataset, eval_dataset):
    model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
    training_args = TrainingArguments(...)
    trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    wandb.log({"final_evaluation": trainer.evaluate()})

@pipeline(enable_cache=False)
def fine_tuning_pipeline():
    train_dataset, eval_dataset = prepare_data()
    train_model(train_dataset, eval_dataset)

if __name__ == "__main__":
    fine_tuning_pipeline()
```

For further details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor.WandbExperimentTrackerSettings).

==================================================

=== File: docs/book/component-guide/experiment-trackers/vertexai.md ===

# Vertex AI Experiment Tracker Overview

The Vertex AI Experiment Tracker is a component of the ZenML framework that integrates with Google Cloud's Vertex AI to log and visualize experiment data from machine learning pipelines. It is particularly useful during the iterative ML experimentation phase and can also track results from automated pipeline runs.

## Use Cases

- Continuation of experiment tracking within Vertex AI for existing projects transitioning to MLOps with ZenML.
- Enhanced visualization of ZenML pipeline results (models, metrics, datasets).
- Integration with Google Cloud services for those building ML workflows in the GCP ecosystem.

## Configuration

To use the Vertex AI Experiment Tracker, install the GCP integration:

```shell
zenml integration install gcp -y
```

### Configuration Options

Key configuration options for the tracker include:

- `project`: GCP project name (inferred if None).
- `location`: GCP location for experiments (default: us-central1).
- `staging_bucket`: GCS bucket for staging artifacts (format: gs://...).
- `service_account_path`: Path to service account JSON for authentication.

Register the tracker:

```shell
zenml experiment-tracker register vertex_experiment_tracker \
    --flavor=vertex \
    --project=<GCP_PROJECT> \
    --location=<GCP_LOCATION> \
    --staging_bucket=gs://<GCS_BUCKET>

zenml stack register custom_stack -e vertex_experiment_tracker ... --set
```

### Authentication Methods

1.
**Implicit Authentication**: Quick local setup using `gcloud` CLI. Not recommended for production. 2. **GCP Service Connector** (recommended): Use for better security and configuration management. Register a GCP Service Connector: ```shell zenml service-connector register --type gcp -i ``` Register the tracker with the connector: ```shell zenml experiment-tracker register \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// zenml experiment-tracker connect --connector ``` 3. **GCP Credentials**: Use a service account key stored in a ZenML secret for authentication. Register the tracker with the service account: ```shell zenml experiment-tracker register \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// \ --service_account_path=path/to/service_account_key.json ``` ## Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator. ### Example 1: Logging Metrics Use built-in methods to log metrics: ```python from google.cloud import aiplatform class VertexAICallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): metrics = {key: value for key, value in (logs or {}).items() if isinstance(value, (int, float))} aiplatform.log_time_series_metrics(metrics=metrics, step=epoch) @step(experiment_tracker="") def train_model(...): aiplatform.autolog() model.fit(..., callbacks=[VertexAICallback()]) aiplatform.log_metrics(...) aiplatform.log_params(...) ``` ### Example 2: Uploading TensorBoard Logs Integrate TensorBoard for detailed visualizations: ```python @step(experiment_tracker="") def train_model(...): aiplatform.start_upload_tb_log(...) model.fit(...) aiplatform.end_upload_tb_log() aiplatform.log_metrics(...) aiplatform.log_params(...) ``` ### Dynamic Tracker Usage Instead of hardcoding the tracker name, use the ZenML Client: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def tf_trainer(...): ... ``` ### Accessing Experiment Tracker UI Retrieve the URL for the experiment linked to a ZenML run: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` ### Additional Configuration Use `VertexExperimentTrackerSettings` for advanced configurations like specifying an experiment name or TensorBoard instance: ```python from zenml.integrations.gcp.flavors.vertex_experiment_tracker_flavor import VertexExperimentTrackerSettings vertexai_settings = VertexExperimentTrackerSettings( experiment="", experiment_tensorboard="TENSORBOARD_RESOURCE_NAME" ) @step(experiment_tracker="", settings={"experiment_tracker": vertexai_settings}) def step_one(data: np.ndarray): ... ``` For further details on configuration, refer to the ZenML documentation. ================================================== === File: docs/book/component-guide/experiment-trackers/experiment-trackers.md === # ZenML Experiment Trackers ## Overview Experiment Trackers in ZenML allow users to log detailed information about ML experiments, including models, datasets, and metrics. Each pipeline run is treated as an experiment, and results are stored through Experiment Tracker stack components, linking pipeline runs to experiments. ### Key Concepts - **Experiment Tracker**: An optional stack component registered in your ZenML stack. - **Artifact Store**: Mandatory component that records artifact information circulated through pipelines. 
### When to Use Experiment Trackers enhance usability by providing a visual interface for browsing and visualizing logged information, making them preferable when you need intuitive interaction with experiment data. ### Architecture Experiment Trackers integrate into the ZenML stack, as shown in the architecture diagram. ### Available Flavors ZenML supports various Experiment Tracker integrations: | Tracker | Flavor | Integration | Notes | |---------|--------|-------------|-------| | [Comet](comet.md) | `comet` | `comet` | Adds Comet tracking capabilities | | [MLflow](mlflow.md) | `mlflow` | `mlflow` | Adds MLflow tracking capabilities | | [Neptune](neptune.md) | `neptune` | `neptune` | Adds Neptune tracking capabilities | | [Weights & Biases](wandb.md) | `wandb` | `wandb` | Adds Weights & Biases tracking capabilities | | [Custom Implementation](custom.md) | _custom_ | | Custom tracking options | To list available flavors, use: ```shell zenml experiment-tracker flavor list ``` ### Usage Steps 1. **Configure and Add**: Add an Experiment Tracker to your ZenML stack. 2. **Enable for Steps**: Decorate individual pipeline steps to enable the Experiment Tracker. 3. **Log Information**: Explicitly log models, metrics, and data within your steps. 4. **Access UI**: Retrieve the Experiment Tracker UI URL for a specific step: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") step = pipeline_run.steps[""] experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value ``` ### Notes - Experiment trackers automatically mark runs as failed if the corresponding ZenML pipeline step fails. - Refer to the specific documentation for each Experiment Tracker flavor for detailed usage instructions. ================================================== === File: docs/book/component-guide/experiment-trackers/neptune.md === # Neptune Experiment Tracker with ZenML The Neptune Experiment Tracker integrates with [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize pipeline step information (models, parameters, metrics) during ML experimentation. ## Use Cases Utilize the Neptune Experiment Tracker if: - You are already using neptune.ai and want to integrate it with ZenML. - You prefer a visual interface for navigating ZenML pipeline results. - You wish to share logged artifacts and metrics with your team or stakeholders. If you are new to neptune.ai, consider using another Experiment Tracker flavor. ## Deployment To deploy the Neptune Experiment Tracker, install the integration: ```shell zenml integration install neptune -y ``` ### Authentication Configure the following credentials: - `api_token`: Your Neptune API key (create a free account [here](https://app.neptune.ai/register)). - `project`: The project name in the format "workspace-name/project-name". 
#### Recommended: ZenML Secret Store credentials securely using a ZenML secret: ```shell zenml secret create neptune_secret --api_token= ``` Then, register the experiment tracker: ```shell zenml experiment-tracker register neptune_experiment_tracker \ --flavor=neptune \ --project= \ --api_token={{neptune_secret.api_token}} ``` #### Basic Authentication (Not Recommended) Directly configure credentials (not secure): ```shell zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \ --project= --api_token= ``` ## Usage To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and fetch the Neptune run object: ```python from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run from zenml import step from sklearn.svm import SVC from sklearn.datasets import load_iris from zenml.client import Client @step(experiment_tracker="neptune_experiment_tracker") def train_model() -> SVC: iris = load_iris() model = SVC(kernel="rbf", C=1.0) model.fit(iris.data, iris.target) neptune_run = get_neptune_run() neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0} return model ``` ### Logging Metadata Use `get_step_context` to log ZenML metadata: ```python @step(experiment_tracker="neptune_tracker") def my_step(): neptune_run = get_neptune_run() context = get_step_context() neptune_run["pipeline_metadata"] = context.pipeline_run.get_metadata().dict() neptune_run[f"step_metadata/{context.step_name}"] = context.step_run.get_metadata().dict() ``` ### Adding Tags Use `NeptuneExperimentTrackerSettings` to add tags: ```python from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"}) @step(experiment_tracker="", settings={"experiment_tracker": neptune_settings}) def my_step(): ... ``` ## Neptune UI Access a web-based UI to view tracked experiments. The URL for the Neptune run is printed in the console when a run is initialized. Each pipeline run is logged as a separate experiment in Neptune. ## Full Code Example Here’s a complete example of using the Neptune integration with ZenML: ```python from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run from zenml import step, pipeline from sklearn.model_selection import train_test_split from sklearn.svm import SVC from sklearn.datasets import load_iris from zenml.client import Client @step(experiment_tracker="neptune_experiment_tracker") def train_model() -> SVC: iris = load_iris() X_train, _, y_train, _ = train_test_split(iris.data, iris.target, test_size=0.2) model = SVC(kernel="rbf", C=1.0) model.fit(X_train, y_train) neptune_run = get_neptune_run() neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0} return model @step(experiment_tracker="neptune_experiment_tracker") def evaluate_model(model: SVC): iris = load_iris() _, X_test, _, y_test = train_test_split(iris.data, iris.target, test_size=0.2) accuracy = model.score(X_test, y_test) neptune_run = get_neptune_run() neptune_run["metrics/accuracy"] = accuracy return accuracy @pipeline def ml_pipeline(): model = train_model() evaluate_model(model) if __name__ == "__main__": ml_pipeline() ``` ## Further Reading For more details, check [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/). 
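As with the other trackers in this guide, the tracker name does not have to be hardcoded in the `@step` decorator; it can be read from the active stack. A minimal sketch (the logged values are illustrative):

```python
from zenml import step
from zenml.client import Client
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run

experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def log_params() -> None:
    neptune_run = get_neptune_run()
    # Log arbitrary key/value metadata to the active Neptune run.
    neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0}
```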
================================================== === File: docs/book/component-guide/experiment-trackers/custom.md === # Custom Experiment Tracker Development in ZenML ## Overview This documentation outlines the process for developing a custom experiment tracker in ZenML. For the latest updates, refer to the [current ZenML documentation](https://docs.zenml.io). ## Prerequisites Familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) in ZenML. ## Important Notes - The base abstraction for the Experiment Tracker is under development. Avoid extending it until the release. - You can use existing flavors or implement your own, but be prepared for potential refactoring later. ## Steps to Create a Custom Experiment Tracker 1. **Create a Tracker Class**: Inherit from `BaseExperimentTracker` and implement the required abstract methods. 2. **Configuration Class**: Inherit from `BaseExperimentTrackerConfig` to define configuration parameters. 3. **Combine Classes**: Inherit from `BaseExperimentTrackerFlavor` to integrate the implementation and configuration. ### Registration Register your custom flavor using the CLI with the following command, ensuring to use dot notation: ```shell zenml experiment-tracker flavor register ``` For example, if your flavor class is in `flavors/my_flavor.py`: ```shell zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor ``` ### Best Practices - Initialize ZenML at the root of your repository using `zenml init` to ensure proper resolution of the flavor class. ### Verification Check the list of available flavors: ```shell zenml experiment-tracker flavor list ``` ## Class Interaction - **CustomExperimentTrackerFlavor**: Used during flavor creation via CLI. - **CustomExperimentTrackerConfig**: Validates user input during stack component registration. - **CustomExperimentTracker**: Engaged when the component is in use, allowing separation of configuration from implementation. This structure enables registration of flavors and components even if their dependencies are not installed locally. ================================================== === File: docs/book/component-guide/experiment-trackers/mlflow.md === # MLflow Experiment Tracker with ZenML The MLflow Experiment Tracker, integrated with ZenML, utilizes the MLflow tracking service for logging and visualizing pipeline step data (models, parameters, metrics). ## Use Cases Use the MLflow Experiment Tracker if: - You are already using MLflow for experiment tracking and want to integrate it with ZenML. - You seek a visually interactive way to navigate results from ZenML pipeline runs. - Your team has a shared MLflow Tracking service and you want to connect ZenML to it. If unfamiliar with MLflow, consider other Experiment Tracker flavors. ## Configuration To configure the MLflow Experiment Tracker, install the integration: ```shell zenml integration install mlflow -y ``` ### Deployment Scenarios 1. **Localhost (default)**: Requires a local Artifact Store. Suitable for local runs only. ```shell zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` 2. **Remote Tracking Server**: Requires a deployed MLflow Tracking Server with authentication parameters. 3. **Databricks**: Requires a Databricks workspace and authentication parameters. 
### Authentication Methods Configure credentials for a remote MLflow tracking server: - `tracking_uri`: URL of the MLflow server (use `"databricks"` for Databricks). - `tracking_username`/`tracking_password` or `tracking_token`. - `tracking_insecure_tls` (optional). - `databricks_host`: Required if using Databricks. #### Basic Authentication Not recommended for production due to security concerns: ```shell zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \ --tracking_uri= --tracking_token= ``` #### ZenML Secret (Recommended) Store credentials securely: ```shell zenml secret create mlflow_secret --username= --password= ``` Then reference the secret: ```shell zenml experiment-tracker register mlflow --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} ... ``` ## Usage To log information in a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and use MLflow's logging capabilities: ```python import mlflow @step(experiment_tracker="") def tf_trainer(x_train, y_train): mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) return model ``` ### MLflow UI Access the MLflow UI for experiment details. Get the tracking URL from the step metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` For local MLflow, start the UI with: ```bash mlflow ui --backend-store-uri ``` ### Additional Configuration Use `MLFlowExperimentTrackerSettings` for nested runs or tags: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step(experiment_tracker="", settings={"experiment_tracker": mlflow_settings}) def step_one(data): ... ``` For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor.MLFlowExperimentTrackerSettings). ================================================== === File: docs/book/component-guide/experiment-trackers/comet.md === # Comet Experiment Tracker with ZenML The Comet Experiment Tracker integrates with ZenML to log and visualize pipeline information using the Comet platform. It is useful for tracking ML experiments and can also be adapted for automated pipeline runs. ## When to Use Comet - If you are already using Comet for tracking and want to continue with ZenML. - If you prefer a visually interactive way to navigate results from ZenML pipelines. - If you need to share logged artifacts and metrics with your team or stakeholders. ## Deployment To deploy the Comet Experiment Tracker, install the integration: ```bash zenml integration install comet -y ``` ### Authentication Methods 1. **ZenML Secret (Recommended)**: Store credentials securely. ```bash zenml secret create comet_secret \ --workspace= \ --project_name= \ --api_key= ``` Register the tracker: ```bash zenml experiment-tracker register comet_tracker \ --flavor=comet \ --workspace={{comet_secret.workspace}} \ --project_name={{comet_secret.project_name}} \ --api_key={{comet_secret.api_key}} ``` 2. **Basic Authentication**: Directly configure credentials (not recommended for production). 
```bash
zenml experiment-tracker register comet_experiment_tracker --flavor=comet \
    --workspace=<workspace> --project_name=<project_name> --api_key=<api_key>
```

## Usage

To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator:

```python
from zenml import step
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def my_step():
    experiment_tracker.log_metrics({"my_metric": 42})
    experiment_tracker.experiment.log_model(...)
```

### Comet UI

Each ZenML step using Comet creates a separate experiment viewable in the Comet UI. The experiment URL can be accessed via step metadata:

```python
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)
```

## Full Code Example

Here is a simplified example of a ZenML pipeline using Comet:

```python
from zenml import pipeline, step
from zenml.client import Client
from zenml.integrations.comet.experiment_trackers import CometExperimentTracker

experiment_tracker = Client().active_stack.experiment_tracker

@step
def load_data():
    # Load data logic
    return X, y

@step(experiment_tracker=experiment_tracker.name)
def train_model(X_train, y_train):
    model.fit(X_train, y_train)
    experiment_tracker.experiment.log_model(...)
    return model

@pipeline
def iris_classification_pipeline():
    X, y = load_data()
    model = train_model(X, y)

if __name__ == "__main__":
    iris_classification_pipeline()
```

## Additional Configuration

You can pass `CometExperimentTrackerSettings` for additional tags and configurations:

```python
from zenml.integrations.comet.flavors.comet_experiment_tracker_flavor import CometExperimentTrackerSettings

comet_settings = CometExperimentTrackerSettings(tags=["some_tag"])

@step(experiment_tracker="<experiment_tracker_name>", settings={"experiment_tracker": comet_settings})
def my_step():
    ...
```

For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings).

==================================================

=== File: docs/book/component-guide/model-registries/model-registries.md ===

# Model Registries

Model registries are centralized storage solutions for managing and tracking machine learning models throughout their development and deployment stages. They facilitate version control and reproducibility by storing metadata like version, configuration, and metrics. In ZenML, model registries are Stack Components that simplify the retrieval, loading, and deployment of trained models, while also providing information on the training pipeline and reproduction methods.

### Key Concepts

- **RegisteredModel**: A logical grouping of models to track different versions, including metadata such as name, description, and tags. It can be user-created or auto-generated when a new model is logged.
- **RegistryModelVersion**: A specific model version identified by a unique version number. It includes metadata like name, description, tags, metrics, and references to the model artifact, pipeline name, pipeline run ID, and step name.
- **ModelVersionStage**: Represents the state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`. This tracks the lifecycle of a model version.

### Usage

ZenML's Artifact Store manages pipeline artifacts programmatically, but model registries provide a visual interface for managing model metadata, especially with remote orchestrators. They are ideal for centralizing model state management and facilitating easy retrieval and deployment.
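For programmatic access, the registry component can also be pulled out of the active stack and queried directly. A sketch, assuming a model registry is part of the active stack, that the stack exposes it as `model_registry`, and using the listing methods described in the base abstraction below:

```python
from zenml.client import Client

model_registry = Client().active_stack.model_registry

# Iterate over everything registered so far and print each model's versions.
for registered_model in model_registry.list_models():
    print(registered_model.name)
    for version in model_registry.list_model_versions(name=registered_model.name):
        print(f"  version: {version.version}")
```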
### Integration in ZenML Stack Model registries are optional components integrated with experiment trackers. To use a model registry, it must match the flavor of the experiment tracker. If you are not using an experiment tracker, models can still be stored in ZenML, but retrieval must be manual. #### Model Registry Flavors | Model Registry | Flavor | Integration | Notes | |----------------|--------|-------------|-------| | [MLflow](mlflow.md) | `mlflow` | `mlflow` | Add MLflow as Model Registry to your stack | | [Custom Implementation](custom.md) | _custom_ | | _custom_ | To view available flavors, use: ```shell zenml model-registry flavor list ``` ### Registration Methods To register a model in the model registry, you can use: 1. Built-in step in the pipeline. 2. ZenML CLI for command-line registration. 3. Model registry UI for registration. After registration, models can be retrieved and loaded for deployment or further experimentation. ================================================== === File: docs/book/component-guide/model-registries/custom.md === ### Summary: Developing a Custom Model Registry in ZenML This documentation outlines the process for creating a custom model registry in ZenML. It is crucial to understand the general concepts of ZenML's component flavors before diving into specifics. #### Base Abstraction The `BaseModelRegistry` class serves as the abstract base for custom model registries, providing a generic interface for model registration and retrieval. Key methods include: - **Model Registration Methods**: - `register_model(name, description, tags)`: Registers a model. - `delete_model(name)`: Deletes a registered model. - `update_model(name, description, tags)`: Updates a registered model. - `get_model(name)`: Retrieves a registered model. - `list_models(name, tags)`: Lists all registered models. - **Model Version Methods**: - `register_model_version(name, description, tags, model_source_uri, version, metadata, ...)`: Registers a model version. - `delete_model_version(name, version)`: Deletes a model version. - `update_model_version(name, version, description, tags, stage)`: Updates a model version. - `list_model_versions(name, model_source_uri, tags, ...)`: Lists all model versions for a registered model. - `get_model_version(name, version)`: Retrieves a model version. - `load_model_version(name, version, ...)`: Loads a model version. - `get_model_uri_artifact_store(model_version)`: Gets the URI artifact store for a model version. #### Steps to Build a Custom Model Registry 1. Familiarize yourself with core model registry concepts. 2. Create a class inheriting from `BaseModelRegistry` and implement the abstract methods. 3. Define a `ModelRegistryConfig` class inheriting from `BaseModelRegistryConfig` for additional parameters. 4. Combine the implementation and configuration by inheriting from `BaseModelRegistryFlavor`. To register your custom model registry, use the CLI command: ```shell zenml model-registry flavor register ``` #### Important Notes - The `CustomModelRegistryFlavor` is utilized during flavor creation via CLI. - The `CustomModelRegistryConfig` is used for validating user inputs during registration. - The `CustomModelRegistry` is invoked when the component is in use, separating configuration from implementation. For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). 
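To make the moving parts concrete, the sketch below shows how the three classes fit together. It is not a working registry: the import path and exact method signatures are assumptions to check against your ZenML version, `registry_uri` is a hypothetical option, and all remaining abstract methods from the list above still need implementations.

```python
from zenml.model_registries import (  # assumed export location of the base classes
    BaseModelRegistry,
    BaseModelRegistryConfig,
    BaseModelRegistryFlavor,
)


class MyModelRegistryConfig(BaseModelRegistryConfig):
    """Options a user passes at `zenml model-registry register ...`."""

    registry_uri: str  # hypothetical option pointing at your registry backend


class MyModelRegistry(BaseModelRegistry):
    """Implements the abstract methods against your registry backend."""

    def register_model(self, name, description=None, tags=None):
        # Call your backend here and return its representation of the model.
        raise NotImplementedError

    # delete_model, update_model, get_model, list_models and the
    # *_model_version methods must be implemented the same way.


class MyModelRegistryFlavor(BaseModelRegistryFlavor):
    """Ties the config and implementation together for flavor registration."""

    @property
    def name(self) -> str:
        return "my_registry"

    @property
    def config_class(self):
        return MyModelRegistryConfig

    @property
    def implementation_class(self):
        return MyModelRegistry
```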
This documentation is subject to updates as the model registry component evolves. For any issues or feedback, contact the ZenML team via [Slack](https://zenml.io/slack) or GitHub. ================================================== === File: docs/book/component-guide/model-registries/mlflow.md === # Managing MLFlow Logged Models and Artifacts ## Overview MLflow is a tool for tracking experiments, managing models, and deploying them. ZenML integrates with MLflow, providing an Experiment Tracker and Model Deployer. The MLflow model registry helps manage and track ML models and artifacts, offering a user interface for browsing. ## Use Cases - Track different model versions during development and deployment. - Monitor model performance across environments. - Simplify model deployment to production or staging. ## Deployment To use the MLflow model registry, install the MLflow integration: ```shell zenml integration install mlflow -y ``` Register the MLflow model registry component: ```shell zenml model-registry register mlflow_model_registry --flavor=mlflow zenml stack register custom_stack -r mlflow_model_registry ... --set ``` **Note:** The MLflow model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version 2.2.1 or higher due to a critical vulnerability. ## Usage ### Register Models in a Pipeline Use the `mlflow_register_model_step` to register a model logged to MLflow: ```python from zenml import pipeline from zenml.integrations.mlflow.steps.mlflow_registry import mlflow_register_model_step @pipeline def mlflow_registry_training_pipeline(): model = ... mlflow_register_model_step(model=model, name="tensorflow-mnist-model") ``` **Parameters:** - `name`: Required model name. - `version`: Model version. - `trained_model_name`: Name of the model artifact in MLflow. - `model_source_uri`: Path to the model. - `description`: Model version description. - `metadata`: Metadata associated with the model version. ### Register Models via CLI To manually register models, use: ```shell zenml model-registry models register-version Tensorflow-model \ --description="A new version with accuracy 98.88%" \ -v 1 \ --model-uri="file:///.../mlruns/.../artifacts/model" \ -m key1 value1 -m key2 value2 \ --zenml-pipeline-name="mlflow_training_pipeline" \ --zenml-step-name="trainer" ``` ### Interact with Registered Models List all registered models: ```shell zenml model-registry models list ``` List versions of a specific model: ```shell zenml model-registry models list-versions tensorflow-mnist-model ``` Get details of a specific model version: ```shell zenml model-registry models get-version tensorflow-mnist-model -v 1 ``` ### Deleting Models To delete a registered model or a specific version: ```shell zenml model-registry models delete REGISTERED_MODEL_NAME zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION ``` For more details, refer to the [ZenML MLFlow SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/orchestrators/local-docker.md === # Local Docker Orchestrator in ZenML The Local Docker orchestrator is a built-in feature of ZenML that allows you to run pipelines locally in isolated Docker environments. ### When to Use - For local execution of pipeline steps in isolated environments. - For debugging pipeline issues without incurring costs of remote infrastructure. 
### Deployment Ensure Docker is installed and running. ### Usage To register and activate the local Docker orchestrator in your stack, use the following commands: ```shell zenml orchestrator register --flavor=local_docker zenml stack register -o ... --set ``` Run your ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Additional Configuration You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) details. For example, to specify the CPU count (Windows only): ```python from zenml import step, pipeline from zenml.orchestrators.local_docker.local_docker_orchestrator import LocalDockerOrchestratorSettings @step def return_one() -> int: return 1 settings = { "orchestrator": LocalDockerOrchestratorSettings( run_args={"cpu_count": 3} ) } @pipeline(settings=settings) def simple_pipeline(): return_one() ``` ### Enabling CUDA for GPU To run steps on a GPU, follow the instructions in the [GPU training guide](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for optimal performance. ================================================== === File: docs/book/component-guide/orchestrators/lightning.md === ### Summary: Orchestrating Pipelines on Lightning AI with ZenML **Overview**: The Lightning AI orchestrator integrates with ZenML to run pipelines on Lightning AI's infrastructure, utilizing its scalable compute resources and managed environment. This integration is intended for remote ZenML deployments only. **When to Use**: - For quick execution of pipelines on GPU instances. - If already using Lightning AI for machine learning projects. - To leverage managed infrastructure for ML workflows. - To benefit from Lightning AI's optimizations. **Deployment Requirements**: - A Lightning AI account with credentials. - No additional infrastructure deployment is needed. **Operational Workflow**: 1. ZenML archives the current repository and uploads it to Lightning AI Studio. 2. Using `lightning-sdk`, ZenML creates a new studio and uploads the code. 3. Commands are executed via `studio.run()` to prepare the environment. 4. Pipelines can run in both CPU and GPU modes. **Installation**: To install the Lightning integration, run: ```shell zenml integration install lightning ``` **Credentials Needed**: - `LIGHTNING_USER_ID` - `LIGHTNING_API_KEY` - Optional: `LIGHTNING_USERNAME`, `LIGHTNING_TEAMSPACE`, `LIGHTNING_ORG` **Setting Up Credentials**: Retrieve credentials from your Lightning AI account under "Global Settings" > "Keys". Register the orchestrator with: ```shell zenml orchestrator register lightning_orchestrator \ --flavor=lightning \ --user_id= \ --api_key= \ --username= \ # optional --teamspace= \ # optional --organization= # optional ``` **Registering and Activating Stack**: ```bash zenml stack register lightning_stack -o lightning_orchestrator ... 
--set ``` **Pipeline Configuration**: Use `LightningOrchestratorSettings` to configure the orchestrator: ```python from zenml.integrations.lightning.flavors.lightning_orchestrator_flavor import LightningOrchestratorSettings lightning_settings = LightningOrchestratorSettings( main_studio_name="my_studio", machine_type="cpu", async_mode=True, custom_commands=["pip install -r requirements.txt"] ) @pipeline(settings={"orchestrator.lightning": lightning_settings}) def my_pipeline(): ... ``` **Running a Pipeline**: Execute the pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` **Monitoring**: Use Lightning AI's UI to monitor applications. Retrieve the UI URL for a pipeline run with: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` **Additional Configuration**: Settings can be specified at both pipeline and step levels. For GPU usage, set the machine type accordingly: ```python lightning_settings = LightningOrchestratorSettings( machine_type="gpu" # or specific types like `A10G` ) ``` Refer to [Lightning AI's documentation](https://lightning.ai/docs/overview/studios/change-gpus) for available GPU types. This summary captures the essential details for using the Lightning AI orchestrator with ZenML, ensuring clarity and conciseness while retaining critical information. ================================================== === File: docs/book/component-guide/orchestrators/hyperai.md === ### HyperAI Orchestrator Overview The HyperAI orchestrator allows for the deployment of ZenML pipelines on HyperAI instances, a cloud compute platform for AI. It is intended for use in remote ZenML deployments only. #### When to Use - For managed pipeline execution. - If you are a HyperAI customer. #### Prerequisites 1. A running HyperAI instance with internet accessibility and SSH key-based access. 2. Recent Docker version with Docker Compose. 3. NVIDIA Driver installed (optional but required for GPU usage). 4. NVIDIA Container Toolkit installed (optional for GPU usage). #### Functionality The orchestrator utilizes Docker Compose to create and execute a Docker Compose file for ZenML pipelines. Each pipeline step corresponds to a service in the file, using the `service_completed_successfully` condition to manage execution order. It can connect to a container registry for Docker image transfers. #### Scheduled Pipelines Supports: - **Cron expressions** (`cron_expression`) for periodic runs (requires `crontab`). - **Scheduled runs** (`run_once_start_time`) for one-time executions (requires `at`). #### Deployment Steps 1. **Configure HyperAI Service Connector**: ```shell zenml service-connector register --type=hyperai --auth-method=rsa-key --base64_ssh_key= --hostnames=, --username= ``` 2. **Register the Orchestrator**: ```shell zenml orchestrator register --flavor=hyperai zenml stack register -o ... --set ``` 3. **Run a ZenML Pipeline**: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### GPU Usage For GPU-backed hardware, follow specific instructions to enable CUDA for optimal performance. For more details, refer to the [latest ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/component-guide/orchestrators/airflow.md === ### Airflow Orchestrator Overview ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. 
Each ZenML step operates in a separate Docker container managed by Airflow. #### When to Use Airflow Orchestrator - Proven production-grade orchestrator. - Existing use of Airflow. - Local pipeline execution. - Willingness to deploy and maintain Airflow. #### Deployment Options - **Local Deployment**: No additional setup required. - **Remote Deployment**: Requires a remote ZenML deployment. Options include: - ZenML GCP Terraform module with Google Cloud Composer. - Managed services like Google Cloud Composer, Amazon MWAA, or Astronomer. - Manual Airflow deployment (refer to official Airflow docs). **Python Packages Required**: - `pydantic~=2.7.1`: For parsing and validating configuration files. - `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes`: Depending on the operator used. #### Setup Instructions 1. Install ZenML Airflow integration: ```shell zenml integration install airflow ``` 2. Ensure Docker is installed and running. 3. Register the orchestrator: ```shell zenml orchestrator register --flavor=airflow --local=True zenml stack register -o ... --set ``` #### Local Deployment Steps 1. Create a virtual environment: ```bash python -m venv airflow_server_environment source airflow_server_environment/bin/activate pip install "apache-airflow==2.4.0" "apache-airflow-providers-docker<3.8.0" "pydantic~=2.7.1" ``` 2. Set environment variables (optional): - `AIRFLOW_HOME`: Default is `~/airflow`. - `AIRFLOW__CORE__DAGS_FOLDER`: Default is `/dags`. - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default is 30 seconds. For MacOS, set: ```bash export no_proxy=* ``` 3. Start the Airflow server: ```bash airflow standalone ``` 4. Run the ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` 5. Copy the generated `.zip` file to the Airflow DAGs directory or configure ZenML to do so automatically: ```bash zenml orchestrator update --dag_output_dir= ``` #### Remote Deployment Considerations - Requires a remote ZenML server, deployed Airflow server, remote artifact store, and remote container registry. - Running a pipeline creates a `.zip` file for Airflow, which must be placed in the DAGs directory. #### Scheduling Pipelines Schedule pipeline runs with Airflow: ```python from datetime import datetime, timedelta from zenml.pipelines import Schedule scheduled_pipeline = fashion_mnist_pipeline.with_options( schedule=Schedule( start_time=datetime.now() - timedelta(hours=1), end_time=datetime.now() + timedelta(hours=1), interval_second=timedelta(minutes=15), catchup=False, ) ) scheduled_pipeline() ``` #### Airflow UI Access the UI at [http://localhost:8080](http://localhost:8080). Default credentials: username `admin`, password in `/standalone_admin_password.txt`. #### Additional Configuration Use `AirflowOrchestratorSettings` for further configuration when defining or running pipelines. #### GPU Support Follow specific instructions to enable CUDA for GPU acceleration. #### Using Different Airflow Operators - **DockerOperator**: For local execution. - **KubernetesPodOperator**: For execution in a Kubernetes cluster. Specify the operator: ```python from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings airflow_settings = AirflowOrchestratorSettings( operator="docker", # or "kubernetes_pod" operator_args={} ) ``` #### Custom Operators and DAG Generators For custom operators, specify the operator path. To customize DAG generation, provide a custom DAG generator file that matches the original structure. 
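The settings object is attached to a pipeline (or an individual step) through the standard `settings` argument, the same way as for the other orchestrators in this guide. A short sketch reusing the `airflow_settings` object from the snippet above:

```python
from zenml import pipeline, step
from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings

airflow_settings = AirflowOrchestratorSettings(operator="docker", operator_args={})

@step
def train() -> None:
    ...

@pipeline(settings={"orchestrator": airflow_settings})
def my_airflow_pipeline():
    train()
```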
For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/sagemaker.md === # AWS SageMaker Orchestrator Documentation Summary ## Overview The ZenML SageMaker orchestrator integrates with [SageMaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) to facilitate serverless ML workflows on AWS. It provides a production-ready, repeatable cloud orchestrator with minimal setup. **Warning:** This component is designed for remote ZenML deployments; local deployments may cause unexpected behavior. ## When to Use Use the SageMaker orchestrator if: - You are using AWS. - You need a production-grade orchestrator with a UI for tracking pipeline runs. - You prefer a managed, serverless solution for running pipelines. ## Functionality The SageMaker orchestrator creates a `PipelineStep` for each ZenML pipeline step, currently supporting only SageMaker Processing jobs. ## Deployment Requirements 1. Deploy ZenML to the cloud, ideally in the same region as SageMaker. 2. Ensure connection to the remote ZenML server. 3. Enable relevant IAM permissions, including `AmazonSageMakerFullAccess`. ## Installation Install the necessary integrations: ```shell zenml integration install aws s3 ``` Ensure Docker is installed and running, and set up a remote artifact store and container registry. ## Authentication Methods ### Service Connector (Recommended) ```shell zenml service-connector register --type aws -i zenml orchestrator register --flavor=sagemaker --execution_role= zenml orchestrator connect --connector zenml stack register -o ... --set ``` ### Explicit Authentication ```shell zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... zenml stack register -o ... --set ``` ### Implicit Authentication ```shell zenml orchestrator register --flavor=sagemaker --execution_role= python run.py # Uses `default` profile in `~/.aws/config` ``` ## Running Pipelines Run any ZenML pipeline using the SageMaker orchestrator: ```shell python run.py ``` Output will indicate the status of the pipeline run. ## SageMaker UI Access the SageMaker Pipelines UI via SageMaker Studio to view logs and details of pipeline runs. ## Debugging If a pipeline fails before starting, check the SageMaker UI for error messages and logs. For detailed logs, use Amazon CloudWatch. ## Scheduling Currently, the SageMaker orchestrator does not support scheduled pipeline runs. ## Configuration You can provide additional configuration at the pipeline or step level using `SagemakerOrchestratorSettings`. 
Example: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(instance_type="ml.m5.large", volume_size_in_gb=30) ``` Apply settings to a step: ```python @step(settings={"orchestrator": sagemaker_orchestrator_settings}) ``` ## Warm Pools Enable Warm Pools to reduce startup time: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) ``` Disable Warm Pools: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=None) ``` ## S3 Data Access ### Import Data from S3 ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(input_data_s3_mode="File", input_data_s3_uri="s3://some-bucket-name/folder") ``` ### Export Data to S3 ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(output_data_s3_mode="EndOfJob", output_data_s3_uri="s3://some-results-bucket-name/results") ``` ## Tagging Add tags to pipeline executions and jobs: ```python pipeline_settings = SagemakerOrchestratorSettings(pipeline_tags={"project": "my-ml-project", "environment": "production"}) ``` ## GPU Support Follow specific instructions to enable CUDA for GPU-backed hardware when using the orchestrator. For further details, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/component-guide/orchestrators/local.md === # Local Orchestrator in ZenML The local orchestrator is a built-in feature of ZenML that allows you to run pipelines locally without additional setup. ### When to Use - Ideal for beginners starting with ZenML. - Suitable for quick experimentation and debugging of new pipelines. ### Deployment The local orchestrator is included with ZenML and requires no extra installation. ### Usage To register and activate the local orchestrator in your stack, use the following commands: ```shell zenml orchestrator register --flavor=local zenml stack register -o ... --set ``` You can run any ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` For detailed attributes and configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/kubernetes.md === ### Kubernetes Orchestrator Overview The ZenML `kubernetes` integration allows orchestration and scaling of ML pipelines on Kubernetes clusters without needing Kubernetes code. It serves as a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, executing each pipeline step in separate Kubernetes pods managed by a master pod through topological sorting. This approach is faster and simpler than using Kubeflow, making it suitable for teams new to distributed orchestration. ### When to Use Use the Kubernetes orchestrator if you: - Want a lightweight solution for running pipelines on Kubernetes. - Prefer not to maintain Kubeflow Pipelines. - Are not interested in managed solutions like Vertex. ### Deployment Requirements To deploy the Kubernetes orchestrator, you need: - A Kubernetes cluster (refer to the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for deployment options). - A remote ZenML server connected to the cluster. ### Usage Steps 1. **Install the ZenML Kubernetes Integration:** ```shell zenml integration install kubernetes ``` 2. 
**Ensure the following are installed:** - Docker - kubectl - A remote artifact store and container registry as part of your stack. 3. **Register the Orchestrator:** - **With Service Connector:** ```shell zenml orchestrator register --flavor kubernetes zenml orchestrator connect --connector zenml stack register -o ... --set ``` - **Without Service Connector:** ```shell zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... --set ``` 4. **Run a ZenML Pipeline:** ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Interacting with Pods You can interact with Kubernetes pods using labels for debugging: ```shell kubectl delete pod -n zenml -l pipeline= ``` ### Additional Configuration - **Default Namespace:** The orchestrator uses the `zenml` namespace by default, creating a service account called `zenml-service-account`. - **Custom Settings:** - `kubernetes_namespace`: Specify an existing namespace. - `service_account_name`: Use an existing service account with appropriate RBAC roles. ### Pod and Orchestrator Settings You can customize pod settings using `KubernetesOrchestratorSettings`: ```python from zenml.integrations.kubernetes.flavors.kubernetes_orchestrator_flavor import KubernetesOrchestratorSettings kubernetes_settings = KubernetesOrchestratorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": { "requests": {"cpu": "2", "memory": "4Gi"}, "limits": {"cpu": "4", "memory": "8Gi"} }, "labels": {"app": "ml-pipeline"} }, orchestrator_pod_settings={ "resources": { "requests": {"cpu": "1", "memory": "2Gi"}, "limits": {"cpu": "2", "memory": "4Gi"} }, "labels": {"app": "zenml-orchestrator"} }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) ``` ### Step-Level Configuration You can define settings at the step level to override pipeline settings: ```python @step(settings={"orchestrator": k8s_settings}) def train_model(data: dict) -> None: ... ``` ### GPU Configuration For GPU usage, follow specific instructions to enable CUDA for full acceleration. For further details on settings and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/orchestrators.md === # Orchestrators in ZenML ## Overview The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps run only when all required inputs are available. ### Key Features - **Artifact Storage**: The orchestrator stores all artifacts produced by pipeline runs. - **Docker Integration**: Many remote orchestrators build Docker images to execute pipeline code. ## When to Use The orchestrator is mandatory in ZenML stacks and must be configured for all pipelines. ## Available Orchestrator Flavors ZenML provides various orchestrators, including: | Orchestrator | Flavor | Integration | Notes | |-------------------------------|-----------------|--------------|--------------------------------------------| | [LocalOrchestrator](local.md) | `local` | _built-in_ | Runs pipelines locally. | | [LocalDockerOrchestrator](local-docker.md) | `local_docker` | _built-in_ | Runs pipelines locally using Docker. 
| | [KubernetesOrchestrator](kubernetes.md) | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes clusters. | | [KubeflowOrchestrator](kubeflow.md) | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | | [VertexOrchestrator](vertex.md) | `vertex` | `gcp` | Runs pipelines in Vertex AI. | | [SagemakerOrchestrator](sagemaker.md) | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | | [AzureMLOrchestrator](azureml.md) | `azureml` | `azure` | Runs pipelines in AzureML. | | [TektonOrchestrator](tekton.md) | `tekton` | `tekton` | Runs pipelines using Tekton. | | [AirflowOrchestrator](airflow.md) | `airflow` | `airflow` | Runs pipelines using Airflow. | | [SkypilotAWSOrchestrator](skypilot-vm.md) | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. | | [SkypilotGCPOrchestrator](skypilot-vm.md) | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. | | [SkypilotAzureOrchestrator](skypilot-vm.md) | `vm_azure` | `skypilot[azure]` | Runs pipelines in Azure VMs using SkyPilot. | | [HyperAIOrchestrator](hyperai.md) | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. | | [Custom Implementation](custom.md) | _custom_ | | Extend the orchestrator abstraction. | To view available orchestrator flavors, use: ```shell zenml orchestrator flavor list ``` ## Usage You do not need to interact directly with the orchestrator in your code. Simply ensure the orchestrator is part of your active ZenML stack and execute your pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Inspecting Runs To get the URL for the orchestrator UI of a specific pipeline run: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Specifying Resources Specify hardware requirements for pipeline steps as needed. For unsupported orchestrators, refer to [step operators](../step-operators/step-operators.md). ================================================== === File: docs/book/component-guide/orchestrators/databricks.md === ### Databricks Orchestrator Overview The Databricks orchestrator, part of the ZenML integration, allows users to run ML pipelines on Databricks, leveraging its distributed computing capabilities. It is suitable for users already utilizing Databricks for data and ML workloads and seeking a managed solution that integrates with Databricks services. ### Prerequisites - An active Databricks workspace (AWS, Azure, GCP). - A Databricks account or service account with permissions to create and run jobs. ### How It Works 1. **Wheel Packages**: ZenML creates a Python wheel package containing the necessary code and dependencies for the pipeline. 2. **Job Definition**: ZenML uses the Databricks SDK to create a job definition that specifies pipeline steps and cluster settings (Spark version, number of workers, etc.). 3. **Execution**: The job retrieves the wheel package and executes the pipeline, ensuring steps run in the correct order. Logs and job status are retrieved post-execution. ### Usage Steps 1. **Install Integration**: ```shell zenml integration install databricks ``` 2. **Register Orchestrator**: ```shell zenml orchestrator register databricks_orchestrator --flavor=databricks --host="https://xxxxx.x.azuredatabricks.net" --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` 3. **Add to Stack**: ```shell zenml stack register databricks_stack -o databricks_orchestrator ... --set ``` 4. 
**Run Pipeline**: ```shell python run.py ``` ### Databricks UI Access pipeline run details and logs via the Databricks UI. Retrieve the UI URL with: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Scheduling Pipelines Use Databricks' native scheduling capability: ```python from zenml.config.schedule import Schedule pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` **Note**: Only `cron_expression` is supported, and Java Timezone IDs must be used. ### Additional Configuration Customize the orchestrator with `DatabricksOrchestratorSettings`: ```python from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import DatabricksOrchestratorSettings databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-scala2.12", num_workers="3", node_type_id="Standard_D4s_v5", autoscale=(2, 3), schedule_timezone="America/Los_Angeles" ) ``` Apply settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": databricks_settings}) def my_pipeline(): ... ``` ### GPU Support To enable GPU support, adjust `spark_version` and `node_type_id`: ```python databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-gpu-ml-scala2.12", node_type_id="Standard_NC24ads_A100_v4", autoscale=(1, 2), ) ``` **CUDA Configuration**: Follow specific instructions to enable CUDA for GPU acceleration. For further details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.flavors.databricks_orchestrator_flavor.DatabricksOrchestratorSettings) and [configuration documentation](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). ================================================== === File: docs/book/component-guide/orchestrators/skypilot-vm.md === # SkyPilot VM Orchestrator Documentation Summary ## Overview The SkyPilot VM Orchestrator integrates with ZenML to provision and manage virtual machines (VMs) across supported cloud providers via the SkyPilot framework. It simplifies running machine learning workloads in the cloud, offering cost savings and high GPU availability without the complexities of managing cloud infrastructure. **Note:** This component is intended for remote ZenML deployments only. ## Use Cases Use the SkyPilot VM Orchestrator if you: - Want to leverage spot VMs for cost savings. - Require high GPU availability across multiple zones/regions. - Prefer not to maintain Kubernetes or pay for managed solutions. ## Functionality - **Provisioning**: Automatically launches VMs for pipelines, supporting on-demand and managed spot VMs. - **Optimization**: Selects the cheapest VM/zone/region for workloads. - **Autostop**: Cleans up idle clusters to prevent unnecessary costs. ## Deployment Requirements To deploy the SkyPilot VM Orchestrator: - Ensure you have permissions to provision VMs on your chosen cloud provider. - Configure the orchestrator using service connectors. **Supported Cloud Platforms**: AWS, GCP, Azure. 
## Installation Install the SkyPilot integration for your cloud provider: **AWS:** ```shell pip install "zenml[connectors-aws]" zenml integration install aws skypilot_aws ``` **GCP:** ```shell pip install "zenml[connectors-gcp]" zenml integration install gcp skypilot_gcp ``` **Azure:** ```shell pip install "zenml[connectors-azure]" zenml integration install azure skypilot_azure ``` ## Configuration ### AWS Example 1. Register AWS Service Connector: ```shell zenml service-connector register aws-skypilot-vm --type aws --region=us-east-1 --auto-configure ``` 2. Register the orchestrator: ```shell zenml orchestrator register --flavor vm_aws zenml orchestrator connect --connector aws-skypilot-vm ``` ### GCP Example 1. Register GCP Service Connector: ```shell zenml service-connector register gcp-skypilot-vm -t gcp --auth-method user-account --auto-configure ``` 2. Register the orchestrator: ```shell zenml orchestrator register --flavor vm_gcp zenml orchestrator connect --connector gcp-skypilot-vm ``` ### Azure Example 1. Register Azure Service Connector: ```shell zenml service-connector register azure-skypilot-vm -t azure --auth-method access-token --auto-configure ``` 2. Register the orchestrator: ```shell zenml orchestrator register --flavor vm_azure zenml orchestrator connect --connector azure-skypilot-vm ``` ### Lambda Labs Example 1. Install integration: ```shell zenml integration install skypilot_lambda ``` 2. Register the orchestrator with API key: ```shell zenml secret create lambda_api_key --scope user --api_key= zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} ``` ### Kubernetes Example 1. Install integration: ```shell zenml integration install skypilot_kubernetes ``` 2. Register the orchestrator: ```shell zenml orchestrator register --flavor sky_kubernetes ``` ## Additional Configuration Configure settings such as `instance_type`, `cpus`, `memory`, `accelerators`, `region`, `zone`, `disk_size`, and `idle_minutes_to_autostop`. ### Example Configuration for AWS: ```python from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region="us-west-1", cluster_name="my_cluster", idle_minutes_to_autostop=60, docker_run_args=["--gpus=all"] ) @pipeline(settings={"orchestrator": skypilot_settings}) def my_pipeline(): # Pipeline implementation pass ``` ## Step-Specific Resources You can configure resources for each step of your pipeline individually. If no specific settings are provided, the orchestrator defaults to the general settings. ### Disable Step-Based Settings: ```shell zenml orchestrator update --disable_step_based_settings=True ``` ### Example for Step-Specific Settings: ```python @step(settings={"orchestrator": high_resource_settings}) def my_resource_intensive_step(): # Step implementation pass ``` This orchestrator allows fine-grained control over resource allocation, optimizing for performance and cost. For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-skypilot/#zenml.integrations.skypilot.flavors.skypilot_orchestrator_base_vm_flavor.SkypilotBaseOrchestratorSettings). 
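The step-specific example above references a `high_resource_settings` object that this summary never defines. Below is a minimal sketch of what it might look like, reusing only the `SkypilotAWSOrchestratorSettings` fields shown in the AWS example above; the concrete values are illustrative assumptions, not from the original docs:

```python
from zenml import step
from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import (
    SkypilotAWSOrchestratorSettings,
)

# Hypothetical settings for a GPU-heavy step; the field names mirror the
# AWS example above, the values are only illustrative.
high_resource_settings = SkypilotAWSOrchestratorSettings(
    cpus="8",
    memory="64",
    accelerators="A100:1",
    use_spot=False,  # prefer on-demand capacity for long-running work
    idle_minutes_to_autostop=30,
)


@step(settings={"orchestrator": high_resource_settings})
def my_resource_intensive_step() -> None:
    # Step implementation
    pass
```

Steps without such settings fall back to the orchestrator's general configuration, as noted above.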
================================================== === File: docs/book/component-guide/orchestrators/azureml.md === # AzureML Orchestrator Summary ## Overview AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire ML lifecycle, from data preparation to monitoring. ## When to Use AzureML Use the AzureML orchestrator if: - You are already using Azure. - You need a production-grade orchestrator. - You want a UI to track pipeline runs. - You prefer a managed solution for running pipelines. ## Implementation The ZenML AzureML orchestrator utilizes the AzureML Python SDK v2 to create AzureML `CommandComponent` for each ZenML step, assembling them into a pipeline. ## Deployment Requirements 1. Deploy ZenML to the cloud. 2. Ensure ZenML is connected to the remote server. 3. Install the ZenML `azure` integration: ```shell zenml integration install azure ``` 4. Install Docker or set up a remote image builder. 5. Set up a remote artifact store and container registry. 6. Create an Azure resource group with an AzureML workspace. ### Authentication Methods 1. **Default Authentication**: Simplifies authentication for local development and Azure hosting. 2. **Service Principal Authentication (recommended)**: Connects cloud components securely. Requires creating a service principal and registering a ZenML Azure Service Connector: ```bash zenml service-connector register --type azure -i zenml orchestrator connect -c ``` ## Docker ZenML builds a Docker image for each pipeline run at `/zenml:`, containing your code. ## AzureML UI AzureML workspace includes a Machine Learning studio for managing and debugging pipelines. Double-click steps to view configurations and logs. ## Settings The `AzureMLOrchestratorSettings` class configures compute resources for pipeline execution. It supports three modes: ### 1. Serverless Compute (Default) ```python from zenml.integrations.azure.flavors import AzureMLOrchestratorSettings azureml_settings = AzureMLOrchestratorSettings(mode="serverless") ``` ### 2. Compute Instance ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-instance", compute_name="my-gpu-instance", size="Standard_NC6s_v3", idle_time_before_shutdown_minutes=20, ) ``` ### 3. Compute Cluster ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-cluster", compute_name="my-gpu-cluster", size="Standard_NC6s_v3", tier="Dedicated", min_instances=2, max_instances=10, idle_time_before_scaledown_down=60, ) ``` ## Scheduling Pipelines AzureML orchestrator supports scheduling pipelines using cron expressions or intervals: ```python from zenml.config.schedule import Schedule pipeline.run(schedule=Schedule(cron_expression="*/5 * * * *")) ``` Note: Users must manage the lifecycle of schedules via the Azure UI. For more details on compute sizes, refer to the [AzureML documentation](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2#supported-vm-series-and-sizes). ================================================== === File: docs/book/component-guide/orchestrators/tekton.md === # Tekton Orchestrator Documentation Summary ## Overview Tekton is an open-source framework for CI/CD systems that enables developers to build, test, and deploy applications across various environments. The Tekton orchestrator in ZenML is designed for remote deployments and is not recommended for local setups. 
## When to Use Tekton Use the Tekton orchestrator if: - You need a production-grade orchestrator. - You require a UI to track pipeline runs. - You are comfortable with Kubernetes setup and maintenance. - You can deploy and maintain Tekton Pipelines. ## Deployment Steps 1. **Set Up Kubernetes Cluster**: Ensure you have a remote ZenML server and a Kubernetes cluster (EKS, GKE, or AKS) set up. 2. **Install `kubectl`**: Download and configure `kubectl` for your cluster. 3. **Install Tekton Pipelines**: Follow the installation guide for Tekton Pipelines. **Example Commands**: - For AWS EKS: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` - For GCP GKE: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` - For Azure AKS: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` **Note**: Ensure Tekton Pipelines version is >=0.38.3. ## Usage Requirements To use the Tekton orchestrator: - Install the ZenML `tekton` integration: ```shell zenml integration install tekton -y ``` - Ensure Docker is installed and running. - Have a remote artifact store and container registry as part of your stack. - Optionally, configure `kubectl` for remote access. ### Registering the Orchestrator 1. **With Service Connector**: ```shell zenml orchestrator register --flavor tekton zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Without Service Connector**: ```shell zenml orchestrator register --flavor=tekton --kubernetes_context= zenml stack register -o ... --set ``` ## Running a Pipeline To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` ## Tekton UI Access the Tekton UI for detailed pipeline run information: ```bash kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' ``` ## Additional Configuration You can customize the Tekton orchestrator using `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: ```python from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings tekton_settings = TektonOrchestratorSettings( pod_settings={ "affinity": {...}, "tolerations": [...] } ) ``` Specify resource settings for hardware requirements: ```python resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` Apply settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_pipeline(): ... @step(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_step(): ... ``` ## GPU Configuration For running steps on GPU, follow the instructions to enable CUDA for acceleration. For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/#zenml.integrations.tekton.orchestrators.tekton_orchestrator.TektonOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/kubeflow.md === ### Kubeflow Orchestrator Overview The Kubeflow orchestrator is a ZenML integration that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to manage and run pipelines. It is designed for remote ZenML deployments and is not suitable for local setups. ### When to Use Use the Kubeflow orchestrator if you need: - A production-grade orchestrator with a UI for tracking pipeline runs. 
- Familiarity with Kubernetes or willingness to set up a Kubernetes cluster. - Capability to deploy and maintain Kubeflow Pipelines. ### Deployment Steps To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. The setup varies by cloud provider: #### AWS 1. Set up an [EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html). 2. Configure AWS CLI and `kubectl`: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. Optionally, set up an AWS Service Connector for secure access. #### GCP 1. Set up a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/quickstart). 2. Configure Google Cloud CLI and `kubectl`: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. Optionally, set up a GCP Service Connector. #### Azure 1. Set up an [AKS cluster](https://azure.microsoft.com/en-in/services/kubernetes-service/#documentation). 2. Configure Azure CLI and `kubectl`: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. Adjust the workflow controller's `containerRuntimeExecutor` to `k8sapi` if using containerd. #### Other Kubernetes 1. Set up a Kubernetes cluster. 2. Install `kubectl` and configure it. 3. Install Kubeflow Pipelines. 4. Optionally, set up a Kubernetes Service Connector. ### Usage Requirements To use the Kubeflow orchestrator: - A Kubernetes cluster with Kubeflow Pipelines installed. - A remote ZenML server. - ZenML `kubeflow` integration installed: ```shell zenml integration install kubeflow ``` - Docker installed (unless using a remote Image Builder). - `kubectl` installed (optional). ### Registering the Orchestrator 1. **With Service Connector**: ```shell zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator register --flavor kubeflow --connector --resource-id zenml stack register -o -a -c ``` 2. **Without Service Connector**: ```shell zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack register -o -a -c ``` ### Running a Pipeline Run a ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Accessing Kubeflow UI Retrieve the Kubeflow UI URL for pipeline runs: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` ### Additional Configuration You can configure the Kubeflow orchestrator with `KubeflowOrchestratorSettings` for: - `client_args`: KFP client arguments. - `user_namespace`: Namespace for experiments and runs. - `pod_settings`: Node selectors, affinity, and tolerations. 
Example configuration: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={"affinity": {...}, "tolerations": [...]} ) ``` ### Multi-Tenancy Note For multi-tenant deployments, include the `kubeflow_hostname` parameter when registering: ```shell zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Use the following for authentication: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="{{kubeflow_secret.username}}", client_password="{{kubeflow_secret.password}}", user_namespace="namespace_name" ) ``` ### Using Secrets Create secrets for sensitive information: ```shell zenml secret create kubeflow_secret --username=admin --password=abc123 ``` ### Conclusion For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/vertex.md === # Google Cloud Vertex AI Orchestrator Documentation Summary ## Overview Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP) for running production-ready, repeatable pipelines with minimal setup. This orchestrator is intended for remote ZenML deployments only. ## When to Use Use the Vertex orchestrator if: - You are using GCP. - You need a production-grade orchestrator with UI tracking. - You prefer a managed, serverless solution. ## Deployment Steps 1. **Deploy ZenML to the Cloud**: Recommended to deploy in the same GCP project as Vertex infrastructure. 2. **Enable Vertex APIs**: Ensure relevant APIs are enabled in your GCP project. ## Prerequisites - Install ZenML GCP integration: ```shell zenml integration install gcp ``` - Docker installed and running. - Remote artifact store and container registry configured. - GCP credentials with necessary permissions. ### GCP Credentials and Permissions You need a GCP user account or service accounts with proper permissions. Authentication options include: - Using `gcloud` CLI. - Service account key file. - Recommended: GCP Service Connector with linked credentials. ### Vertex AI Pipeline Components 1. **ZenML Client Environment**: Runs ZenML code, requires permissions to create jobs in Vertex Pipelines. 2. **Vertex AI Pipeline Environment**: Runs pipeline steps, requires a workload service account with permissions to execute pipelines. ### Configuration Use-Cases 1. **Local `gcloud` CLI**: ```shell zenml orchestrator register \ --flavor=vertex \ --project= \ --location= \ --synchronous=true ``` 2. **GCP Service Connector with Single Service Account**: ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic zenml orchestrator register \ --flavor=vertex \ --location= \ --synchronous=true \ --workload_service_account=@.iam.gserviceaccount.com zenml orchestrator connect --connector ``` 3. **GCP Service Connector with Different Service Accounts**: Involves multiple service accounts for least privilege access. ### Configuring the Stack To register and activate a stack with the new orchestrator: ```shell zenml stack register -o ... 
--set ``` ### Running Pipelines Run any ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Vertex UI Access pipeline run details and logs via the Vertex UI. Retrieve the URL in Python: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Scheduling Pipelines Schedule pipelines using: ```python from zenml.config.schedule import Schedule pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` **Note**: Only `cron_expression`, `start_time`, and `end_time` are supported. ### Additional Configuration Configure labels and resource settings: ```python from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings vertex_settings = VertexOrchestratorSettings(labels={"key": "value"}) resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` For GPU usage: ```python vertex_settings = VertexOrchestratorSettings( pod_settings={"node_selectors": {"cloud.google.com/gke-accelerator": "NVIDIA_TESLA_A100"}} ) resource_settings = ResourceSettings(gpu_count=1) ``` ### Enabling CUDA for GPU Follow specific instructions to enable CUDA for GPU acceleration. For further details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.flavors.vertex_orchestrator_flavor.VertexOrchestratorSettings). ================================================== === File: docs/book/component-guide/orchestrators/custom.md === # Custom Orchestrator Development in ZenML ## Overview This documentation provides guidance on developing a custom orchestrator in ZenML, an orchestration framework. Familiarity with ZenML's component flavor concepts is recommended before proceeding. ## Base Implementation ZenML allows for orchestration with various tools through the `BaseOrchestrator`, which abstracts ZenML-specific details and provides a simplified interface. ### Key Classes - **BaseOrchestratorConfig**: Base class for all orchestrator configurations. - **BaseOrchestrator**: Abstract class requiring implementation of: - `prepare_or_run_pipeline(deployment, stack, environment)`: Prepares and runs the pipeline. - `get_orchestrator_run_id()`: Returns a unique run ID for the active orchestrator run. - **BaseOrchestratorFlavor**: Base class for orchestrator flavors, requiring: - `name`: Flavor name. - `type`: Returns `StackComponentType.ORCHESTRATOR`. - `config_class`: Returns `BaseOrchestratorConfig`. - `implementation_class`: Implementation class for the flavor. ## Creating a Custom Orchestrator 1. **Inherit from `BaseOrchestrator`** and implement the required methods. 2. **Create a configuration class** inheriting from `BaseOrchestratorConfig` for custom parameters. 3. **Inherit from `BaseOrchestratorFlavor`**, providing a name for the flavor. ### Registering the Flavor Use the CLI to register your orchestrator flavor: ```shell zenml orchestrator flavor register ``` Example: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` ### Listing Available Flavors To see registered flavors: ```shell zenml orchestrator flavor list ``` ## Implementation Guide 1. **Create your orchestrator class**: Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker. 2. 
**Implement `prepare_or_run_pipeline(...)`**: Convert the pipeline for your orchestration tool and run it, ensuring correct execution order and environment variables. 3. **Implement `get_orchestrator_run_id()`**: Return a unique ID for each pipeline run. ### Optional Features - **Scheduling**: Handle `deployment.schedule` if supported. - **Resource Specification**: Manage CPU, GPU, or memory settings via `step.config.resource_settings`. ### Code Sample ```python from zenml.models import PipelineDeploymentResponseModel from zenml.orchestrators import ContainerizedOrchestrator from zenml.stack import Stack class MyOrchestrator(ContainerizedOrchestrator): def get_orchestrator_run_id(self) -> str: ... def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> None: if deployment.schedule: ... for step_name, step in deployment.step_configurations.items(): image = self.get_image(deployment, step_name) command = StepEntrypointConfiguration.get_entrypoint_command() arguments = StepEntrypointConfiguration.get_entrypoint_arguments(step_name, deployment.id) ... ``` ## Enabling GPU Support To run steps on a GPU, follow the specific instructions to enable CUDA for GPU acceleration. For more details and examples, refer to the full documentation and source code on GitHub. ================================================== === File: docs/book/how-to/debug-and-solve-issues.md === # ZenML Debugging Guide This guide provides best practices for debugging common issues in ZenML, including when to seek help and how to effectively communicate your problem. ### When to Get Help Before asking for help, check the following: - Search Slack, GitHub issues, and ZenML documentation. - Review the [common errors](debug-and-solve-issues.md#most-common-errors) section. - Analyze [additional logs](debug-and-solve-issues.md#41-additional-logs) and [client/server logs](debug-and-solve-issues.md#client-and-server-logs). If you still need assistance, post your question on [Slack](https://zenml.io/slack). ### How to Post on Slack Provide the following information for clarity: 1. **System Information**: Run the command below and share the output: ```shell zenml info -a -s ``` For specific package issues, use: ```shell zenml info -p ``` 2. **What Happened**: Describe your goal, expectations, and actual results. 3. **Reproducing the Error**: Provide step-by-step instructions or a video. 4. **Relevant Log Output**: Attach relevant logs and error tracebacks. Include outputs from: ```shell zenml status zenml stack describe ``` For orchestrator logs, include those from the relevant pod. ### Additional Logs If default logs are insufficient, increase verbosity by setting the environment variable: ```shell export ZENML_LOGGING_VERBOSITY=DEBUG ``` Refer to documentation for setting environment variables on different OS. ### Client and Server Logs To view server logs, run: ```shell zenml logs ``` ### Common Errors 1. **Error initializing rest store**: ```bash RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237': ... ``` Solution: Re-run `zenml login --local` after a machine restart. 2. **Column 'step_configuration' cannot be null**: ```bash sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null") ``` Solution: Ensure step configurations are within the character limit. 3. 
**'NoneType' object has no attribute 'name'**: ```shell AttributeError: 'NoneType' object has no attribute 'name' ``` Solution: Register the required stack components, e.g.: ```shell zenml experiment-tracker register mlflow_tracker --flavor=mlflow zenml stack update -e mlflow_tracker ``` This guide aims to streamline the debugging process and enhance communication for effective problem resolution in ZenML. ================================================== === File: docs/book/how-to/project-setup-and-management/interact-with-secrets.md === # ZenML Secrets Management Documentation Summary ## Overview of ZenML Secrets ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. ## Creating Secrets ### CLI Method To create a secret named `` with key-value pairs: ```shell zenml secret create --= --= ``` Alternatively, use JSON or YAML format: ```shell zenml secret create --values='{"key1":"value1","key2":"value2"}' ``` For interactive creation: ```shell zenml secret create -i ``` For large values or special characters, read from a file: ```bash zenml secret create --key=@path/to/file.txt ``` ### Python SDK Method Using the ZenML client API: ```python from zenml.client import Client client = Client() client.create_secret(name="my_secret", values={"username": "admin", "password": "abc123"}) ``` ## Secret Scope Secrets can be scoped to a user, ensuring only the active user can access them: ```shell zenml secret create --scope user --= ``` ## Accessing Secrets ### Reference in Stack Components Use the syntax `{{.}}` to reference secrets in stack component attributes: ```shell zenml secret create mlflow_secret --username=admin --password=abc123 zenml experiment-tracker register mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ``` ### Validation of Secrets ZenML validates the existence of secrets and keys before running a pipeline. Control validation level with `ZENML_SECRET_VALIDATION_LEVEL`: - `NONE`: No validation. - `SECRET_EXISTS`: Checks if the secret exists. - `SECRET_AND_KEY_EXISTS`: (default) Checks both secret and key existence. ### Fetching Secret Values in Steps Access secrets in steps using the ZenML `Client` API: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: secret = Client().get_secret("") authenticate_to_some_api(username=secret.secret_values["username"], password=secret.secret_values["password"]) ``` ## Additional Resources For more details, refer to the full CLI guide [here](https://sdkdocs.zenml.io/latest/cli/#zenml.cli--secrets-management) and the Client API reference [here](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/). ================================================== === File: docs/book/how-to/project-setup-and-management/README.md === # Project Setup and Management This section outlines the essential steps for setting up and managing ZenML projects. ## Key Steps for Project Setup 1. **Installation**: Install ZenML using pip: ```bash pip install zenml ``` 2. **Initialize a Project**: Create a new ZenML project with: ```bash zenml init ``` 3. **Configure a Stack**: Set up a stack that includes components like orchestrators, artifact stores, and metadata stores. Use: ```bash zenml stack register --orchestrator --artifact-store --metadata-store ``` 4. **Create Pipelines**: Define pipelines using decorators and functions. 
Example: ```python @pipeline def my_pipeline(): step1 = step1_op() step2 = step2_op(step1) ``` 5. **Run Pipelines**: Execute pipelines with: ```bash zenml pipeline run ``` ## Project Management - **Version Control**: Use Git for version control to manage changes in your project. - **Environment Management**: Utilize virtual environments to isolate dependencies. - **Documentation**: Maintain clear documentation for project structure and components. By following these steps, users can effectively set up and manage ZenML projects, ensuring a streamlined workflow for machine learning operations. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/access-management.md === # Access Management and Roles in ZenML This guide outlines user roles and access management in ZenML, essential for project security and efficiency. ## Typical Roles in an ML Project Common roles include: - **Data Scientists**: Develop and run pipelines. - **MLOps Platform Engineers**: Manage infrastructure and stack components. - **Project Owners**: Oversee ZenML deployment and user access. Roles may vary, but responsibilities can be adapted to your project. ## Service Connectors Service connectors integrate cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors, while Data Scientists can use them to create stack components without accessing sensitive credentials. ### Example Permissions - **Data Scientist Role**: Can create stack components and run pipelines but cannot create, update, or delete connectors or read secret values. - **MLOps Platform Engineer Role**: Has permissions to create, update, delete connectors, and read secret values. RBAC features are available in ZenML Pro. ## Upgrading the ZenML Server Project Owners decide on server upgrades after consulting teams. MLOps Platform Engineers typically handle the upgrade process, ensuring data backup and minimal service disruption. ## Migrating and Maintaining Pipelines Data Scientists own pipeline code but must collaborate with Platform Engineers to test compatibility with new ZenML versions. They should review release notes and migration guides during upgrades. ## Best Practices for Access Management - **Regular Audits**: Review user access and permissions periodically. - **Role-Based Access Control (RBAC)**: Streamline permission management. - **Least Privilege**: Assign minimal necessary permissions. - **Documentation**: Keep clear records of roles and access policies. RBAC and permission assignment are exclusive to ZenML Pro users. Following these practices ensures a secure and collaborative ZenML environment. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md === # Sharing Code and Libraries within Teams ## Overview This guide outlines how teams can share code libraries and components using ZenML to enhance collaboration, standardization, and robustness across projects. ## What Can Be Shared ### Custom Components 1. **Custom Flavors**: Integrations not built-in with ZenML. - Create in a shared repository. - Implement as per ZenML documentation. - Register using ZenML CLI: ```bash zenml artifact-store flavor register ``` 2. **Custom Steps**: Created in a separate repository and referenced like Python modules. 3. **Custom Materializers**: Common components for sharing. - Create in a shared repository. 
- Implement as per ZenML documentation. - Import and use in projects. ## How to Distribute Shared Components ### Shared Private Wheels - **Benefits**: Easy installation, version and dependency management, privacy. - **Setup**: 1. Create a private PyPI server (e.g., AWS CodeArtifact). 2. Build code into wheel format. 3. Upload to the private server. 4. Configure pip to use the private server. 5. Install packages using pip. ### Using Shared Libraries with `DockerSettings` - Specify shared libraries in the `Dockerfile` at runtime. - **Installation Methods**: - List of requirements: ```python from zenml.config import DockerSettings docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} ) ``` - Requirements file: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") ``` - Example `requirements.txt`: ``` --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ my-simple-package==0.1.0 ``` ## Best Practices - **Version Control**: Use Git for shared code repositories. - **Access Controls**: Implement security measures for private servers. - **Documentation**: Maintain clear and comprehensive documentation. - **Regular Updates**: Keep libraries updated and communicate changes. - **Continuous Integration**: Set up CI for quality assurance of shared components. By following these guidelines, teams can effectively share code and libraries within the ZenML framework, enhancing collaboration and accelerating development. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md === # Organizing Stacks, Pipelines, Models, and Artifacts in ZenML This guide provides an overview of how to effectively organize stacks, pipelines, models, and artifacts in ZenML, which are essential for MLOps. ## Key Concepts - **Stacks**: Configuration of tools and infrastructure for running pipelines. Composed of components like orchestrators, container registries, and artifact stores. They enable seamless transitions between environments (local, staging, production) and can be reused across multiple pipelines to reduce configuration overhead and promote reproducibility. - **Pipelines**: Series of steps representing tasks in the ML workflow, such as data preparation and model training. It’s best practice to separate pipelines by task (e.g., training vs. inference) for modularity and easier management. - **Models**: Collections of pipelines, artifacts, and metadata tied to a specific project. Models facilitate data transfer between pipelines and can be managed through the Model Control Plane, which allows for versioning and stage management. - **Artifacts**: Outputs of pipeline steps that are tracked and reused across pipelines. Proper naming and logging of metadata enhance traceability and organization. ## Organizing Your Workflow ### Pipelines - Separate pipelines for different tasks to run them independently and manage complexity. - Allows multiple team members to work on different pipelines without interference. ### Models - Use a Model to connect related pipelines and facilitate data transfer. - The Model Control Plane helps manage model versions and stages. ### Artifacts - Track and reuse artifacts across pipelines, ensuring clear history and traceability. 
- Log metadata for better visibility in the Model Control Plane. ## Example Workflow 1. Team members create separate pipelines for feature engineering, training, and inference. 2. They use a shared stack for local testing, allowing quick iterations. 3. Ensure preprocessing steps are consistent across pipelines. 4. Use a ZenML Model to link artifacts from training to inference. 5. Manage model versions with the Model Control Plane to promote the best performing model to production. ## Rules of Thumb - **Models**: One Model per ML use-case; group related resources. - **Stacks**: Separate stacks for different environments; share production stacks for consistency. - **Naming**: Consistent naming conventions; use tags for organization; document configurations and dependencies. Following these guidelines supports a clean and scalable MLOps workflow as projects grow. For further details, refer to the ZenML documentation. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md === ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md === ### Creating Your Own ZenML Template Creating a ZenML template helps standardize and share ML workflows. ZenML utilizes [Copier](https://copier.readthedocs.io/en/stable/) for template management. Here’s a concise guide: 1. **Create a Repository**: Set up a new repository to store your template's code and configuration files. 2. **Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a base to define your ML steps and pipelines. 3. **Create `copier.yml`**: This file specifies template parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. 4. **Test Your Template**: Use the following command to generate a new project from your template: ```bash copier copy https://github.com/your-username/your-template.git your-project ``` 5. **Use with ZenML**: Initialize your ZenML project with your template: ```bash zenml init --template https://github.com/your-username/your-template.git ``` To specify a version, use: ```bash zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0 ``` ### Additional Notes - Keep your template updated with best practices. - For practical examples, install the `e2e_batch` template: ```bash mkdir e2e_batch cd e2e_batch zenml init --template e2e_batch --template-with-defaults ``` This guide enables you to quickly set up new ML projects using your own ZenML templates. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md === # ZenML Project Templates Overview **Warning:** This documentation refers to an older version of ZenML. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). ## Purpose of Project Templates ZenML project templates provide a quick way to understand the ZenML framework and start building ML pipelines. They include a collection of steps, pipelines, and a simple CLI.
## Available Project Templates | Project Template [Short name] | Tags | Description | |-------------------------------|------|-------------| | [Starter template](https://github.com/zenml-io/template-starter) [starter] | basic, scikit-learn | Basic ML components for starting with ZenML, including parameterized steps, a model training pipeline, and a simple CLI using scikit-learn. | | [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [e2e_batch] | etl, hp-tuning, model-promotion, drift-detection, batch-prediction, scikit-learn | A comprehensive template with pipelines for data loading, preprocessing, hyperparameter tuning, model training, evaluation, promotion, drift detection, and batch inference. | | [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [nlp] | nlp, hp-tuning, model-promotion, training, pytorch, gradio, huggingface | An NLP training pipeline for tokenization, training, hyperparameter tuning, evaluation, and deployment of BERT or GPT-2 models, with local testing using Gradio. | **Note:** ZenML is seeking collaboration for design partnerships. If you have a project to share, join our [Slack](https://zenml.io/slack/). ## Using a Project Template 1. **Install ZenML with templates:** ```bash pip install zenml[templates] ``` 2. **Generate a project from a template:** ```bash zenml init --template # Example: zenml init --template e2e_batch ``` 3. **Use default values:** ```bash zenml init --template --template-with-defaults # Example: zenml init --template e2e_batch --template-with-defaults ``` **Warning:** These templates differ from 'Run Templates' used for triggering pipelines. More information on Run Templates can be found [here](https://docs.zenml.io/how-to/trigger-pipelines). ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/set-up-repository.md === # ZenML Repository Structure and Best Practices ## Recommended Project Structure The following is a suggested structure for a ZenML project: ```markdown . ├── .dockerignore ├── Dockerfile ├── steps │ ├── loader_step │ │ ├── loader_step.py │ └── training_step ├── pipelines │ ├── training_pipeline │ │ ├── training_pipeline.py ├── notebooks │ └── *.ipynb ├── requirements.txt ├── .zen └── run.py ``` ### Key Points: - **Project Templates**: All ZenML project templates follow this structure. - **Steps and Pipelines**: Organize steps and pipelines in separate folders; simpler projects can keep steps at the top level. - **Code Repository**: Registering your repository can enhance version tracking and speed up Docker image builds. ## Steps - Store each step in separate Python files to manage utilities and dependencies effectively. - Use the `logging` module for logging within steps, which will be recorded in the ZenML dashboard. ```python from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader(): logger.info("My logs") ``` ## Pipelines - Keep pipelines in separate Python files. - Avoid naming pipelines or instances "pipeline" to prevent conflicts with the imported `pipeline` decorator. - Unique pipeline names are crucial for maintaining clear run histories. ## .dockerignore - Use `.dockerignore` to exclude unnecessary files from Docker images, improving build speed and reducing image size. ## Dockerfile (Optional) - ZenML uses the official ZenML Docker image by default. You can customize this with your own `Dockerfile`. 
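To wire a custom `Dockerfile` into a pipeline, ZenML's `DockerSettings` can point at it. A minimal sketch, assuming a `Dockerfile` sits at the repository root; the file path and pipeline name are placeholders:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Build the pipeline image from a custom Dockerfile instead of the
# default ZenML parent image (path is a placeholder).
docker_settings = DockerSettings(dockerfile="Dockerfile")


@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
```

The `requirements`-based approach shown elsewhere in this summary remains the simpler option when only extra Python packages are needed.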
## Notebooks - Organize all Jupyter notebooks in a dedicated folder. ## .zen - Run `zenml init` at the project root to define the project scope and establish the source's root, which is important for import paths and configurations. ## run.py - Place pipeline runners in the project root to ensure correct import resolution. If no `.zen` file exists, this file implicitly defines the source's root. This structure and these practices help maintain organization and efficiency in ZenML projects. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md === ### Summary of ZenML Code Repository Documentation **Overview**: ZenML allows tracking code versions and optimizing Docker builds by connecting to code repositories like GitHub and GitLab. #### Connecting a Git Repository - A code repository in ZenML is a remote location for your code, facilitating version tracking for pipeline runs and speeding up Docker image builds. - To register a code repository, install the relevant ZenML integration: ```shell zenml integration install ``` - Register using the CLI: ```shell zenml code-repository register --type= [--CODE_REPOSITORY_OPTIONS] ``` #### Available Implementations 1. **GitHub**: - Install GitHub integration: ```shell zenml integration install github ``` - Register a GitHub repository: ```shell zenml code-repository register --type=github \ --url= --owner= --repository= \ --token= ``` - Use secrets management for the GitHub token: ```shell zenml secret create github_secret --pa_token= zenml code-repository register ... --token={{github_secret.pa_token}} ``` 2. **GitLab**: - Install GitLab integration: ```shell zenml integration install gitlab ``` - Register a GitLab repository: ```shell zenml code-repository register --type=gitlab \ --url= --group= --project= \ --token= ``` - Use secrets management for the GitLab token: ```shell zenml secret create gitlab_secret --pa_token= zenml code-repository register ... --token={{gitlab_secret.pa_token}} ``` #### Developing a Custom Code Repository - For other platforms, subclass `zenml.code_repositories.BaseCodeRepository` and implement required methods: ```python class BaseCodeRepository(ABC): @abstractmethod def login(self) -> None: """Logs into the code repository.""" @abstractmethod def download_files(self, commit: str, directory: str, repo_sub_directory: Optional[str]) -> None: """Downloads files from the code repository.""" @abstractmethod def get_local_context(self, path: str) -> Optional["LocalRepositoryContext"]: """Gets a local repository context from a path.""" ``` - Register the custom repository: ```shell zenml code-repository register --type=custom --source=my_module.MyRepositoryClass [--CODE_REPOSITORY_OPTIONS] ``` This documentation provides essential steps and commands for integrating GitHub and GitLab with ZenML, as well as guidance for creating custom code repositories. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/README.md === # Setting Up a Well-Architected ZenML Project This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration. ## Importance of a Well-Architected Project A well-architected ZenML project is vital for effective machine learning operations (MLOps), providing a foundation for efficient model development, deployment, and maintenance. 
## Key Components ### Repository Structure - Organize folders for pipelines, steps, and configurations. - Maintain clear separation of concerns and consistent naming conventions. - Refer to the [Set up repository guide](./set-up-repository.md) for details. ### Version Control and Collaboration - Integrate with Git for tracking changes and team collaboration. - Enables faster pipeline builds by reusing images and code from the repository. - Learn more in the [Set up a repository guide](./set-up-repository.md). ### Stacks, Pipelines, Models, and Artifacts - **Stacks:** Define infrastructure and tool configurations. - **Models:** Represent ML models and metadata. - **Pipelines:** Encapsulate ML workflows. - **Artifacts:** Track data and model outputs. - See [Organizing Stacks, Pipelines, Models, and Artifacts guide](../collaborate-with-team/stacks-pipelines-models.md). ### Access Management and Roles - Define roles (e.g., data scientists, MLOps engineers). - Set up [service connectors](../../infrastructure-deployment/auth-management/README.md) for authorization. - Use [Teams in ZenML Pro](../../../getting-started/zenml-pro/teams.md) for role assignment. - Explore strategies in the [Access Management and Roles guide](../collaborate-with-team/access-management.md). ### Shared Components and Libraries - Promote code reuse with custom flavors, steps, and shared libraries. - Handle authentication for specific libraries. - More details in the [Shared Libraries and Logic for Teams guide](../collaborate-with-team/shared-components-for-teams.md). ### Project Templates - Use pre-made or custom templates for consistency in project setup. - Learn about templates in the [Project Templates guide](../collaborate-with-team/project-templates/README.md). ### Migration and Maintenance - Strategies for migrating legacy code and upgrading ZenML servers. - Best practices are detailed in the [Migration and Maintenance guide](../../advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md#upgrading-your-code). ## Getting Started Explore the guides in this section to begin building your ZenML project. Regularly review and refine your project structure to adapt to your team's needs, ensuring a robust MLOps environment. ================================================== === File: docs/book/how-to/model-management-metrics/README.md === # Model Management and Metrics in ZenML This section outlines the processes for managing models and tracking metrics within ZenML. ## Key Components: 1. **Model Management**: - ZenML provides tools for versioning, storing, and deploying machine learning models. - Models can be registered and organized in a centralized repository. 2. **Metrics Tracking**: - Metrics can be logged and monitored throughout the model lifecycle. - ZenML supports integration with various tracking tools for visualization and analysis. 3. **Version Control**: - Each model version can be tagged and retrieved, ensuring reproducibility. - Users can compare different model versions based on performance metrics. 4. **Deployment**: - Models can be deployed to various environments (e.g., cloud, on-premises). - Deployment configurations can be managed through ZenML's interface. 5. **Integration**: - ZenML integrates with popular ML frameworks and tools for seamless workflow management. - Users can leverage existing libraries for enhanced functionality. 
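As a small illustration of the versioning and metrics points above, model versions and their logged metrics can be inspected through the ZenML Client. A minimal sketch, assuming a model named `my_model` with an `accuracy` value already logged; both names are placeholders:

```python
from zenml.client import Client

client = Client()

# Fetch a registered model version and read a metric logged against it.
model_version = client.get_model_version("my_model", "my_version")
print(model_version.run_metadata["accuracy"])
```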
By utilizing these features, users can effectively manage their machine learning models and ensure consistent tracking of performance metrics throughout the development lifecycle. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-an-artifact.md === # Summary: Attaching Metadata to Artifacts in ZenML In ZenML, metadata enhances artifacts by providing context such as size, structure, or performance metrics, which can be viewed in the ZenML dashboard for easier inspection and comparison. ## Logging Metadata for Artifacts Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. Metadata can be any JSON-serializable value, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. ### Example: ```python import pandas as pd from zenml import step, log_metadata from zenml.metadata.metadata_types import StorageSize @step def process_data_step(dataframe: pd.DataFrame) -> pd.DataFrame: processed_dataframe = ... log_metadata( metadata={ "row_count": len(processed_dataframe), "columns": list(processed_dataframe.columns), "storage_size": StorageSize(processed_dataframe.memory_usage().sum()) }, infer_artifact=True, ) return processed_dataframe ``` ## Selecting the Artifact for Metadata Logging 1. **Using `infer_artifact`**: Automatically selects the output artifact of the step. 2. **Name and Version**: Use both to identify a specific artifact version. 3. **Artifact Version ID**: Directly fetches the specified artifact version. ## Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: ```python from zenml.client import Client client = Client() artifact = client.get_artifact_version("my_artifact", "my_version") print(artifact.run_metadata["metadata_key"]) ``` *Note: The returned value reflects the latest entry for the specified key.* ## Grouping Metadata in the Dashboard You can group metadata into cards by passing a dictionary of dictionaries in the `metadata` parameter. This organizes metadata into logical sections. ### Example: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="version", ) ``` In the ZenML dashboard, `model_metrics` and `data_details` will appear as separate cards. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-run.md === ### Summary: Attaching Metadata to a Run in ZenML In ZenML, metadata can be logged to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Run When logging metadata from within a pipeline step, the metadata key follows the `step_name::metadata_key` format, allowing consistent usage across different steps. 
**Example: Logging Metadata in a Step** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ ClassifierMixin, ArtifactConfig(name="sklearn_classifier", is_model_artifact=True) ]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... log_metadata({ "run_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } }) return classifier ``` #### Manually Logging Metadata Metadata can also be attached to a specific pipeline run using the run ID, which is useful for logging post-execution metrics. **Example: Manual Metadata Logging** ```python from zenml import log_metadata log_metadata( metadata={"post_run_info": {"some_metric": 5.0}}, run_id_name_or_prefix="run_id_name_or_prefix" ) ``` #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client. The latest entry for a specific key will be returned. **Example: Fetching Metadata** ```python from zenml.client import Client client = Client() run = client.get_pipeline_run("run_id_name_or_prefix") print(run.run_metadata["metadata_key"]) ``` ### Important Notes - The `log_metadata` function can be called during or after the execution of a pipeline. - The returned value when fetching metadata reflects the latest entry for the specified key. For the latest ZenML documentation, please refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-model.md === # Attaching Metadata to a Model in ZenML ZenML allows logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, and customer-specific details, aiding in model management and performance interpretation. ## Logging Metadata To log metadata, use the `log_metadata` function, which attaches key-value pairs to a model. This can include metrics and JSON-serializable values, such as custom ZenML types (`Uri`, `Path`, `StorageSize`). ### Example Code ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... log_metadata( metadata={ "evaluation_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } }, infer_model=True, ) return classifier ``` In this example, metadata is associated with the model rather than the classifier artifact, useful for summarizing various pipeline steps. ## Selecting Models with `log_metadata` ZenML provides options for attaching metadata to model versions: 1. **Using `infer_model`**: Infers the model from the step context. 2. **Model Name and Version**: Attaches metadata to a specified model version. 3. **Model Version ID**: Directly attaches metadata to a specific model version. 
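For reference, a minimal sketch of options 2 and 3 — the `model_name`, `model_version`, and `model_version_id` parameters mirror the artifact-oriented parameters shown earlier, and the identifiers below are placeholders; verify the exact names against the SDK docs:

```python
from zenml import log_metadata

# Option 2: attach metadata to a model version identified by name and version
log_metadata(
    metadata={"evaluation_metrics": {"accuracy": 0.95}},
    model_name="my_model",
    model_version="my_version",
)

# Option 3: attach metadata directly to a specific model version by its ID
log_metadata(
    metadata={"evaluation_metrics": {"accuracy": 0.95}},
    model_version_id="<MODEL_VERSION_ID>",
)
```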
## Fetching Logged Metadata To retrieve metadata, use the ZenML Client: ### Example Code ```python from zenml.client import Client client = Client() model = client.get_model_version("my_model", "my_version") print(model.run_metadata["metadata_key"]) ``` When fetching metadata by key, the returned value reflects the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/grouping-metadata.md === ### Grouping Metadata in the Dashboard To group key-value pairs in the ZenML dashboard, pass a dictionary of dictionaries to the `metadata` parameter when logging metadata. This organizes metadata into cards for better visualization. #### Example Code: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="my_artifact_version", ) ``` In the ZenML dashboard, "model_metrics" and "data_details" will appear as separate cards, each displaying their respective key-value pairs. For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/logging-metadata.md === # Tracking Your Metadata in ZenML ZenML provides special metadata types to capture specific information, including `Uri`, `Path`, `DType`, and `StorageSize`. Below is an example of how to use these types: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path log_metadata( metadata={ "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), "preprocessing_script": Path("/scripts/preprocess.py"), "column_types": { "age": DType("int"), "income": DType("float"), "score": DType("int") }, "processed_data_size": StorageSize(2500000) }, ) ``` ### Key Points: - **Uri**: Represents a dataset source URI. - **Path**: Specifies the filesystem path to a script. - **DType**: Describes the data types of specific columns. - **StorageSize**: Indicates the size of processed data in bytes. These types standardize metadata format, ensuring consistent and interpretable logging. For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md === ### Fetching Metadata During Pipeline Composition To access pipeline configuration during composition, use the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. 
#### Example Code ```python from zenml import get_pipeline_context, pipeline @pipeline( extra={ "complex_parameter": [ ("sklearn.tree", "DecisionTreeClassifier"), ("sklearn.ensemble", "RandomForestClassifier"), ] } ) def my_pipeline(): context = get_pipeline_context() after = [] search_steps_prefix = "hp_tuning_search_" for i, model_search_configuration in enumerate(context.extra["complex_parameter"]): step_name = f"{search_steps_prefix}{i}" cross_validation( model_package=model_search_configuration[0], model_class=model_search_configuration[1], id=step_name ) after.append(step_name) select_best_model(search_steps_prefix=search_steps_prefix, after=after) ``` #### Additional Information For more details on the attributes and methods available in `PipelineContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md === ### Summary: Accessing Meta Information in ZenML Pipelines This documentation outlines how to access metadata in real-time during the execution of a ZenML pipeline using the `StepContext`. #### Fetching Metadata with `StepContext` To retrieve information about the currently running pipeline or step, use the `zenml.get_step_context()` function: ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() pipeline_name = step_context.pipeline.name run_name = step_context.pipeline_run.name step_name = step_context.step_run.name ``` Additionally, you can access the output storage URI and the associated Materializer class for saving outputs: ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() uri = step_context.get_output_artifact_uri() # Output storage URI materializer = step_context.get_output_materializer() # Output Materializer ``` For more details on the attributes and methods available in `StepContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-step.md === ### Summary: Attaching Metadata to a Step in ZenML In ZenML, you can log metadata for a specific step using the `log_metadata` function, which allows you to attach a dictionary of key-value pairs as metadata. This metadata can include any JSON-serializable values, including custom classes like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Step When `log_metadata` is called within a step, it automatically attaches the metadata to the currently executing step and its associated pipeline run. This is useful for logging metrics available during execution. **Example:** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... 
log_metadata(metadata={"evaluation_metrics": {"accuracy": accuracy, "precision": precision, "recall": recall}}) return classifier ``` #### Manually Logging Metadata After Execution You can log metadata post-execution using identifiers for the pipeline, step, and run. **Example:** ```python from zenml import log_metadata log_metadata(metadata={"additional_info": {"a_number": 3}}, step_name="step_name", run_id_name_or_prefix="run_id_name_or_prefix") # or log_metadata(metadata={"additional_info": {"a_number": 3}}, step_id="step_id") ``` #### Fetching Logged Metadata To fetch logged metadata, use the ZenML Client: **Example:** ```python from zenml.client import Client client = Client() step = client.get_pipeline_run("pipeline_id").steps["step_name"] print(step.run_metadata["metadata_key"]) ``` **Note:** The fetched value will always reflect the latest entry for the specified key. ### Important Notes - Cached step executions will copy the original step's metadata. - Manually generated metadata after the original step execution will not be included in cached runs. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md === # Tracking and Comparing Metrics and Metadata in ZenML ZenML provides a unified `log_metadata` function to log and manage metrics and metadata across models, artifacts, steps, and runs. ## Logging Metadata ### Basic Use-Case You can log metadata within a step as follows: ```python from zenml import step, log_metadata @step def my_step() -> ...: log_metadata(metadata={"accuracy": 0.91}) ``` This logs the `accuracy` for the step, its pipeline run, and optionally its model version. ### Real-World Example Here’s a comprehensive example of logging various metadata in a machine learning pipeline: ```python from zenml import step, pipeline, log_metadata @step def process_engine_metrics() -> float: log_metadata({ "engine_temperature": 3650, # Kelvin "fuel_consumption_rate": 245, # kg/s "thrust_efficiency": 0.92, }) return 0.92 @step def analyze_flight_telemetry(efficiency: float) -> None: log_metadata({ "altitude": 220000, # meters "velocity": 7800, # m/s "fuel_remaining": 2150, # kg "mission_success_prob": 0.9985, }) @pipeline def telemetry_pipeline(): efficiency = process_engine_metrics() analyze_flight_telemetry(efficiency) ``` This logged data can be visualized in the ZenML Pro dashboard. ### Visualizing and Comparing Metadata (Pro) Once metadata is logged, you can use the Experiment Comparison tool in ZenML Pro to analyze metrics across runs. Key features include: 1. **Table View**: Compare metadata with change tracking. 2. **Parallel Coordinates Plot**: Visualize relationships between metrics. You can compare up to 20 pipeline runs and any numerical metadata (`float` or `int`). ### Additional Use-Cases The `log_metadata` function supports various entities (model, artifact, step, run) with flexible parameters. For more details, refer to: - Log metadata to a step - Log metadata to a run - Log metadata to an artifact - Log metadata to a model **Note**: Older methods like `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata` are deprecated. Use `log_metadata` for future implementations. 
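To make the flexible-parameter point concrete, the sketch below consolidates the targeting options documented in the preceding sections (step context, run, artifact, and model); the parameter names are taken from the examples above, and the identifiers are placeholders:

```python
from zenml import log_metadata

# Inside a step: attaches to the current step, its pipeline run,
# and (if a model is configured) the associated model version
log_metadata(metadata={"accuracy": 0.91})

# To a specific pipeline run, e.g. after execution has finished
log_metadata(metadata={"post_run_info": {"some_metric": 5.0}}, run_id_name_or_prefix="my_run")

# To an artifact version
log_metadata(metadata={"row_count": 1000}, artifact_name="my_artifact", artifact_version="my_version")

# Inside a step: to the model version inferred from the step context
log_metadata(metadata={"accuracy": 0.91}, infer_model=True)
```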
================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/promote-a-model.md === # Model Promotion in ZenML ## Stages and Promotion Model versions in ZenML progress through various lifecycle stages, which serve as metadata to indicate their state. The stages include: - **staging**: Ready for production. - **production**: Actively running in production. - **latest**: Represents the most recent version (not promotable). - **archived**: No longer relevant, typically after moving from another stage. ### Promotion Methods Models can be promoted using three methods: #### 1. CLI Use the following command to promote a model version: ```bash zenml model version update iris_logistic_regression --stage=... ``` #### 2. Cloud Dashboard Promotion via the ZenML Pro dashboard is forthcoming. #### 3. Python SDK The most common method for promoting models: ```python from zenml import Model from zenml.enums import ModelStages MODEL_NAME = "iris_logistic_regression" model = Model(name=MODEL_NAME, version="1.2.3") model.set_stage(stage=ModelStages.PRODUCTION) latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST) latest_model.set_stage(stage=ModelStages.STAGING) ``` In a pipeline context, retrieve the model from the step context: ```python from zenml import get_step_context, step, pipeline from zenml.enums import ModelStages @step def promote_to_staging(): model = get_step_context().model model.set_stage(ModelStages.STAGING, force=True) @pipeline(...) def train_and_promote_model(): ... promote_to_staging(after=["train_and_evaluate"]) ``` ## Fetching Model Versions by Stage To load a model version by its stage: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` This allows for precise control over which model version is used in training and evaluation steps. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/linking-model-binaries-data-to-models.md === # Linking Model Binaries/Data to Models in ZenML ZenML allows linking artifacts generated during pipeline runs to models for lineage tracking and transparency in training, evaluation, and inference processes. ## Configuring the Model at Pipeline Level You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorators: ```python from zenml import Model, pipeline model = Model(name="my_model", version="1.0.0") @pipeline(model=model) def my_pipeline(): ... ``` This links all artifacts from the pipeline run to the specified model. ## Saving Intermediate Artifacts To save intermediate results, use the `save_artifact` utility function. If the step has a Model context configured, it will automatically link to it. 
```python from zenml import step, Model from zenml.artifacts.utils import save_artifact import pandas as pd from typing_extensions import Annotated from zenml.artifacts.artifact_config import ArtifactConfig @step(model=Model(name="MyModel", version="1.2.42")) def trainer(trn_dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig("trained_model")]: for epoch in epochs: checkpoint = model.train(epoch) save_artifact(data=checkpoint, name="training_checkpoint", version=f"1.2.42_{epoch}") return model ``` ## Explicitly Linking Artifacts To link an artifact to a model outside of a step context, use the `link_artifact_to_model` function. ```python from zenml import step, Model, link_artifact_to_model, save_artifact from zenml.client import Client @step def f_() -> None: new_artifact = save_artifact(data="Hello, World!", name="manual_artifact") link_artifact_to_model(artifact_version_id=new_artifact.id, model=Model(name="MyModel", version="0.0.42")) existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_artifact") link_artifact_to_model(artifact_version_id=existing_artifact.id, model=Model(name="MyModel", version="0.2.42")) ``` This allows for flexibility in linking artifacts to models, whether within steps or externally. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/delete-a-model.md === ### Summary: Deleting Models in ZenML This documentation outlines the process for deleting models and their versions in ZenML, which involves removing all links to artifacts, pipeline runs, and associated metadata. #### Deleting All Versions of a Model - **CLI Command:** ```shell zenml model delete ``` - **Python SDK:** ```python from zenml.client import Client Client().delete_model() ``` #### Deleting a Specific Version of a Model - **CLI Command:** ```shell zenml model version delete ``` - **Python SDK:** ```python from zenml.client import Client Client().delete_model_version() ``` For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/model-versions.md === # Model Versions Overview Model versions in ZenML allow tracking of different iterations of a machine learning model, facilitating the full ML lifecycle with dashboard and API functionalities. Users can associate model versions with various stages (e.g., production, staging) and link them to non-technical artifacts like datasets. Model versions are created automatically during training, but can also be explicitly named via the `version` argument in the `Model` object. ## Explicitly Naming Model Versions To explicitly name a model version: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="1.0.5") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` If a model version exists, it is automatically associated with the pipeline. ## Templated Naming for Model Versions For continuous projects, use templated naming for unique and semantically readable model versions: ```python from zenml import Model, step, pipeline model = Model(name="{team}_my_model", version="experiment_with_phi_3_{date}_{time}") @step(model=model) def llm_trainer(...) -> ...: ... 
@pipeline(model=model, substitutions={"team": "Team_A"}) def training_pipeline(...): # training happens here ``` This will generate model versions like `experiment_with_phi_3_2024_08_30_12_42_53`. Standard substitutions include `{date}` and `{time}`. ## Fetching Model Versions by Stage Assign stages to model versions (e.g., `production`, `staging`) for easier retrieval: ```shell zenml model version update MODEL_NAME --stage=STAGE ``` To fetch a model version by stage: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` ## Autonumbering of Versions ZenML automatically numbers model versions. If no version is specified, a new version is generated: ```python from zenml import Model, step model = Model(name="my_model", version="even_better_version") @step(model=model) def svc_trainer(...) -> ...: ... ``` ZenML tracks the version sequence: ```python from zenml import Model earlier_version = Model(name="my_model", version="really_good_version").number # == 5 updated_version = Model(name="my_model", version="even_better_version").number # == 6 ``` This ensures proper versioning and iteration tracking throughout the model's lifecycle. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-artifacts-via-a-model.md === ### Structuring an MLOps Project with ZenML This documentation outlines how to structure an MLOps project using ZenML, focusing on the integration of artifacts, models, and pipelines. #### Key Components: 1. **Pipelines**: MLOps projects typically consist of multiple pipelines, including: - **Feature Engineering Pipeline**: Prepares raw data. - **Training Pipeline**: Trains models using processed data. - **Inference Pipeline**: Runs predictions using trained models. - **Deployment Pipeline**: Deploys models to production. The structure of these pipelines can vary based on project requirements, and they often need to share information such as artifacts and metadata. #### Common Patterns for Artifact Exchange: **Pattern 1: Artifact Exchange via `Client`** - Use the ZenML Client to exchange artifacts between pipelines. For example, a feature engineering pipeline produces datasets that the training pipeline consumes. ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): train_data, test_data = prepare_data() @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(model, sklearn_classifier) ``` *Note*: Artifacts are referenced, not materialized in memory during pipeline execution. **Pattern 2: Artifact Exchange via `Model`** - Use ZenML Models as references for artifact exchange. For instance, a training pipeline (`train_and_promote`) generates models, while an inference pipeline (`do_predictions`) uses the latest promoted model without needing to know artifact IDs. 
```python from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` *Alternative Approach*: Resolve the model artifact at the pipeline level to avoid caching issues. ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model inference_data = load_data() predict(model=model.get_model_artifact("trained_model"), data=inference_data) if __name__ == "__main__": do_predictions() ``` #### Conclusion Both artifact exchange patterns are valid; the choice depends on project needs and preferences. For more details on setting up a ZenML project, refer to the [best practices](https://docs.zenml.io). ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-a-model-in-code.md === # Summary of ZenML Model Loading Documentation ## Loading a Model in Code ### 1. Load the Active Model in a Pipeline You can load the active model within a pipeline to access model metadata and associated artifacts. ```python from zenml import step, pipeline, get_step_context, Model @pipeline(model=Model(name="my_model")) def my_pipeline(): ... @step def my_step(): mv = get_step_context().model # Get model from active step context print(mv.run_metadata["metadata_key"].value) # Access metadata output = mv.get_artifact("my_dataset", "my_version") # Fetch artifact output.run_metadata["accuracy"].value ``` ### 2. Load Any Model via the Client You can also load a model using the `Client` to retrieve specific model versions. ```python from zenml import step from zenml.client import Client from zenml.enums import ModelStages @step def model_evaluator_step(): try: staging_zenml_model = Client().get_model_version( model_name_or_id="", model_version_name_or_number_or_id=ModelStages.STAGING, ) except KeyError: staging_zenml_model = None ``` This documentation outlines two methods for loading models in ZenML: using the active model in a pipeline and utilizing the `Client` to access any model version. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/register-a-model.md === # Model Registration in ZenML Models can be registered in ZenML through various methods: explicit registration via CLI, Python SDK, or implicit registration during a pipeline run. ZenML Pro users can also utilize a dashboard interface for model registration. ## Explicit CLI Registration To register a model using the CLI, use the following command: ```bash zenml model register iris_logistic_regression --license=... --description=... ``` For additional options, run `zenml model register --help`. You can also associate tags using the `--tag` option. ## Explicit Dashboard Registration ZenML Pro users can register models directly from the cloud dashboard interface. 
## Explicit Python SDK Registration To register a model with the Python SDK, use: ```python from zenml.client import Client Client().create_model( name="iris_logistic_regression", license="Copyright (c) ZenML GmbH 2023", description="Logistic regression model trained on the Iris dataset.", tags=["regression", "sklearn", "iris"], ) ``` ## Implicit Registration by ZenML Models are commonly registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator. Here’s an example of a training pipeline: ```python from zenml import pipeline, Model @pipeline( enable_cache=False, model=Model( name="demo", license="Apache", description="Show case Model Control Plane.", ), ) def train_and_promote_model(): ... ``` Running this pipeline creates a new model version and links it to the artifacts. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-artifacts-from-model.md === # Summary of Loading Artifacts from a Model This documentation explains how to load artifacts between pipelines in a machine learning project using ZenML. It focuses on a two-pipeline setup where the first pipeline trains a model, and the second pipeline performs batch inference using the trained model artifacts. ## Key Points: 1. **Model Context**: Use `get_pipeline_context().model` to access the model context during pipeline execution. This context is evaluated at runtime, not during pipeline compilation. 2. **Artifact Loading**: - Use `model.get_model_artifact("trained_model")` to load the trained model artifact during inference. - The artifact retrieval is delayed until the step is executed. 3. **Alternative Method**: You can also use the `Client` class to directly fetch the model version: ```python from zenml.client import Client @pipeline def do_predictions(): model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) inference_data = load_data() predict( model=model.get_model_artifact("trained_model"), data=inference_data, ) ``` 4. **Execution Timing**: In both methods, the actual artifact evaluation occurs during the step execution, ensuring that the most current model version is used. This concise approach ensures that critical information about loading artifacts in ZenML pipelines is retained while eliminating redundancy. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/associate-a-pipeline-with-a-model.md === # Summary: Associating a Pipeline with a Model To associate a pipeline with a model in ZenML, use the following code structure: ```python from zenml import pipeline from zenml import Model from zenml.enums import ModelStages @pipeline( model=Model( name="ClassificationModel", # Unique model name tags=["MVP", "Tabular"], # Tags for filtering version=ModelStages.LATEST # Specify model version or stage ) ) def my_pipeline(): ... ``` This code associates the pipeline with the specified model. If the model exists, a new version is created. To attach the pipeline to an existing model version, specify the version accordingly. Additionally, model configuration can be stored in a configuration file: ```yaml model: name: text_classifier description: A breast cancer classifier tags: ["classifier", "sgd"] ``` This allows for better management and organization of model attributes. 
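Such a configuration file can then be applied at run time via `with_options`, as shown in the configuration-file sections later in this document (a minimal sketch; the file name is a placeholder and `my_pipeline` refers to the pipeline defined above):

```python
if __name__ == "__main__":
    # Applies the model name, description, and tags from the YAML file above
    my_pipeline.with_options(config_path="model_config.yaml")()
```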
================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/README.md === # Use the Model Control Plane A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and business data, encapsulating the logic of ML products. It can be viewed as a "project" or "workspace." **Key Points:** - The technical model (model file/files with weights and parameters) is a primary artifact associated with a ZenML Model, but other artifacts like training data and production predictions are also included. - Models are first-class entities in ZenML, accessible through the ZenML API, client, and the ZenML Pro dashboard. - Models capture lineage information and support version staging, allowing users to manage predictions based on specific stages (e.g., `Production`) and apply business rules for version promotion. - The Model Control Plane provides a unified interface for managing models, integrating pipelines, artifacts, and business data with the technical model. For a complete example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). ================================================== === File: docs/book/how-to/advanced-topics/README.md === # Advanced Topics in ZenML This section discusses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the framework. ## Key Features 1. **Custom Components**: Users can create and integrate custom components into their pipelines, allowing for tailored data processing and model training. 2. **Pipeline Configuration**: Advanced configurations enable users to define pipeline parameters, execution environments, and resource allocation for optimized performance. 3. **Artifact Management**: ZenML supports versioning and management of artifacts, ensuring reproducibility and traceability of experiments. 4. **Integration with ML Tools**: ZenML can be integrated with various machine learning tools and platforms, facilitating seamless workflows. 5. **Monitoring and Logging**: Users can implement monitoring and logging to track pipeline performance and troubleshoot issues effectively. ## Example Code Snippet ```python from zenml.pipelines import pipeline from zenml.steps import step @step def data_preprocessing(): # Data preprocessing logic pass @step def model_training(data): # Model training logic pass @pipeline def my_pipeline(): data = data_preprocessing() model_training(data) ``` This concise overview of advanced topics in ZenML highlights essential features and capabilities, enabling users to leverage the framework effectively for complex machine learning workflows. ================================================== === File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md === ### Summary of ZenML Documentation on Using Prebuilt Docker Images **Overview**: This documentation explains how to skip building a Docker image for ZenML pipelines by using a prebuilt image, which can save time and costs during pipeline execution. **Key Points**: - **Docker Image Building**: Normally, ZenML builds a Docker image with a base ZenML image and project dependencies. This can be time-consuming due to pulling base layers and pushing the final image. - **Prebuilt Image Usage**: To avoid building an image, you can specify a prebuilt image in the `DockerSettings` class by setting the `parent_image` attribute and `skip_build` to `True`. 
**Code Example**:

```python
docker_settings = DockerSettings(
    parent_image="my_registry.io/image_name:tag",
    skip_build=True
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
```

- Ensure the prebuilt image is available in a registry accessible by the orchestrator.

**Requirements for the Parent Image**:

1. **Dependencies**: The prebuilt image must contain all necessary dependencies for the pipeline to run.
2. **Stack Requirements**: Use the following code to retrieve stack requirements:

```python
from zenml.client import Client

stack_name = "<YOUR_STACK>"
# Activate the stack the pipeline will run on, then read its requirements
Client().set_active_stack(stack_name)
active_stack = Client().active_stack
stack_requirements = active_stack.requirements()
```

3. **Integration Requirements**: Gather integration dependencies using:

```python
import itertools

from zenml.integrations.registry import integration_registry
from zenml.integrations.constants import HUGGINGFACE, PYTORCH

required_integrations = [PYTORCH, HUGGINGFACE]
integration_requirements = set(
    itertools.chain.from_iterable(
        integration_registry.select_integration_requirements(
            integration_name=integration,
            target_os=OperatingSystemType.LINUX,
        )
        for integration in required_integrations
    )
)
```

4. **Project-Specific Requirements**: Install project dependencies via:

```Dockerfile
RUN pip install -r FILE
```

5. **System Packages**: Include necessary `apt` packages:

```Dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES
```

6. **Project Code Files**: Ensure your pipeline code is available:
   - If a code repository is registered, ZenML will fetch the code.
   - If `allow_download_from_artifact_store` is `True`, ZenML will upload code to the artifact store.
   - If both options are disabled, include code files directly in the image (not recommended).

**Additional Notes**:
- Ensure Python, `pip`, and `zenml` are installed in the image.
- Using a prebuilt image limits the ability to leverage updates to code or dependencies unless included in the image.

For further details, refer to the [ZenML documentation](https://docs.zenml.io).

==================================================

=== File: docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md ===

### ZenML Image Building Overview

ZenML determines the root directory of source files in the following order:
1. If `zenml init` has been run in the current or parent directory, that directory is the root.
2. If not, the parent directory of the executing Python file is used as the source root.

For managing files in the root directory, use the following attributes in `DockerSettings`:
- **`allow_download_from_code_repository`**: If `True` and the files are in a registered code repository with no local changes, files are downloaded from the repository instead of being included in the image.
- **`allow_download_from_artifact_store`**: If the previous option is `False` or no suitable repository exists, and this is `True`, ZenML uploads your code to the artifact store.
- **`allow_including_files_in_images`**: If both previous options are `False`, and this is `True`, files are included in the Docker image, requiring a new image build for any code changes.

**Warning**: Setting all attributes to `False` is not recommended, as it may lead to unexpected behavior. You must ensure all files are correctly placed in the Docker images used for pipeline execution.

### File Management
- **Excluding Files**: To exclude files when downloading from a repository, use a `.gitignore` file.
- **Including Files**: When including files, ZenML copies all contents of the root directory into the Docker image. To exclude files and reduce image size, use a `.dockerignore` file, which can be specified in two ways: - Place a `.dockerignore` file in the source root directory. - Specify a `.dockerignore` file explicitly: ```python docker_settings = DockerSettings(build_config={"dockerignore": "/path/to/.dockerignore"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` This setup allows for efficient management of files in ZenML Docker images. ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-step.md === ### Summary of Docker Settings Customization in ZenML ZenML allows customization of Docker settings at the step level, enabling the use of different Docker images for specific steps in a pipeline. By default, all steps utilize the Docker image defined at the pipeline level. #### Customizing Docker Settings in a Step To customize Docker settings for a step, use the `DockerSettings` in the step decorator: ```python from zenml import step from zenml.config import DockerSettings @step( settings={ "docker": DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime" ) } ) def training(...): ... ``` #### Alternative Configuration via YAML Docker settings can also be specified in a configuration file: ```yaml steps: training: settings: docker: parent_image: pytorch/pytorch:2.2.0-cuda11.8-cudnn8-runtime required_integrations: - gcp - github requirements: - zenml - numpy ``` For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md === ### Using a Private PyPI Repository To use a private PyPI repository that requires authentication, follow these steps: 1. **Store Credentials Securely**: Use environment variables for credentials. 2. **Configure Package Managers**: Set up `pip` or `poetry` to use these credentials for package installations. 3. **Custom Docker Images**: Consider using Docker images with the necessary authentication configured. #### Example Code for Authentication Setup ```python import os from my_simple_package import important_function from zenml.config import DockerSettings from zenml import step, pipeline docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ['PYPI_TOKEN']}@my-private-pypi-server.com/{os.environ['PYPI_USERNAME']}/"} ) @step def my_step(): return important_function() @pipeline(settings={"docker": docker_settings}) def my_pipeline(): my_step() if __name__ == "__main__": my_pipeline() ``` **Important Note**: Handle credentials with care, using secure methods for management and distribution within your team. ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md === # Reusing Builds in ZenML ## Overview ZenML allows for the reuse of builds to enhance pipeline run efficiency. When a pipeline is executed, ZenML checks for an existing build that matches the pipeline and stack; if found, it reuses it; otherwise, a new build is created. ## What is a Build? A build encapsulates a pipeline and its associated stack, including Docker images with all necessary requirements. It may also contain the pipeline code. 
### CLI Commands - **List Builds:** ```bash zenml pipeline builds list --pipeline_id='startswith:ab53ca' ``` - **Create a Build:** ```bash zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance ``` ## Reusing Builds ZenML automatically finds existing builds. You can specify a build ID in the pipeline configuration to force the use of a specific build. Note that using a specific build will execute the code in the Docker image, not the local code. To include local changes, disconnect your code from the build by either registering a code repository or using the artifact store. ## Artifact Store ZenML can upload your code to the artifact store by default if no code repository is detected. This allows for code reuse without needing to rebuild Docker images. ## Code Repositories Connecting a git repository can significantly speed up Docker builds. When a pipeline is run from a local repository, ZenML builds Docker images without including source files and downloads them into the container before execution. This method allows for the reuse of images built by colleagues. ### Integration Installation To utilize a code repository, ensure the relevant integrations are installed: ```sh zenml integration install github ``` ## Detecting Local Code Repositories ZenML checks if the files used in a pipeline are tracked in registered code repositories. This involves computing the source root and verifying its inclusion in a local checkout. ## Tracking Code Versions If a local repository is detected, ZenML stores a reference to the current commit for the pipeline run, ensuring reproducibility. This reference is only tracked if the local checkout is clean. ## Best Practices - Ensure your local checkout is clean and the latest commit is pushed to avoid file download failures. - For options to disable or enforce file downloads, refer to the [Docker settings documentation](./docker-settings-on-a-pipeline.md). By following these guidelines, you can effectively reuse builds and optimize your ZenML pipeline runs. ================================================== === File: docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md === # Summary of Specifying Pip Dependencies and Apt Packages ## Overview The configuration for specifying pip and apt dependencies is applicable only in remote pipelines, as local pipelines do not utilize Docker images. When a pipeline runs with a remote orchestrator, a Dockerfile is generated at runtime to build the Docker image. ## Key Points - **DockerSettings Import**: Use `from zenml.config import DockerSettings`. - **Automatic Package Installation**: ZenML installs all packages required by the active stack by default. ### Methods to Specify Packages 1. **Replicate Local Environment**: ```python docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") ``` 2. **Custom Command for Requirements**: ```python docker_settings = DockerSettings(replicate_local_python_environment=["poetry", "export", "--extras=train", "--format=requirements.txt"]) ``` 3. **Specify Requirements in Code**: ```python docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) ``` 4. **Specify a Requirements File**: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") ``` 5. **Specify ZenML Integrations**: ```python from zenml.integrations.constants import PYTORCH, EVIDENTLY docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY]) ``` 6. 
**Specify Apt Packages**: ```python docker_settings = DockerSettings(apt_packages=["git"]) ``` 7. **Prevent Automatic Installation**: ```python docker_settings = DockerSettings(install_stack_requirements=False) ``` 8. **Custom Docker Settings for Steps**: ```python docker_settings = DockerSettings(requirements=["tensorflow"]) ``` ### Installation Order ZenML installs packages in the following order: - Local Python environment packages - Stack requirements (unless disabled) - Required integrations - Specified requirements ### Additional Installer Arguments To customize the package installer: ```python docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000}) ``` ### Experimental Feature: Using `uv` To use `uv` for faster package installation: ```python docker_settings = DockerSettings(python_package_installer="uv") ``` Note: `uv` is experimental and may lead to installation errors; switch back to `pip` if issues arise. ### Documentation Reference For more details on using `uv` with PyTorch, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). ================================================== === File: docs/book/how-to/customize-docker-builds/use-your-own-docker-files.md === ### Summary of ZenML Docker Integration ZenML allows users to specify custom Dockerfiles, build contexts, and build options for dynamic image creation during pipeline execution. The build process operates as follows: - **No Dockerfile Specified**: If the pipeline requires an image (due to requirements, environment variables, or file copying), ZenML builds a new image. Otherwise, it uses the specified `parent_image`. - **Dockerfile Specified**: ZenML builds an image from the provided Dockerfile. If additional requirements necessitate another image, ZenML builds a second image; otherwise, it uses the first image for the pipeline. The `DockerSettings` configuration determines the order of package installations: 1. Local Python environment packages. 2. Packages from the `requirements` attribute. 3. Packages from `required_integrations` and stack requirements. **Note**: The intermediate image may also be used directly to execute pipeline steps. ### Example Code ```python docker_settings = DockerSettings( dockerfile="/path/to/dockerfile", build_context_root="/path/to/build/context", parent_image_build_config={ "build_options": ..., "dockerignore": ... } ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` ================================================== === File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md === ### ZenML Image Builder Overview ZenML allows for the execution of pipeline steps in a local Python environment or through remote orchestrators and step operators. When using remote setups, ZenML builds Docker images to ensure a consistent and isolated execution environment. #### Key Points: - **Execution Environment**: By default, ZenML uses the local Docker client to create execution environments, necessitating Docker installation and permissions. - **Image Builders**: ZenML provides specialized image builders as stack components to build and push Docker images in dedicated environments. - **Local Image Builder**: If no specific image builder is configured, ZenML defaults to the local image builder, ensuring consistency across builds. 
- **Integration**: Users do not need to directly interact with image builders; as long as the desired image builder is part of the active ZenML stack, it will be automatically utilized by components requiring container image builds. For more details, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md === ### Summary: Using Docker Images to Run Your Pipeline This documentation outlines how to configure Docker settings for running pipelines in ZenML, particularly when using a remote orchestrator. A Dockerfile is dynamically generated at runtime to build a Docker image, following these key steps: 1. **Base Image**: Starts from a parent image with ZenML installed, typically the official ZenML image. Custom base images can be specified. 2. **Dependency Installation**: Automatically installs required pip dependencies based on the integrations used in the stack. Custom dependencies can be included as needed. 3. **Source Files**: Optionally copies source files into the Docker container for execution. 4. **Environment Variables**: Sets user-defined environment variables. For detailed configuration options, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings). ### Configuring Docker Settings Docker settings can be customized using the `DockerSettings` class: ```python from zenml.config import DockerSettings ``` #### Pipeline-Level Configuration Apply settings to all steps in a pipeline: ```python docker_settings = DockerSettings() @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() ``` #### Step-Level Configuration For more granular control, configure settings for individual steps: ```python @step(settings={"docker": docker_settings}) def my_step() -> None: pass ``` #### YAML Configuration Settings can also be specified in a YAML file: ```yaml settings: docker: ... steps: step_name: settings: docker: ... ``` ### Specifying Docker Build Options To pass build options to the image builder: ```python docker_settings = DockerSettings(build_config={"build_options": {...}}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **Note**: For MacOS ARM architecture, specify the target platform: ```python docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}}) ``` ### Using a Custom Parent Image You can specify a custom pre-built parent image or a Dockerfile for more control over the environment. Ensure the image has Python, pip, and ZenML installed. #### Pre-Built Parent Image To use a static parent image: ```python docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` To skip Docker builds: ```python docker_settings = DockerSettings( parent_image="my_registry.io/image_name:tag", skip_build=True ) ``` **Warning**: This advanced feature may lead to unintended behavior. Ensure that your code files are included in the specified image. For further details, refer to the complete documentation available at [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/customize-docker-builds/README.md === ### Using Docker Images to Run Your Pipeline ZenML executes pipeline steps sequentially in the local Python environment. 
For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in an isolated environment. This section covers how to customize the Docker build process. **Key Points:** - **Local Execution:** Steps run sequentially in the active Python environment. - **Remote Execution:** Docker images are created for isolated execution. - **Customization:** Users can control the dockerization process for their pipelines. For more details on orchestrators and step operators, refer to the respective guides. ================================================== === File: docs/book/how-to/pipeline-development/README.md === # Pipeline Development in ZenML This section outlines the key aspects of developing pipelines in ZenML, a framework designed for building reproducible machine learning workflows. ## Key Components 1. **Pipelines**: Define a sequence of steps (components) that process data and produce outputs. Each pipeline can be parameterized and reused. 2. **Components**: Individual units of work within a pipeline, such as data ingestion, preprocessing, model training, and evaluation. Components can be implemented as Python functions or classes. 3. **Data Management**: ZenML integrates with various data storage solutions, allowing for seamless data handling across different stages of the pipeline. 4. **Artifact Management**: Outputs from components (artifacts) are tracked and stored, enabling reproducibility and versioning of results. 5. **Orchestrators**: ZenML supports multiple orchestrators (e.g., Apache Airflow, Kubeflow) for executing pipelines, allowing users to choose based on their infrastructure needs. 6. **Integrations**: ZenML provides integrations with popular machine learning libraries (e.g., TensorFlow, PyTorch) and tools (e.g., MLflow, S3) to enhance functionality. ## Example Code Snippet ```python from zenml.pipelines import pipeline from zenml.steps import step @step def data_ingestion(): # Load and return data pass @step def model_training(data): # Train model and return trained model pass @pipeline def training_pipeline(): data = data_ingestion() model = model_training(data) ``` ## Important Considerations - **Reproducibility**: Ensure that all components are designed to be deterministic and that artifacts are versioned. - **Modularity**: Build components that can be reused across different pipelines to promote efficiency. - **Testing**: Implement unit tests for components to validate functionality before integration into pipelines. This concise overview provides essential information for developing pipelines using ZenML, focusing on components, data management, and integration capabilities. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md === # Limitations of Defining Steps in Notebook Cells To run ZenML steps defined in notebook cells remotely (using a remote orchestrator or step operator), the following conditions must be met: - The cell can only contain Python code; Jupyter magic commands (`%`) or shell commands (`!`) are not permitted. - The cell must not call code from other notebook cells. However, functions or classes imported from Python files are allowed. - The cell must handle all necessary imports independently, including ZenML imports like `from zenml import step`. 
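For example, a cell like the following satisfies all three conditions — it contains only Python code, references no other cells, and handles its own imports (a minimal, hypothetical step for illustration):

```python
# Self-contained notebook cell: no magic or shell commands, no references to
# other cells, and all imports (including ZenML) handled inside the cell.
import pandas as pd
from zenml import step

@step
def normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Scale every numeric column to the [0, 1] range."""
    return (df - df.min()) / (df.max() - df.min())
```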
================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md === ### Summary of Running a Single Step from a Notebook To execute a single step remotely from a notebook using ZenML, you can call the step like a standard Python function. ZenML will create a pipeline with that step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining steps in notebook cells. #### Example Code ```python from zenml import step import pandas as pd from sklearn.base import ClassifierMixin from sklearn.svm import SVC from typing import Tuple, Annotated @step(step_operator="") def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc # Sample data X_train = pd.DataFrame(...) y_train = pd.Series(...) # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` ### Key Points - The step can be executed directly in a notebook. - Ensure to handle limitations when defining steps in notebook cells. - The example demonstrates training a Support Vector Classifier (SVC) using provided training data. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/README.md === ### Summary: Running Remote Pipelines from Jupyter Notebooks ZenML allows users to define and execute steps and pipelines within Jupyter notebooks remotely. The process involves extracting code from notebook cells and running it as Python modules in Docker containers. **Key Points:** - **Execution Environment:** Notebook cells must adhere to specific conditions for ZenML to run them remotely. - **Limitations:** There are certain limitations when defining steps in notebook cells. Refer to the documentation for details. - **Single Step Execution:** Users can run individual steps from a notebook. More information is available in the relevant section. **Related Documentation:** - [Limitations of defining steps in notebook cells](limitations-of-defining-steps-in-notebook-cells.md) - [Run a single step from a notebook](run-a-single-step-from-a-notebook.md) ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md === ### Summary of ZenML Configuration Template Documentation To create a configuration file for your ZenML pipeline, you can autogenerate a YAML template using the `.write_run_configuration_template()` method. This method generates a YAML file with all options commented out, allowing you to select relevant settings. #### Example Code ```python from zenml import pipeline @pipeline(enable_cache=True) def simple_ml_pipeline(parameter: int): dataset = load_data(parameter=parameter) train_model(dataset) simple_ml_pipeline.write_run_configuration_template(path="") ``` #### Generated YAML Configuration Template Structure The generated YAML template includes various sections, such as: - **Pipeline Settings** - `build`: Pipeline build configuration. 
- `enable_artifact_metadata`: Optional boolean. - `enable_cache`: Optional boolean. - `model`: Contains metadata about the model (e.g., `name`, `tags`, `version`). - **Parameters** - `parameters`: Optional mapping for pipeline parameters. - `run_name`: Optional string for naming the run. - **Schedule** - `schedule`: Configuration for scheduling runs (e.g., `cron_expression`, `catchup`). - **Settings** - **Docker Settings**: Configuration for Docker environment (e.g., `apt_packages`, `parent_image`). - **Resource Allocation**: CPU and GPU counts, memory specifications. - **Steps** - Each step (e.g., `load_data`, `train_model`) includes: - Metadata options (e.g., `enable_step_logs`, `experiment_tracker`). - Model configuration similar to the main model section. - Docker settings and resource allocations specific to the step. #### Additional Configuration You can also specify a stack when generating the template using: ```python simple_ml_pipeline.write_run_configuration_template(stack=) ``` For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/runtime-configuration.md === # Summary of ZenML Settings Configuration ## Overview ZenML allows configuration of runtime settings for stack components and pipelines through a central concept called `BaseSettings`. These settings enable customization of resources, containerization, and stack component-specific configurations. ## Types of Settings 1. **General Settings**: Applicable to all ZenML pipelines. - Examples: - `DockerSettings`: Configure Docker settings. - `ResourceSettings`: Specify resource requirements. 2. **Stack-Component-Specific Settings**: Provide runtime configurations for specific components. The key format is `` or `.`. - Examples: - `SkypilotAWSOrchestratorSettings` - `KubeflowOrchestratorSettings` - `MLflowExperimentTrackerSettings` - `WandbExperimentTrackerSettings` - `WhylogsDataValidatorSettings` - `SagemakerStepOperatorSettings` - `VertexStepOperatorSettings` - `AzureMLStepOperatorSettings` ## Registration-Time vs Real-Time Settings - **Registration-Time Settings**: Static configurations set during component registration (e.g., `tracking_url` for MLflow). - **Real-Time Settings**: Dynamic configurations that can change with each pipeline run (e.g., `experiment_name`). Default values can be set during registration but can be overridden at runtime. ## Key Specification for Settings When specifying stack-component-specific settings, use the correct key format. If only the category is provided, ZenML applies settings to the corresponding flavor of the component. If incompatible, the settings will be ignored. ## Code Examples ### Python ```python @step(step_operator="nameofstepoperator", settings={"step_operator": {"estimator_args": {"instance_type": "m7g.medium"}}}) def my_step(): ... # Using the class @step(step_operator="nameofstepoperator", settings={"step_operator": SagemakerStepOperatorSettings(instance_type="m7g.medium")}) def my_step(): ... ``` ### YAML ```yaml steps: my_step: step_operator: "nameofstepoperator" settings: step_operator: estimator_args: instance_type: m7g.medium ``` This documentation provides a comprehensive guide to configuring runtime settings in ZenML, ensuring that users can effectively manage their pipeline configurations. For the latest information, refer to the [up-to-date ZenML documentation](https://docs.zenml.io). 
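As a complement to the stack-component-specific examples above, here is a minimal sketch of attaching the general `DockerSettings` and `ResourceSettings` to a pipeline; the `requirements` entry is purely illustrative:

```python
from zenml import pipeline
from zenml.config import DockerSettings, ResourceSettings

docker_settings = DockerSettings(requirements=["scikit-learn"])  # illustrative requirement
resource_settings = ResourceSettings(cpu_count=2, memory="4GB")


# General settings use the fixed keys "docker" and "resources" and apply to
# any pipeline, independent of the stack components it runs on.
@pipeline(settings={"docker": docker_settings, "resources": resource_settings})
def my_pipeline():
    ...
```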
================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/retrieve-used-configuration-of-a-run.md === To extract the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step within it. ### Code Example: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run() # General configuration for the pipeline pipeline_run.config # Configuration for a specific step pipeline_run.steps[].config ``` This allows you to retrieve both the overall pipeline configuration and the configuration for individual steps. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/how-to-use-config.md === ### ZenML Configuration Files **Overview**: ZenML allows configuration through YAML files, which is recommended for separating configuration from code. **Configuration Example**: - A YAML file can specify pipeline parameters and step configurations. **YAML Configuration**: ```yaml enable_cache: False parameters: dataset_name: "best_dataset" steps: load_data: enable_cache: False ``` **Python Code Example**: ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def simple_ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": simple_ml_pipeline.with_options(config_path=)() ``` **Functionality**: The above code runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to `best_dataset`. **Note**: For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/configuration-hierarchy.md === ### Configuration Hierarchy in ZenML In ZenML, configurations can be set at multiple levels, with specific rules governing their precedence: 1. **Code Configurations**: Override YAML file configurations. 2. **Step-Level Configurations**: Override pipeline-level configurations. 3. **Attribute Merging**: Dictionaries are merged when attributes are involved. #### Example Code ```python from zenml import pipeline, step from zenml.config import ResourceSettings @step def load_data(parameter: int) -> dict: ... @step(settings={"resources": ResourceSettings(gpu_count=1, memory="2GB")}) def train_model(data: dict) -> None: ... @pipeline(settings={"resources": ResourceSettings(cpu_count=2, memory="1GB")}) def simple_ml_pipeline(parameter: int): ... # Configuration results after merging train_model.configuration.settings["resources"] # -> cpu_count: 2, gpu_count=1, memory="2GB" simple_ml_pipeline.configuration.settings["resources"] # -> cpu_count: 2, memory="1GB" ``` This example illustrates how ZenML merges configurations, using step settings to override pipeline settings where applicable. For the `train_model` step, the final resource settings reflect both step and pipeline configurations. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md === # Configuration Overview This documentation outlines the configuration options available in a YAML file for a ZenML pipeline. Below is a summary of key sections and parameters. 
## Sample YAML Configuration ```yaml build: dcd6fafb-c200-4e85-8328-428bef98d804 enable_artifact_metadata: True enable_artifact_visualization: False enable_cache: False enable_step_logs: True extra: any_param: 1 another_random_key: "some_string" model: name: "classification_model" version: production audience: "Data scientists" description: "This classifies hotdogs and not hotdogs" ethics: "No ethical implications" license: "Apache 2.0" limitations: "Only works for hotdogs" tags: ["sklearn", "hotdog", "classification"] parameters: dataset_name: "another_dataset" run_name: "my_great_run" schedule: catchup: true cron_expression: "* * * * *" settings: docker: apt_packages: ["curl"] copy_files: True dockerfile: "Dockerfile" dockerignore: ".dockerignore" environment: ZENML_LOGGING_VERBOSITY: DEBUG parent_image: "zenml-io/zenml-cuda" requirements: ["torch"] skip_build: False resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" steps: train_model: parameters: data_source: "best_dataset" experiment_tracker: "mlflow_production" step_operator: "vertex_gpu" outputs: {} failure_hook_source: {} success_hook_source: {} enable_artifact_metadata: True enable_artifact_visualization: True enable_cache: False enable_step_logs: True extra: {} model: {} settings: docker: {} resources: {} step_operator.sagemaker: estimator_args: instance_type: m7g.medium ``` ## Key Configuration Parameters ### `enable_XXX` Flags - **`enable_artifact_metadata`**: Attach metadata to artifacts. - **`enable_artifact_visualization`**: Attach visualizations of artifacts. - **`enable_cache`**: Enable caching. - **`enable_step_logs`**: Enable step logs. ### `build` ID Specifies the UUID of the Docker image to use. If provided, Docker image building is skipped. ### `model` Defines the ZenML model for the pipeline. ```yaml model: name: "ModelName" version: "production" description: An example model tags: ["classifier"] ``` ### Pipeline and Step `parameters` Parameters can be defined at both pipeline and step levels, with step-level parameters taking precedence. ```yaml parameters: gamma: 0.01 steps: trainer: parameters: gamma: 0.001 ``` ### Setting the `run_name` The run name must be unique for each execution. ```yaml run_name: ``` ### Stack Component Runtime Settings Settings for Docker and resources can be specified under `settings`. ```yaml settings: docker: requirements: - pandas resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" ``` ### Step-Specific Configuration Certain configurations apply only at the step level: - **`experiment_tracker`**: Name of the experiment tracker for the step. - **`step_operator`**: Name of the step operator for the step. - **`outputs`**: Configuration for output artifacts, including materializer source paths. ### Hooks Specify `failure_hook_source` and `success_hook_source` for handling step outcomes. This summary captures the essential configuration details and structure for setting up a ZenML pipeline using a YAML file. For further information, refer to the respective sections in the documentation. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/README.md === ZenML allows for the configuration and execution of pipelines using YAML files, enabling runtime adjustments for parameters, caching behavior, and stack components. Key topics include: - **Configurable Options**: Details on what can be configured in a pipeline. - **Configuration Hierarchy**: Structure and precedence of configuration settings. 
- **Template Generation**: Instructions for autogenerating a template YAML file. For further details, refer to the following sections: - [What can be configured](what-can-be-configured.md) - [Configuration hierarchy](configuration-hierarchy.md) - [Autogenerate a template YAML file](autogenerate-a-template-yaml-file.md) This streamlined approach simplifies the management of pipeline configurations in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md === ### Summary: Creating Pipeline Variants in ZenML ZenML allows you to create different variants of your pipelines for local development and production, enhancing development speed while maintaining production integrity. You can achieve this through: 1. **Configuration Files** 2. **Code Implementation** 3. **Environment Variables** #### 1. Using Configuration Files You can specify pipeline configurations using YAML files. For example, a development configuration might look like this: ```yaml enable_cache: False parameters: dataset_name: "small_dataset" steps: load_data: enable_cache: False ``` To apply this configuration in your pipeline: ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": ml_pipeline.with_options(config_path="path/to/config.yaml")() ``` You can create separate files for development (`config_dev.yaml`) and production (`config_prod.yaml`). #### 2. Implementing Variants in Code You can also define pipeline variants directly in your code: ```python import os from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def ml_pipeline(is_dev: bool = False): dataset = "small_dataset" if is_dev else "full_dataset" load_data(dataset) if __name__ == "__main__": is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" ml_pipeline(is_dev=is_dev) ``` This method uses a boolean flag to switch between environments. #### 3. Using Environment Variables You can determine which configuration to use based on environment variables: ```python import os config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" ml_pipeline.with_options(config_path=config_path)() ``` Run the pipeline with: - `ZENML_ENVIRONMENT=dev python run.py` - `ZENML_ENVIRONMENT=prod python run.py` ### Development Variant Considerations For development variants, optimize for faster iteration: - Use smaller datasets - Specify a local execution stack - Reduce training epochs and batch size - Use smaller base models Example configuration for development: ```yaml parameters: dataset_path: "data/small_dataset.csv" epochs: 1 batch_size: 16 stack: local_stack ``` Or in code: ```python @pipeline def ml_pipeline(is_dev: bool = False): dataset = "data/small_dataset.csv" if is_dev else "data/full_dataset.csv" epochs = 1 if is_dev else 100 batch_size = 16 if is_dev else 64 load_data(dataset) train_model(epochs=epochs, batch_size=batch_size) ``` By creating these variants, you can efficiently test and debug locally while maintaining a robust production setup. 
================================================== === File: docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md === ### Summary of ZenML Documentation on Keeping Pipeline Runs Clean #### Overview This documentation provides strategies to maintain a clean development environment while working with ZenML pipelines, preventing clutter in the dashboard and server. #### Key Strategies 1. **Run Locally**: - To avoid cluttering a shared server, disconnect and run a local server: ```bash zenml login --local ``` - Reconnect to the remote server with: ```bash zenml login ``` 2. **Pipeline Runs**: - **Unlisted Runs**: Create runs without associating them with a pipeline: ```python pipeline_instance.run(unlisted=True) ``` - **Deleting Runs**: - Delete a specific run: ```bash zenml pipeline runs delete ``` - Delete all runs from the last 24 hours: ```python import datetime from zenml.client import Client def delete_recent_pipeline_runs(): zc = Client() time_filter = (datetime.datetime.utcnow() - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") for run in recent_runs: zc.delete_pipeline_run(run.id) print(f"Deleted {len(recent_runs)} pipeline runs.") ``` 3. **Pipelines**: - **Deleting Pipelines**: ```bash zenml pipeline delete ``` - **Unique Pipeline Names**: Assign unique names to runs: ```python training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") training_pipeline() ``` 4. **Models**: - To delete a model: ```bash zenml model delete ``` 5. **Artifacts**: - **Pruning Artifacts**: ```bash zenml artifact prune ``` - Control deletion behavior with `--only-artifact` and `--only-metadata` flags. 6. **Cleaning Environment**: - Use `zenml clean` to delete all local pipelines, runs, and artifacts: ```bash zenml clean --local ``` By following these practices, users can maintain a clean and organized pipeline dashboard, focusing on relevant runs for their projects. For more details, refer to the full ZenML documentation. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/README.md === # Develop Locally This section outlines best practices for developing pipelines locally, allowing for faster iteration and cost-effective testing. It is common to work with a smaller subset of data or synthetic data during local development. ZenML supports this approach, enabling users to develop locally and later push and run pipelines on more powerful remote hardware. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fetching-pipelines.md === ### Summary of ZenML Documentation on Inspecting Pipeline Runs #### Overview This documentation covers how to inspect finished pipeline runs and their outputs in ZenML, including accessing artifacts, metadata, and lineage of runs. #### Pipeline Hierarchy - **Structure**: Pipelines have a 1-to-N relationship with runs, runs with steps, and steps with artifacts. 
```mermaid flowchart LR pipelines -->|1:N| runs runs -->|1:N| steps steps -->|1:N| artifacts ``` #### Fetching Pipelines - **Get a Specific Pipeline**: ```python from zenml.client import Client pipeline_model = Client().get_pipeline("first_pipeline") ``` - **List All Pipelines**: - **Python**: ```python pipelines = Client().list_pipelines() ``` - **CLI**: ```shell zenml pipeline list ``` #### Pipeline Runs - **Get All Runs**: ```python runs = pipeline_model.runs ``` - **Get Last Run**: ```python last_run = pipeline_model.last_run # OR: pipeline_model.runs[0] ``` - **Execute Pipeline and Get Latest Run**: ```python run = training_pipeline() ``` - **Fetch Specific Run**: ```python pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") ``` #### Run Information - **Status**: ```python status = run.status # States: initialized, failed, completed, running, cached ``` - **Configuration**: ```python pipeline_config = run.config pipeline_settings = run.config.settings ``` - **Component-Specific Metadata**: ```python run_metadata = run.run_metadata orchestrator_url = run_metadata["orchestrator_url"].value ``` #### Steps and Artifacts - **Access Steps**: ```python steps = run.steps # Get all steps step = run.steps["first_step"] # Get specific step ``` - **Inspect Output Artifacts**: ```python output = step.outputs["output_name"] # Access by name output = step.output # If single output my_pytorch_model = output.load() # Load artifact ``` - **Fetch Artifacts Directly**: ```python artifact = Client().get_artifact('iris_dataset') output = artifact.versions['2022'] ``` #### Metadata and Visualizations - **Artifact Metadata**: ```python output_metadata = output.run_metadata storage_size_in_bytes = output_metadata["storage_size"].value ``` - **Visualizations**: ```python output.visualize() # Show visualizations in Jupyter ``` #### Fetching Information During Run Execution - **Access Previous Runs**: ```python from zenml import get_step_context from zenml.client import Client @step def my_step(): current_run_name = get_step_context().pipeline_run.name current_run = Client().get_pipeline_run(current_run_name) previous_run = current_run.pipeline.runs[1] # Get previous run ``` #### Code Example This example demonstrates loading a model from a pipeline: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) return X_train, X_test, y_train, y_test @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": last_run = training_pipeline() model = 
last_run.steps["svc_trainer"].outputs["trained_model"].load() ``` This summary captures the essential technical details and code snippets necessary for understanding how to inspect pipeline runs in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/retry-steps.md === ### ZenML Step Retry Configuration ZenML includes a built-in mechanism to automatically retry steps upon failure, useful for handling transient errors, such as resource availability on GPU-backed hardware. You can configure retries using three parameters: - **max_retries:** Maximum retry attempts. - **delay:** Initial delay (in seconds) before the first retry. - **backoff:** Multiplier for the delay after each retry. #### Example Usage with @step Decorator You can define retry configurations directly in your step using the `@step` decorator: ```python from zenml.config.retry_config import StepRetryConfig @step( retry=StepRetryConfig( max_retries=3, delay=10, backoff=2 ) ) def my_step() -> None: raise Exception("This is a test exception") ``` #### Important Notes - Infinite retries are not supported. Setting `max_retries` to a large value or omitting it will still enforce an internal maximum to avoid infinite loops. It’s advisable to set a reasonable `max_retries` based on your use case. ### Related Documentation - [Failure/Success Hooks](use-failure-success-hooks.md) - [Configure Pipelines](../../pipeline-development/use-configuration-files/how-to-use-config.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md === ### Summary of ZenML Fan-in and Fan-out Patterns Documentation **Overview**: The fan-in and fan-out pattern is a pipeline architecture used for parallel processing. It involves a single step that splits into multiple parallel operations (fan-out) and then consolidates the results into a single step (fan-in). This is beneficial for tasks like distributed workloads and data transformations. **Example Code**: ```python from zenml import step, get_step_context, pipeline from zenml.client import Client @step def load_step() -> str: return "Hello from ZenML!" @step def process_step(input_data: str) -> str: return input_data @step def combine_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) processed_results = {step_info.name: step_info.outputs[output_name][0].load() for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix)} print(",".join([f"{k}: {v}" for k, v in processed_results.items()])) @pipeline(enable_cache=False) def fan_out_fan_in_pipeline(parallel_count: int) -> None: input_data = load_step() after = [process_step(input_data, id=f"process_{i}") for i in range(parallel_count)] combine_step(step_prefix="process_", output_name="output", after=after) fan_out_fan_in_pipeline(parallel_count=8) ``` **Key Points**: - **Fan-out**: Enables parallel processing, enhancing resource utilization. - **Fan-in**: Aggregates results from parallel steps, useful for various applications such as: - Parallel data processing - Distributed model training - Ensemble methods - Batch processing - Data validation - Hyperparameter tuning **Limitations**: 1. Steps may run sequentially if the orchestrator does not support parallel execution (e.g., local orchestrator). 2. The number of steps must be predetermined; dynamic step creation is not supported. 
**Important Note**: When implementing the fan-in step, results from previous parallel steps must be queried using the ZenML Client, as direct result passing is not allowed. For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/get-past-pipeline-step-runs.md === To retrieve past pipeline or step runs in ZenML, use the `get_pipeline` method along with the `last_run` property or by indexing into the runs. Here's a concise example: ```python from zenml.client import Client client = Client() # Retrieve a pipeline by its name p = client.get_pipeline("mlflow_train_deploy_pipeline") # Get the latest run of this pipeline latest_run = p.last_run # Access runs by index first_run = p[0] ``` This code snippet demonstrates how to access the latest run and the first run of a specified pipeline. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/tag-your-pipeline-runs.md === # Tagging Pipeline Runs You can specify tags for your pipeline runs in the following ways: 1. **Configuration File**: ```yaml # config.yaml tags: - tag_in_config_file ``` 2. **Code**: - Using the `@pipeline` decorator: ```python @pipeline(tags=["tag_on_decorator"]) def my_pipeline(): ... ``` - Using the `with_options` method: ```python my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) ``` When you run the pipeline, tags from all specified locations will be merged and applied to the run. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md === ### Summary of Hyperparameter Tuning with ZenML **Overview**: This documentation describes how to perform hyperparameter tuning using ZenML, specifically through a grid search method across a single hyperparameter dimension (learning rate). The process involves training models with different learning rates and selecting the best-performing model. **Key Components**: 1. **Steps**: - **`train_step`**: Trains a model using a specified learning rate. - **`selection_step`**: Evaluates trained models to determine the best hyperparameter based on performance. 2. **Pipeline**: - **`my_pipeline`**: Executes multiple `train_step` calls for a range of learning rates and then invokes `selection_step` to find the optimal model. **Code Example**: ```python from typing import Annotated from sklearn.base import ClassifierMixin from zenml import step, pipeline, get_step_context from zenml.client import Client model_output_name = "my_model" @step def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: return ... # Train model with learning rate @step def selection_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) trained_models_by_lr = {} for step_name, step_info in run.steps.items(): if step_name.startswith(step_prefix): model = step_info.outputs[output_name][0].load() lr = step_info.config.parameters["learning_rate"] trained_models_by_lr[lr] = model for lr, model in trained_models_by_lr.items(): ... 
# Evaluate models to find the best one @pipeline def my_pipeline(step_count: int) -> None: after = [] for i in range(step_count): train_step(learning_rate=i * 0.0001, id=f"train_step_{i}") after.append(f"train_step_{i}") selection_step(step_prefix="train_step_", output_name=model_output_name, after=after) my_pipeline(step_count=4) ``` **Important Notes**: - The current implementation requires querying artifacts from previous steps via the ZenML Client, as passing a variable number of artifacts programmatically is not supported. - Additional resources include example implementations for randomized hyperparameter search and selection of the best model based on defined metrics, available in the ZenML GitHub repository. For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md === ### Summary of Pipeline Run Naming in ZenML When a pipeline run is executed, it is automatically assigned a name based on the current date and time, as shown below: ```bash Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s. ``` To customize the run name, use the `run_name` parameter in the `with_options()` method: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name" ) training_pipeline() ``` Run names must be unique. To ensure uniqueness, dynamically compute the run name or include placeholders that ZenML will replace. Placeholders can be set in the `@pipeline` decorator or in the `pipeline.with_options` function. Standard placeholders available are: - `{date}`: current date (e.g., `2024_11_27`) - `{time}`: current time in UTC format (e.g., `11_07_09_326492`) Example of using placeholders: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name_{experiment_name}_{date}_{time}" ) training_pipeline() ``` This setup allows for organized and traceable pipeline runs. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/reference-environment-variables-in-configurations.md === # Reference Environment Variables in ZenML Configurations ZenML allows referencing environment variables in configurations using the placeholder syntax `${ENV_VARIABLE_NAME}`. ## In-code Example ```python from zenml import step @step(extra={"value_from_environment": "${ENV_VAR}"}) def my_step() -> None: ... ``` ## Configuration File Example ```yaml extra: value_from_environment: ${ENV_VAR} combined_value: prefix_${ENV_VAR}_suffix ``` This feature enhances flexibility in both code and configuration files. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/configuring-a-pipeline-at-runtime.md === ### Runtime Configuration of a Pipeline Run To configure a pipeline at runtime, use the `pipeline.with_options` method. There are two primary ways to do this: 1. **Explicit Configuration**: ```python pipeline.with_options(steps={"trainer": {"parameters": {"param1": 1}}}) ``` 2. **Using a YAML File**: ```python pipeline.with_options(config_file="path_to_yaml_file") ``` For more details on these options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/README.md). **Exception**: If triggering a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. 
More information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). For additional resources, see the documentation on [using configuration files](../../use-configuration-files/README.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md === # Accessing Secrets in ZenML Steps ZenML secrets consist of **key-value pairs** securely stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. To learn about configuring and creating secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). Secrets can be accessed within steps using the ZenML `Client` API, enabling secure API queries without hard-coded access keys. ## Example Code ```python from zenml import step from zenml.client import Client from somewhere import authenticate_to_some_api @step def secret_loader() -> None: """Load the example secret from the server.""" secret = Client().get_secret("") authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` ## Additional Resources - [Create and manage secrets](../../interact-with-secrets.md) - [Secrets backend in ZenML](../../../getting-started/deploying-zenml/secret-management.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md === ### Summary of ZenML Parameterization and Caching Documentation **Overview**: Steps and pipelines in ZenML can be parameterized like standard Python functions. This allows for flexible configuration and behavior customization. #### Step Parameters - **Artifacts**: Outputs from other steps, used to share data within a pipeline. - **Parameters**: Explicitly provided values that are not dependent on other steps. Only JSON-serializable values (via Pydantic) can be passed as parameters. For non-serializable objects (e.g., NumPy arrays), use External Artifacts. **Example**: ```python from zenml import step, pipeline @step def my_step(input_1: int, input_2: int) -> None: pass @pipeline def my_pipeline(): int_artifact = some_other_step() my_step(input_1=int_artifact, input_2=42) ``` #### YAML Configuration Parameters can be defined in a YAML file, allowing for easy updates without modifying code. **YAML Example**: ```yaml parameters: environment: production steps: my_step: parameters: input_2: 42 ``` **Python Example**: ```python from zenml import step, pipeline @step def my_step(input_1: int, input_2: int) -> None: ... @pipeline def my_pipeline(environment: str): ... if __name__=="__main__": my_pipeline.with_options(config_paths="config.yaml")() ``` #### Conflict Handling Conflicts may arise when parameters are defined in both the YAML file and the code. ZenML will notify users of such conflicts. **Conflict Example**: ```yaml parameters: some_param: 24 steps: my_step: parameters: input_2: 42 ``` ```python @pipeline def my_pipeline(some_param: int): my_step(input_1=42, input_2=43) # Conflict with config ``` #### Caching Behavior - **Parameter Caching**: A step is cached only if all parameter values match previous executions. - **Artifact Caching**: A step is cached only if all input artifacts match previous executions. If upstream steps are not cached, the step will execute again. 
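A minimal sketch of the parameter-caching rule, assuming caching is enabled on the pipeline: the second identical run reuses the cached step, while a changed parameter value forces re-execution.

```python
from zenml import pipeline, step


@step
def my_step(input_2: int) -> int:
    return input_2 * 2


@pipeline(enable_cache=True)
def my_pipeline(input_2: int = 42):
    my_step(input_2=input_2)


my_pipeline()            # first run executes my_step
my_pipeline()            # identical parameter values, so my_step is cached
my_pipeline(input_2=43)  # changed parameter value, so my_step executes again
```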
### Additional Resources - [Use configuration files to set parameters](use-pipeline-step-parameters.md) - [How caching works and how to control it](control-caching-behavior.md) This summary encapsulates the key points regarding parameterization, configuration, conflict resolution, and caching in ZenML, ensuring clarity and conciseness. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-pipelines-asynchronously.md === ### Summary: Running Pipelines Asynchronously in ZenML By default, ZenML pipelines run synchronously, meaning the terminal displays logs in real-time as the pipeline executes. To enable asynchronous execution, you can configure the orchestrator in two ways: 1. **Pipeline Decorator**: Set `synchronous=False` in the pipeline decorator. ```python from zenml import pipeline @pipeline(settings={"orchestrator": {"synchronous": False}}) def my_pipeline(): ... ``` 2. **YAML Configuration**: Modify the orchestrator settings in a YAML config file. ```yaml settings: orchestrator.: synchronous: false ``` For more information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). This allows for background execution of pipeline runs, improving workflow efficiency. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/using-a-custom-step-invocation-id.md === ### Summary of Custom Step Invocation ID in ZenML When invoking a ZenML step in a pipeline, each step is assigned a unique **invocation ID**. This ID is essential for defining the execution order of steps and for fetching information about the invocation post-execution. #### Key Points: - The first invocation of a step uses the step name as the invocation ID (e.g., `my_step`). - Subsequent invocations append a suffix (e.g., `my_step_2`, `my_step_3`) to ensure uniqueness. - Custom invocation IDs can be specified by passing an `id` parameter, which must be unique across all invocations within the pipeline. #### Example Code: ```python from zenml import pipeline, step @step def my_step() -> None: ... @pipeline def example_pipeline(): my_step() # ID: my_step my_step() # ID: my_step_2 my_step(id="my_custom_invocation_id") # Custom ID ``` This structure allows for flexible step management and tracking within ZenML pipelines. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md === ### Summary of ZenML Pipeline Composition Documentation **Overview**: ZenML enables the reuse of steps between pipelines to minimize code duplication by composing pipelines. **Key Points**: - **Pipeline Composition**: You can call one pipeline within another, allowing for shared functionality. - **Visibility**: Only the parent pipeline will be visible in the ZenML dashboard. 
**Example Code**: ```python from zenml import pipeline @pipeline def data_loading_pipeline(mode: str): data = training_data_loader_step() if mode == "train" else test_data_loader_step() return preprocessing_step(data) @pipeline def training_pipeline(): training_data = data_loading_pipeline(mode="train") model = training_step(data=training_data) evaluation_step(model=model, data=data_loading_pipeline(mode="test")) ``` **Additional Information**: For details on triggering a pipeline from another, refer to the [advanced usage documentation](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). **Learn More**: For more about orchestrators, see the [orchestrators guide](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-failure-success-hooks.md === ### Summary of ZenML Failure and Success Hooks Documentation **Overview**: ZenML provides hooks to execute actions after a step's execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: `on_failure` and `on_success`. #### Hook Definitions - **`on_failure`**: Triggered when a step fails. - **`on_success`**: Triggered when a step succeeds. **Example**: ```python from zenml import step def on_failure(exception: BaseException): print(f"Step failed: {str(exception)}") def on_success(): print("Step succeeded!") @step(on_failure=on_failure) def my_failing_step() -> int: raise ValueError("Error") @step(on_success=on_success) def my_successful_step() -> int: return 1 ``` #### Pipeline-Level Hooks Hooks can also be defined at the pipeline level to apply to all steps: ```python @pipeline(on_failure=on_failure, on_success=on_success) def my_pipeline(...): ... ``` **Note**: Step-level hooks take precedence over pipeline-level hooks. #### Accessing Step Information Use `get_step_context()` within hooks to access step and pipeline run information: ```python from zenml import get_step_context def on_failure(exception: BaseException): context = get_step_context() print(context.step_run.name) ``` #### Alerter Integration You can use the Alerter component to notify users of step outcomes: ```python from zenml import get_step_context, Client def on_failure(): step_name = get_step_context().step_run.name Client().active_stack.alerter.post(f"{step_name} just failed!") ``` **Standard Hooks**: ```python from zenml.hooks import alerter_success_hook, alerter_failure_hook @step(on_failure=alerter_failure_hook, on_success=alerter_success_hook) def my_step(...): ... ``` #### OpenAI ChatGPT Failure Hook This hook generates potential fixes for exceptions using OpenAI's API. Ensure you have the OpenAI integration installed and your API key stored in a ZenML secret: ```shell zenml integration install openai zenml secret create openai --api_key= ``` Usage in a step: ```python from zenml.integration.openai.hooks import openai_chatgpt_alerter_failure_hook @step(on_failure=openai_chatgpt_alerter_failure_hook) def my_step(...): ... ``` ### Conclusion ZenML hooks enhance pipeline functionality by allowing post-execution actions, integrating with notification systems, and leveraging AI for error handling. 
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-an-individual-step.md === # Summary of ZenML Step Execution Documentation ## Running an Individual Step To execute a single step in ZenML, call the step as a normal Python function. ZenML will create an unlisted pipeline for this step, which won't be associated with any pipeline but can be viewed in the "Runs" tab of the dashboard. ### Example Code ```python from zenml import step import pandas as pd from sklearn.svm import SVC from sklearn.base import ClassifierMixin from typing import Tuple from typing_extensions import Annotated @step(step_operator="") def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc # Prepare training data X_train = pd.DataFrame(...) y_train = pd.Series(...) # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` ## Running the Step Function Directly To bypass ZenML and run the step function directly, use the `entrypoint(...)` method: ### Example Code ```python X_train = pd.DataFrame(...) y_train = pd.Series(...) model, train_acc = svc_trainer.entrypoint(X_train=X_train, y_train=y_train) ``` ### Default Behavior To set the default behavior to run steps without ZenML, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will allow direct calls to the step function without involving the ZenML stack. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/delete-a-pipeline.md === # Summary of ZenML Pipeline Deletion Documentation ## Deleting a Pipeline To delete a pipeline, use either the CLI or the Python SDK: ### CLI Command ```shell zenml pipeline delete ``` ### Python SDK ```python from zenml.client import Client Client().delete_pipeline() ``` **Note:** Deleting a pipeline does not remove associated runs or artifacts. ### Deleting Multiple Pipelines For deleting multiple pipelines with the same prefix, use the following Python script: ```python from zenml.client import Client client = Client() pipelines_list = client.list_pipelines(name="startswith:test_pipeline", size=100) target_pipeline_ids = [p.id for p in pipelines_list.items] if input("Do you really want to delete these pipelines? (y/n): ").lower() == 'y': for pid in target_pipeline_ids: client.delete_pipeline(pid) ``` ## Deleting a Pipeline Run To delete a pipeline run, use the CLI or the Python SDK: ### CLI Command ```shell zenml pipeline runs delete ``` ### Python SDK ```python from zenml.client import Client Client().delete_pipeline_run() ``` This documentation provides essential commands and scripts for managing the deletion of pipelines and their runs in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-execution-order-of-steps.md === # Control Execution Order of Steps in ZenML ZenML determines the execution order of pipeline steps based on data dependencies. For example, in the following pipeline, `step_3` depends on the outputs of `step_1` and `step_2`, allowing both to run in parallel before `step_3` starts. 
```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1() step_2_output = step_2() step_3(step_1_output, step_2_output) ``` To enforce specific execution order constraints, you can use non-data dependencies by passing invocation IDs. For instance, to ensure `step_1` runs after `step_2`, use: ```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1(after="step_2") step_2_output = step_2() step_3(step_1_output, step_2_output) ``` This modification ensures that `step_1` only starts after `step_2` has completed. For more details on using custom invocation IDs, refer to the [documentation](using-a-custom-step-invocation-id.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/schedule-a-pipeline.md === ### Summary of ZenML Scheduling Documentation This documentation covers how to set, pause, and stop schedules for pipelines in ZenML. Note that scheduling support varies by orchestrator. #### Supported Orchestrators - **Supported**: Airflow, AzureML, Databricks, HyperAI, Kubeflow, Kubernetes, Vertex. - **Not Supported**: Local, LocalDocker, Sagemaker, Skypilot (all variants), Tekton. #### Setting a Schedule You can set a schedule using cron expressions or human-readable notations. Here’s a concise code example: ```python from zenml.config.schedule import Schedule from zenml import pipeline from datetime import datetime @pipeline() def my_pipeline(...): ... # Using cron expression schedule = Schedule(cron_expression="5 14 * * 3") # Using human-readable notation schedule = Schedule(start_time=datetime.now(), interval_second=1800) my_pipeline = my_pipeline.with_options(schedule=schedule) my_pipeline() ``` For more scheduling options, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.schedule.Schedule). #### Pausing/Stopping a Schedule The method to pause or stop a scheduled run depends on the orchestrator. For instance, in Kubeflow, you can use the UI for this purpose. Users must consult their specific orchestrator's documentation for detailed steps. **Important Note**: ZenML only schedules runs; managing the lifecycle of these schedules is the user's responsibility. Running a pipeline with a schedule multiple times creates unique scheduled pipelines. #### Additional Resources For more information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/step-output-typing-and-annotation.md === ### Summary of ZenML Step Output Typing and Annotation #### Step Outputs - Outputs from steps are stored in an artifact store. Annotate and name them for clarity. #### Type Annotations - Type annotations are optional but beneficial: - **Type Validation**: Ensures correct input types from upstream steps. - **Serialization**: With annotations, ZenML selects the appropriate materializer for outputs. Custom materializers can be created if built-in options are inadequate. #### Materialization Warning - The built-in `CloudpickleMaterializer` can serialize any object but is not production-ready due to compatibility issues across Python versions. It may also pose security risks by allowing malicious file uploads. 
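As a rough sketch of the custom-materializer escape hatch mentioned above (the class, the `data.json` file name, and the exact `BaseMaterializer` method signatures are assumptions based on recent ZenML releases, so treat this as illustrative rather than canonical):

```python
import json
import os
from typing import Any, Type

from zenml import step
from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class DictJSONMaterializer(BaseMaterializer):
    """Stores plain dicts as JSON instead of relying on cloudpickle."""

    ASSOCIATED_TYPES = (dict,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[Any]) -> Any:
        with fileio.open(os.path.join(self.uri, "data.json"), "r") as f:
            return json.load(f)

    def save(self, data: Any) -> None:
        with fileio.open(os.path.join(self.uri, "data.json"), "w") as f:
            json.dump(data, f)


@step(output_materializers=DictJSONMaterializer)
def make_metrics() -> dict:
    return {"accuracy": 0.92}
```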
#### Code Examples ```python from typing import Tuple from zenml import step @step def square_root(number: int) -> float: return number ** 0.5 @step def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b ``` - To enforce type annotations, set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True`. ZenML will raise exceptions for missing annotations. #### Tuple vs. Multiple Outputs - ZenML distinguishes between single output artifacts and multiple outputs based on the return statement: - A tuple literal (e.g., `return (1, 2)`) indicates multiple outputs. - Other cases are treated as a single output of type `Tuple`. #### Output Naming - Default output names: - Single output: `output` - Multiple outputs: `output_0`, `output_1`, etc. - Custom names can be set using the `Annotated` type annotation: ```python from typing_extensions import Annotated from typing import Tuple from zenml import step @step def square_root(number: int) -> Annotated[float, "custom_output_name"]: return number ** 0.5 @step def divide(a: int, b: int) -> Tuple[ Annotated[int, "quotient"], Annotated[int, "remainder"] ]: return a // b, a % b ``` - If no custom names are provided, artifacts are named using the format `{pipeline_name}::{step_name}::output`. #### Additional Resources - For more on output annotation, see [return-multiple-outputs-from-a-step.md](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md). - For custom data types, refer to [handle-custom-data-types.md](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-caching-behavior.md === ### ZenML Caching Behavior By default, ZenML caches steps in pipelines when code and parameters remain unchanged. #### Caching Configuration - **At Step Level**: You can control caching behavior using the `@step` decorator: ```python @step(enable_cache=True) # Caches data loading def load_data(parameter: int) -> dict: ... @step(enable_cache=False) # Overrides caching for model training def train_model(data: dict) -> None: ... @pipeline(enable_cache=True) # Sets caching for the pipeline def simple_ml_pipeline(parameter: int): ... ``` - **Dynamic Configuration**: Caching settings can be modified after initial setup: ```python my_step.configure(enable_cache=...) my_pipeline.configure(enable_cache=...) ``` #### Important Notes - Caching occurs only when code and parameters are unchanged. - For YAML configuration, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). This summary provides essential details on caching behavior in ZenML, including configuration at both step and pipeline levels, and the ability to modify settings dynamically. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/README.md === ### Summary of ZenML Pipeline Documentation **Overview**: Building pipelines in ZenML is straightforward by using the `@step` and `@pipeline` decorators. **Code Example**: ```python from zenml import pipeline, step @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. 
" f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): train_model(load_data()) ``` **Execution**: Call the pipeline with: ```python simple_ml_pipeline() ``` **Logging**: The pipeline execution is logged in the ZenML dashboard, which requires a ZenML server (local or remote). **Dashboard Features**: Users can view the Directed Acyclic Graph (DAG) and associated metadata. **Advanced Features**: Additional functionalities include: - Configuring pipeline/step parameters - Naming and annotating step outputs - Controlling caching behavior - Customizing step invocation IDs - Naming pipeline runs - Using failure/success hooks - Hyperparameter tuning - Attaching and fetching metadata within steps and pipelines - Managing log storage - Accessing secrets in steps For more details on these advanced features, refer to the respective documentation links provided in the original text. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md === ### Configure the Server Environment The ZenML server environment is configured using environment variables, which must be set prior to deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md). **Note:** This documentation is an older version. For the latest information, please visit the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md === # Handling Conflicting Dependencies in ZenML This documentation addresses common issues with conflicting dependencies when using ZenML alongside other libraries. ZenML is designed to be stack- and integration-agnostic, which can lead to dependency conflicts. ## Installing Dependencies Use the command `zenml integration install ...` to install dependencies for specific integrations. After installation, verify that all requirements are met by running `zenml integration list` and checking for a green tick symbol. ## Suggestions for Resolving Dependency Conflicts ### Use `pip-compile` Utilize `pip-compile` from the `pip-tools` package to create a static `requirements.txt` file for reproducibility across environments. For more details, refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management). ### Use `pip check` Run `pip check` to verify compatibility of your environment's dependencies. This will list any conflicts, which may affect your project. ### Known Dependency Issues ZenML has strict version requirements for some integrations. For example, it requires `click~=8.0.3` for its CLI, and using a version greater than 8.0.3 may lead to unexpected behaviors. ## Manual Dependency Installation You can manually install dependencies instead of using ZenML's integration installation, though this is not recommended. The `zenml integration install ...` command effectively runs a `pip install ...` for the specified integration dependencies. To export integration requirements, use: ```bash # Export to a file zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME # Print to console zenml integration export-requirements INTEGRATION_NAME ``` Adjust these requirements as needed. 
If using a remote orchestrator, update the `DockerSettings` object accordingly to ensure compatibility. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/README.md === # Summary of ZenML Environment Configuration ## Overview ZenML deployments involve multiple environments, including client, server, and execution environments, each with specific roles in managing dependencies and configurations. ### Client Environment (Runner Environment) - **Purpose**: Compiles ZenML pipelines, typically in a `run.py` script. - **Types**: - Local development - CI runner in production - ZenML Pro runner - Runner image orchestrated by ZenML server - **Key Steps**: 1. Compile pipeline using the `@pipeline` function. 2. Create/trigger pipeline and step build environments if running remotely. 3. Trigger a run in the orchestrator. - **Note**: The `@pipeline` function is called only in this environment, focusing on compile-time logic. ### ZenML Server Environment - **Function**: A FastAPI application managing pipelines and metadata, including the ZenML Dashboard. - **Dependency Management**: Install dependencies during ZenML deployment, primarily for custom integrations. ### Execution Environments - **Local Execution**: No distinct execution environment; client, server, and execution are the same. - **Remote Execution**: ZenML builds Docker images (execution environments) to transfer code to the remote orchestrator. - **Image Configuration**: Start with a base image containing ZenML and Python, then add pipeline dependencies. Refer to the guide on [containerizing your pipeline](../../../how-to/customize-docker-builds/README.md) for details. ### Image Builder Environment - **Default Behavior**: Execution environments are created locally using the local Docker client, requiring Docker installation. - **Image Builders**: ZenML provides image builders, a stack component for building and pushing Docker images in a specialized environment. If no image builder is configured, the local image builder is used for consistency. This summary encapsulates the essential details of configuring Python environments in ZenML, focusing on the roles and management of different environments involved in the deployment process. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md === ### Summary: Distributed Training with Hugging Face's Accelerate in ZenML ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, allowing the use of multiple GPUs or nodes. #### Using Accelerate in ZenML Steps To enable distributed execution in training steps, use the `run_with_accelerate` decorator: ```python from zenml import step, pipeline from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True) @step def training_step(some_param: int, ...): ... @pipeline def training_pipeline(some_param: int, ...): training_step(some_param, ...) ``` The decorator accepts arguments similar to the `accelerate launch` CLI command. Common arguments include: - `num_processes`: Number of processes for training. - `cpu`: Force training on CPU. - `multi_gpu`: Enable distributed GPU training. - `mixed_precision`: Set mixed precision mode ('no', 'fp16', 'bf16'). #### Important Usage Notes 1. Use `run_with_accelerate` directly on steps with the '@' syntax. 2. 
Only keyword arguments are supported for accelerated steps. 3. Misuse raises a `RuntimeError` with usage guidance. #### Environment Configuration To run steps with Accelerate, ensure your environment is properly configured: 1. **Specify a CUDA-enabled Parent Image**: Example using a CUDA-enabled PyTorch image: ```python from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. **Add Accelerate as a Requirement**: Ensure Accelerate is installed: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` #### Multi-GPU Training ZenML's Accelerate integration supports training on multiple GPUs, enhancing performance for large datasets or complex models. Key steps include: - Wrapping the training step with `run_with_accelerate`. - Configuring Accelerate arguments (e.g., `num_processes`, `multi_gpu`). - Ensuring compatibility of training code with distributed training. For assistance, connect with ZenML support via Slack. By leveraging Accelerate, ZenML users can efficiently scale training processes while maintaining the benefits of structured pipelines. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/README.md === # Summary of GPU Resource Management in ZenML ## Overview ZenML allows scaling machine learning pipelines to the cloud, leveraging GPU-backed hardware for enhanced performance. This involves specifying resource requirements and ensuring the container environment is properly configured. ## Specifying Resource Requirements To allocate resources for resource-intensive steps, use `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml import step @step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=2, memory="8GB")}) def training_step(...) -> ...: # train a model ``` For orchestrators like Skypilot that do not support `ResourceSettings`, use orchestrator-specific settings: ```python from zenml import step from zenml.integrations.skypilot.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings(cpus="2", memory="16", accelerators="V100:2") @step(settings={"orchestrator": skypilot_settings}) def training_step(...) -> ...: # train a model ``` ## CUDA-Enabled Container Configuration To utilize GPU capabilities, ensure the container is CUDA-enabled: 1. **Specify a CUDA-enabled parent image**: Example for PyTorch: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` For TensorFlow, use `tensorflow/tensorflow:latest-gpu`. 2. 
**Add ZenML as a pip requirement**: Example: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["zenml==0.39.1", "torchvision"] ) ``` ## Resetting CUDA Cache To avoid GPU cache issues, reset the CUDA cache between steps: ```python import gc import torch def cleanup_memory() -> None: while gc.collect(): torch.cuda.empty_cache() @step def training_step(...): cleanup_memory() # train a model ``` ## Multi-GPU Training ZenML supports training across multiple GPUs on a single node. To implement this: - Create a script/function for model training logic that runs in parallel across GPUs. - Call this function from within the ZenML step. For further assistance, connect with the ZenML community on Slack. ## Additional Resources - Refer to orchestrator documentation for specific resource support. - Ensure the orchestrator environment has permissions to pull necessary Docker images. This summary captures the essential technical details for managing GPU resources in ZenML, allowing for effective cloud-based machine learning pipeline execution. ================================================== === File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md === # Creating an External Integration for ZenML ## Overview ZenML aims to streamline the MLOps landscape by providing numerous integrations and allowing users to implement custom stack components. This guide is for those who want to contribute their integrations to the ZenML codebase. ## Steps to Create an Integration ### Step 1: Plan Your Integration Identify the categories your integration belongs to by referring to the ZenML component guide. Note that one integration can belong to multiple categories. ### Step 2: Create Stack Component Flavors Develop individual stack component flavors corresponding to the selected categories. Test them as custom flavors before packaging. For example, to register a custom orchestrator flavor: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` Ensure ZenML is initialized at the root of your repository for proper resolution. ### Step 3: Create an Integration Class 1. **Clone the Repository**: Clone the main ZenML repository and set up your local environment. 2. **Create Integration Directory**: Create a new folder in `src/zenml/integrations/` for your integration, structured as follows: ``` /src/zenml/integrations/ / ├── artifact-stores/ ├── flavors/ └── __init__.py ``` 3. **Define Integration Name**: In `zenml/integrations/constants.py`, add: ```python EXAMPLE_INTEGRATION = "" ``` 4. **Create Integration Class**: In `src/zenml/integrations//__init__.py`, subclass the `Integration` class: ```python from zenml.integrations.constants import from zenml.integrations.integration import Integration from zenml.stack import Flavor class ExampleIntegration(Integration): NAME = REQUIREMENTS = [""] @classmethod def flavors(cls) -> List[Type[Flavor]]: from zenml.integrations. import return [] ExampleIntegration.check_installation() ``` Refer to the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for an example. 5. **Import the Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`. ### Step 4: Create a Pull Request Submit a PR to the ZenML repository and await review from core maintainers. 
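For reference, since the placeholders in the integration-class snippet of Step 3 were elided in this summary, a filled-in sketch for a hypothetical `example` integration might look like the following (all `example`/`Example` names are illustrative):

```python
from typing import List, Type

from zenml.integrations.constants import EXAMPLE_INTEGRATION  # constant added in Step 3.3
from zenml.integrations.integration import Integration
from zenml.stack import Flavor


class ExampleIntegration(Integration):
    """Hypothetical 'example' integration, used here only for illustration."""

    NAME = EXAMPLE_INTEGRATION
    REQUIREMENTS = ["example-sdk>=1.0"]  # pip requirements of the integration

    @classmethod
    def flavors(cls) -> List[Type[Flavor]]:
        # Import lazily so the dependency is only needed when the flavor is used.
        from zenml.integrations.example.flavors import ExampleOrchestratorFlavor

        return [ExampleOrchestratorFlavor]


ExampleIntegration.check_installation()
```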
## Conclusion By following these steps, you can successfully create and contribute an integration to ZenML, enhancing its capabilities within the MLOps ecosystem. ================================================== === File: docs/book/how-to/contribute-to-zenml/README.md === # Contribute to ZenML Thank you for considering contributing to ZenML! We welcome contributions such as new features, documentation improvements, integrations, or bug reports. ## How to Contribute For detailed guidelines on contributing, including adding custom integrations, refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). This guide outlines best practices and conventions followed by ZenML. ================================================== === File: docs/book/how-to/manage-zenml-server/upgrade-zenml-server.md === ### ZenML Server Upgrade Guide This documentation outlines how to upgrade your ZenML server based on different deployment methods. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). #### General Upgrade Best Practices - Upgrade promptly after a new version release to benefit from improvements and fixes. - Review the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. #### Upgrade via Docker 1. **Check Data Persistence**: Ensure data is stored on persistent storage or an external MySQL instance. Consider backing up data. 2. **Delete the Existing Container**: ```bash docker ps # Find your container ID docker stop # Stop the container docker rm # Remove the container ``` 3. **Deploy New Version**: ```bash docker run -it -d -p 8080:8080 --name zenmldocker/zenml-server: ``` Find available versions [here](https://hub.docker.com/r/zenmldocker/zenml-server/tags). #### Upgrade via Kubernetes with Helm 1. **Pull Latest Helm Chart**: ```bash git clone https://github.com/zenml-io/zenml.git git pull cd src/zenml/zen_server/deploy/helm/ ``` 2. **Reuse or Extract Values**: - Use your existing `custom-values.yaml`, or extract values: ```bash helm -n get values zenml-server > custom-values.yaml ``` 3. **Upgrade Release**: ```bash helm -n upgrade zenml-server . -f custom-values.yaml ``` > **Note**: Avoid changing the container image tag in the Helm chart unless necessary, as it may lead to compatibility issues. #### Important Considerations - **Downgrading**: Not supported and may cause unexpected behavior. - **Python Client Version**: Should match the server version for compatibility. This summary provides essential steps and considerations for upgrading your ZenML server across different deployment methods. ================================================== === File: docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md === # Best Practices for Upgrading ZenML ## Overview This document outlines best practices for upgrading your ZenML server and code to ensure a smooth transition. ## Upgrading Your Server ### Data Backups - **Database Backup**: Always back up your MySQL database before upgrading to allow rollback if needed. - **Automated Backups**: Set up daily automated backups using managed services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. ### Upgrade Strategies - **Staged Upgrade**: Use two ZenML server instances (old and new) to migrate services incrementally. - **Team Coordination**: Coordinate upgrade timing among teams to minimize disruption. - **Separate ZenML Servers**: For teams needing different upgrade schedules, consider using dedicated ZenML server instances. 
### Minimizing Downtime - **Upgrade Timing**: Schedule upgrades during low-activity periods. - **Avoid Mid-Pipeline Upgrades**: Be cautious with upgrades that may interrupt long-running pipelines. ## Upgrading Your Code ### Testing and Compatibility - **Local Testing**: Test locally after upgrading (`pip install zenml --upgrade`) and run old pipelines to check compatibility. - **End-to-End Testing**: Develop simple end-to-end tests to ensure compatibility with your pipeline code. Utilize ZenML's [extensive test suite](https://github.com/zenml-io/zenml/tree/main/tests). - **Artifact Compatibility**: Be cautious with pickle-based materializers. Use version-agnostic methods for critical artifacts. Load older artifacts with: ```python from zenml.client import Client artifact = Client().get_artifact_version('YOUR_ARTIFACT_ID') loaded_artifact = artifact.load() ``` ### Dependency Management - **Python Version**: Ensure compatibility of your Python version with the ZenML version. Refer to the [installation guide](../../getting-started/installation.md). - **External Dependencies**: Check for compatibility of external dependencies with the new ZenML version, as older versions may no longer be supported. ### Handling API Changes - **Changelog Review**: Review the [changelog](https://github.com/zenml-io/zenml/releases) for breaking changes and new syntax. - **Migration Scripts**: Use available [migration scripts](migration-guide/migration-guide.md) for database schema changes. By following these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server and code. Adapt these guidelines to your specific environment and infrastructure. ================================================== === File: docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md === # Best Practices for Using ZenML Server in Production This guide outlines best practices for setting up a ZenML server in production environments, moving beyond initial testing setups. ## Autoscaling Replicas To handle larger, longer-running pipelines, enable autoscaling based on your deployment environment: ### Kubernetes with Helm Use the following configuration in your Helm chart: ```yaml autoscaling: enabled: true minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 80 ``` ### ECS (AWS) 1. Navigate to your service in the ECS console. 2. Click "Update Service." 3. Enable autoscaling and set min/max tasks. ### Cloud Run (GCP) 1. Go to the Cloud Run console. 2. Click "Edit & Deploy new Revision." 3. Set minimum and maximum instances in the "Revision auto-scaling" section. ### Docker Compose Scale your service using: ```bash docker compose up --scale zenml-server=N ``` ## High Connection Pool Values Increase server performance by adjusting thread pool size: ```yaml zenml: threadPoolSize: 100 ``` Set `ZENML_SERVER_THREAD_POOL_SIZE` for other deployments. Adjust `zenml.database.poolSize` and `zenml.database.maxOverflow` accordingly. ## Scaling the Backing Database Monitor your database for scaling needs based on: - **CPU Utilization:** Above 50% consistently indicates a need for scaling. - **Freeable Memory:** Below 100-200 MB may require scaling. ## Setting Up Ingress/Load Balancer Securely expose your ZenML server: ### Kubernetes with Helm Enable ingress: ```yaml zenml: ingress: enabled: true className: "nginx" ``` ### ECS Use Application Load Balancers as per [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html). 
### Cloud Run Utilize Cloud Load Balancing following [GCP documentation](https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless). ### Docker Compose Set up an NGINX reverse proxy to route traffic. ## Monitoring Implement monitoring based on your deployment: ### Kubernetes with Helm Use Prometheus and Grafana. A sample query for CPU utilization: ``` sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) ``` ### ECS Utilize [CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html) for metrics like CPU and memory utilization. ### Cloud Run Use [Cloud Monitoring](https://cloud.google.com/run/docs/monitoring) for metrics visibility. ## Backups Establish a backup strategy to protect critical data: - Automated backups with a retention period (e.g., 30 days). - Periodic data exports to external storage (e.g., S3, GCS). - Manual backups before server upgrades. For further details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/how-to/manage-zenml-server/README.md === # Manage Your ZenML Server This section provides best practices for upgrading and using the ZenML server in production, along with troubleshooting tips. It includes recommended upgrade steps and migration guides for transitioning between specific versions. ### Key Points: - **Upgrading ZenML Server**: Follow the recommended procedures for a smooth upgrade. - **Production Use**: Guidelines for optimal performance and reliability in production environments. - **Troubleshooting**: Common issues and solutions to maintain server functionality. - **Migration Guides**: Detailed instructions for moving between ZenML versions. For visual reference, an image of the ZenML Scarf is included. ================================================== === File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md === # Troubleshooting ZenML Deployment ## Viewing Logs To debug ZenML deployment issues, analyze logs based on your deployment method: ### Kubernetes 1. Check running pods: ```bash kubectl -n get pods ``` 2. If pods aren't running, get logs for all pods: ```bash kubectl -n logs -l app.kubernetes.io/name=zenml ``` 3. For specific container logs (use `zenml-db-init` for `Init` state errors): ```bash kubectl -n logs -l app.kubernetes.io/name=zenml -c ``` - Use `--tail` to limit lines or `--follow` for real-time logs. ### Docker - For `zenml login --local --docker`: ```shell zenml logs -f ``` - For `docker run`: ```shell docker logs zenml -f ``` - For `docker compose`: ```shell docker compose -p zenml logs -f ``` ## Fixing Database Connection Problems Common MySQL connection issues: - **Access Denied**: Check username/password. - **Can't Connect**: Verify host settings. Test connection: ```bash mysql -h -u -p ``` - For Kubernetes, use `kubectl port-forward` to connect to the database locally. ## Fixing Database Initialization Problems If migrating to an older ZenML version results in `Revision not found` errors: 1. Log in to MySQL: ```bash mysql -h -u -p ``` 2. Drop the existing database: ```sql drop database ; ``` 3. Create a new database: ```sql create database ; ``` 4. Restart the Kubernetes pods or Docker container to reinitialize the database. 
================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md === # ZenML User Authentication Overview Authenticate clients with the ZenML Server using the ZenML CLI and web-based login via the command: ```bash zenml login https://... ``` This command initiates a browser-based validation process. You can choose to trust the device or not: - **Trust this device**: Issues a 30-day token. - **Do not trust**: Issues a 24-hour token. To view authorized devices, use: ```bash zenml authorized-device list ``` To inspect a specific device: ```bash zenml authorized-device describe ``` For added security, invalidate a token with: ```bash zenml authorized-device lock ``` ### Summary Steps: 1. Run `zenml login ` to connect to the ZenML server. 2. Decide on device trust. 3. List devices with `zenml authorized-device list`. 4. Lock a device with `zenml authorized-device lock `. ### Important Notice: Using the ZenML CLI ensures secure interaction with your ZenML tenants. Regularly manage device trust levels and revoke access as necessary to protect data and infrastructure. Each token is a potential access point to sensitive information. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md === # ZenML Service Account Authentication To authenticate to a ZenML server in non-interactive environments (e.g., CI/CD, serverless functions), create a service account and use an API key for authentication. ## Creating a Service Account To create a service account and generate an API key, run: ```bash zenml service-account create ``` The API key will be displayed and cannot be retrieved later. ## Connecting with the API Key You can connect your ZenML client using one of the following methods: 1. **CLI Method**: ```bash zenml login https://... --api-key ``` 2. **Environment Variables** (suitable for CI/CD or containerized environments): ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY= ``` **Note**: No need to run `zenml login` after setting these variables. ## Managing Service Accounts and API Keys To list service accounts and their API keys: ```bash zenml service-account list zenml service-account api-key list ``` To inspect a specific service account or API key: ```bash zenml service-account describe zenml service-account api-key describe ``` ## API Key Rotation API keys do not expire, but it's recommended to rotate them regularly: ```bash zenml service-account api-key rotate ``` To retain the old API key for a specified time (e.g., 60 minutes): ```bash zenml service-account api-key rotate --retain 60 ``` ## Deactivating Service Accounts or API Keys To deactivate a service account or API key: ```bash zenml service-account update --active false zenml service-account api-key update --active false ``` Deactivation takes immediate effect. ## Summary of Steps 1. Create a service account: `zenml service-account create`. 2. Connect using API key: `zenml login --api-key`. 3. List service accounts: `zenml service-account list`. 4. List API keys: `zenml service-account api-key list`. 5. Rotate API keys: `zenml service-account api-key rotate`. 6. Deactivate accounts/keys: `zenml service-account update` or `zenml service-account api-key update`. ### Important Notice Regularly rotate API keys and deactivate/delete unused service accounts and API keys to secure access to your data and infrastructure.
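As a rough sketch of the environment-variable method in a non-interactive job (assuming `ZENML_STORE_URL` and `ZENML_STORE_API_KEY` are already exported as shown above), the Python client connects without any login step:

```python
from zenml.client import Client

# The client picks up ZENML_STORE_URL and ZENML_STORE_API_KEY from the
# environment, so no `zenml login` call is needed here.
client = Client()
print(f"Connected to the ZenML server as: {client.active_user.name}")
```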
================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/README.md === # Connecting to ZenML Once ZenML is deployed, there are multiple methods to connect to it. For detailed deployment instructions, refer to the [production guide](../../../user-guide/production-guide/deploying-zenml.md). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md === # ZenML Migration Guide: Version 0.13.2 to 0.20.0 **Last Updated:** 2023-07-24 ## Overview ZenML 0.20.0 introduces significant architectural changes that may not be backwards compatible. This guide outlines the necessary steps to migrate existing ZenML stacks and pipelines to the new version. ### Key Changes 1. **Metadata Store**: ZenML now manages its own Metadata Store, eliminating the need for separate implementations. Existing remote Metadata Stores must be replaced with a ZenML server deployment. 2. **ZenML Dashboard**: A new dashboard is included for managing ZenML deployments. 3. **Profiles Removed**: ZenML Profiles have been replaced with Projects. Existing Profiles must be manually migrated. 4. **Decoupled Configuration**: Stack component configurations are now separate from their implementations. Custom stack components may require updates. 5. **Collaboration Features**: The new server allows sharing of stacks and components among users. ## Migration Steps ### 1. Update ZenML To revert to the previous version if needed: ```bash pip install zenml==0.13.2 ``` ### 2. Migrate Pipeline Runs Use the `zenml pipeline runs migrate` command to transfer existing pipeline run data: - Backup your metadata stores before upgrading. - Decide on your ZenML deployment model. - Connect to your ZenML server if applicable. **Migration Commands**: - For SQLite: ```bash zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db ``` - For MySQL: ```bash zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD ``` ### 3. Deploy ZenML Server To deploy a local server: ```bash zenml up ``` To connect to an existing server: ```bash zenml connect ``` ### 4. Migrate Profiles Profiles are deprecated; migrate them to Projects: 1. Update ZenML to 0.20.0. 2. Connect to your ZenML server. 3. Use: ```bash zenml profile list zenml profile migrate PATH/TO/PROFILE ``` *Note: The Dashboard currently only shows the `default` Project.* ### 5. Configuration Changes - **Rename Classes**: - `Repository` → `Client` - `BaseStepConfig` → `BaseParameters` - **New Configuration Method**: Use the `settings` parameter in decorators: ```python @step(settings={"docker": DockerSettings(...)}) def my_step() -> None: ... ``` ### 6. Pipeline and Step Configuration - Remove deprecated decorators like `@enable_xxx`. - Use the new `BaseSettings` class for configurations. ### 7. Post-Execution Changes Update post-execution workflows: ```python from zenml.post_execution import get_pipelines, get_pipeline ``` ## Future Changes Upcoming changes may include moving the secrets manager out of the stack and potential deprecation of `StepContext`. ## Reporting Issues For bugs or feature requests, engage with the ZenML community on [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). 
This guide provides a comprehensive overview of the migration process to ensure a smooth transition to ZenML 0.20.0. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md === # ZenML Migration Guide ## Overview This guide outlines the migration process for ZenML code when upgrading to new versions, particularly when breaking changes are introduced. ### Versioning and Migration Types - **No Breaking Changes**: Upgrades like `0.40.2` to `0.40.3` require no migration. - **Minor Breaking Changes**: Upgrades such as `0.40.3` to `0.41.0` necessitate consideration of changes. - **Major Breaking Changes**: Upgrades from `0.39.1` to `0.40.0` involve significant changes in code structure or usage. ### Major Migration Guides Follow these guides sequentially if multiple migrations are needed: - [0.13.2 → 0.20.0](migration-zero-twenty.md) - [0.23.0 → 0.30.0](migration-zero-thirty.md) - [0.39.1 → 0.41.0](migration-zero-forty.md) - [0.58.2 → 0.60.0](migration-zero-sixty.md) ### Release Notes For minor breaking changes, refer to the official [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for detailed information on changes introduced in each release. **Note**: This documentation is for an older version of ZenML. For the latest version, please visit [this up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md === ### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2) **Overview**: ZenML has upgraded to Pydantic v2, introducing stricter validation and performance improvements. Users may encounter new validation errors due to these changes. #### Key Dependency Changes: - **SQLModel**: Upgraded from `0.0.8` to `0.0.18` for compatibility with Pydantic v2. - **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy should review the [migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). #### Pydantic v2 Features: - Enhanced performance using Rust. - New features in model design, validation, and serialization. Refer to the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/) for details. #### Integration Updates: - **Airflow**: Removed dependencies due to incompatibility with SQLAlchemy v1. Use ZenML to create Airflow pipelines in a separate environment. - **AWS**: Updated `sagemaker` to version `2.172.0` to support `protobuf` 4. - **Evidently**: Updated to version `0.4.16` for Pydantic v2 compatibility. - **Feast**: Removed extra `redis` dependency for compatibility. - **GCP**: Upgraded `kfp` to v2, which no longer requires Pydantic. Expect functional changes in the vertex step operator. - **Great Expectations**: Updated dependency to `great-expectations>=0.17.15,<1.0` for Pydantic v2 support. - **Kubeflow**: Similar to GCP, upgraded `kfp` to v2. - **MLflow**: Compatible with both Pydantic versions, but may downgrade to v1 if installed incorrectly. Watch for deprecation warnings. - **Label Studio**: Updated to support Pydantic v2. - **Skypilot**: Incompatibility with `azurecli` prevents installation of `skypilot[azure]`. Users should remain on the previous ZenML version. - **TensorFlow**: Requires `tensorflow>=2.12.0` due to dependency changes. Issues may arise with Python 3.8 on Ubuntu. - **Tekton**: Updated to use `kfp` v2 for compatibility. 
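To illustrate the stricter validation mentioned above, here is a small, hypothetical (non-ZenML) Pydantic model showing a case that v1 silently coerced but v2 rejects:

```python
from pydantic import BaseModel, ValidationError

class TrainerConfig(BaseModel):  # illustrative model, not part of the ZenML API
    num_epochs: int

# Pydantic v1 truncated 3.7 to 3; Pydantic v2 raises a ValidationError instead.
try:
    TrainerConfig(num_epochs=3.7)
except ValidationError as err:
    print(err)
```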
#### Recommendations: - Users may face dependency issues upon upgrading to ZenML 0.60.0, especially with integrations not supporting Pydantic v2. It is advisable to set up a fresh Python environment for a smoother transition. For more information, refer to the [ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md === ### Migration Guide: ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 **Important Notes:** - This documentation is for older ZenML versions. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). - Migrating to `0.30.0` involves non-reversible database changes; downgrading to `<=0.23.0` is not possible. If on an older version, follow the [0.20.0 Migration Guide](migration-zero-twenty.md) first. **Key Changes in ZenML 0.30.0:** - The `ml-pipelines-sdk` dependency has been removed. - Pipeline runs and artifacts are now stored directly in the ZenML database. **Migration Steps:** 1. Install ZenML 0.30.0: ```bash pip install zenml==0.30.0 zenml version # Should return 0.30.0 ``` 2. The database migration occurs automatically upon running any `zenml` CLI command after installation. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md === # ZenML Migration Guide: Version 0.39.1 to 0.41.0 ## Overview ZenML versions 0.40.0 and 0.41.0 introduced a new syntax for defining steps and pipelines. The old syntax is deprecated and will be removed in future releases. ## Migration Examples ### Old Syntax ```python from typing import Optional from zenml.steps import BaseParameters, Output, StepContext, step from zenml.pipelines import pipeline class MyStepParameters(BaseParameters): param_1: int param_2: Optional[float] = None @step def my_step(params: MyStepParameters, context: StepContext) -> Output(int_output=int, str_output=str): result = int(params.param_1 * (params.param_2 or 1)) result_uri = context.get_output_artifact_uri() return result, result_uri @pipeline def my_pipeline(my_step): my_step() step_instance = my_step(params=MyStepParameters(param_1=17)) pipeline_instance = my_pipeline(my_step=step_instance) pipeline_instance.run(schedule=Schedule(...)) ``` ### New Syntax ```python from typing import Annotated, Optional, Tuple from zenml import get_step_context, pipeline, step @step def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: result = int(param_1 * (param_2 or 1)) result_uri = get_step_context().get_output_artifact_uri() return result, result_uri @pipeline def my_pipeline(): my_step(param_1=17) my_pipeline = my_pipeline.with_options(enable_cache=False, schedule=schedule) my_pipeline() ``` ## Key Changes ### Defining Steps - **Old:** Use `BaseParameters` to define parameters. - **New:** Define parameters directly in the step function or use `pydantic.BaseModel`. ### Running Steps - **Old:** Call `step.entrypoint()`. - **New:** Call the step directly. ### Defining Pipelines - **Old:** Steps are arguments of the pipeline function. - **New:** Steps are called directly within the pipeline function. ### Configuring Pipelines - **Old:** Use `pipeline_instance.configure(...)`. - **New:** Use `with_options(...)` method. ### Running Pipelines - **Old:** Create an instance and call `run(...)`. - **New:** Call the pipeline directly. 
### Scheduling Pipelines - **Old:** Specify schedule in `run(...)`. - **New:** Use `with_options(...)` to set the schedule. ### Fetching Pipeline Execution Results - **Old:** Access runs via `get_runs()`. - **New:** Use `last_run` to access the most recent execution. ### Controlling Step Execution Order - **Old:** Use `step.after(...)`. - **New:** Pass `after` argument when calling a step. ### Defining Steps with Multiple Outputs - **Old:** Use `Output` class. - **New:** Use `Tuple` with optional custom output names. ### Accessing Run Information Inside Steps - **Old:** Pass `StepContext` as an argument. - **New:** Use `get_step_context()` to access run information. This guide provides a concise overview of the migration process from ZenML version 0.39.1 to 0.41.0, highlighting the key changes in syntax and functionality. For further details, refer to the ZenML documentation. ================================================== === File: docs/book/how-to/configuring-zenml/configuring-zenml.md === # Configuring ZenML's Default Behavior This guide outlines how to configure ZenML's behavior in various situations. ### Key Points: - ZenML allows customization of its default settings. - Users can adapt ZenML to fit specific workflows and requirements. For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/popular-integrations/skypilot.md === ### Summary of Using SkyPilot with ZenML **Overview**: The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost efficiency and high GPU availability. #### Prerequisites - Install ZenML SkyPilot integration for your cloud provider: ```bash zenml integration install skypilot_ ``` - Ensure Docker is running. - Set up a remote artifact store and container registry. - Have a remote ZenML deployment. - Obtain permissions for VM provisioning. - Configure a service connector for cloud authentication (not required for Lambda Labs). #### Configuration Steps **For AWS, GCP, Azure**: 1. Install SkyPilot integration and connectors. 2. Register a service connector with necessary credentials. 3. Register and connect the orchestrator to the service connector. 4. Register and activate a stack with the orchestrator. ```bash zenml service-connector register -skypilot-vm -t --auto-configure zenml orchestrator register --flavor vm_ zenml orchestrator connect --connector -skypilot-vm zenml stack register -o ... --set ``` **For Lambda Labs**: 1. Install the SkyPilot Lambda integration. 2. Register a secret with your API key. 3. Register the orchestrator using the API key secret. 4. Register and activate a stack. ```bash zenml secret create lambda_api_key --scope user --api_key= zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} zenml stack register -o ... --set ``` #### Running a Pipeline Once configured, run ZenML pipelines with the SkyPilot VM Orchestrator, where each step executes in a Docker container on a provisioned VM. 
#### Additional Configuration Customize the orchestrator with cloud-specific `Settings` objects: ```python from zenml.integrations.skypilot_.flavors.skypilot_orchestrator__vm_flavor import SkypilotOrchestratorSettings skypilot_settings = SkypilotOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region= ) @pipeline(settings={"orchestrator": skypilot_settings}) ``` Configure resources per step: ```python @step(settings={"orchestrator": SkypilotOrchestratorSettings(...)}) def resource_intensive_step(): ... ``` For more advanced options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). ================================================== === File: docs/book/how-to/popular-integrations/kubernetes.md === ### Summary: Deploying ZenML Pipelines on Kubernetes The ZenML Kubernetes Orchestrator enables the execution of ML pipelines on a Kubernetes cluster without requiring Kubernetes code. It serves as a simpler alternative to orchestrators like Airflow or Kubeflow. #### Prerequisites To use the Kubernetes Orchestrator, ensure you have: - ZenML `kubernetes` integration installed: `zenml integration install kubernetes` - Docker and `kubectl` installed - A remote artifact store and container registry in your ZenML stack - A deployed Kubernetes cluster - (Optional) A configured `kubectl` context for the cluster #### Deploying the Orchestrator You need a Kubernetes cluster to run the orchestrator. Various deployment methods exist, which can be explored in the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md). #### Configuring the Orchestrator Configuration can be done in two ways: 1. **Using a Service Connector** (recommended for cloud-managed clusters): ```bash zenml orchestrator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Using `kubectl` Context**: ```bash zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... --set ``` #### Running a Pipeline To execute a ZenML pipeline with the Kubernetes Orchestrator, run: ```bash python your_pipeline.py ``` This command creates a Kubernetes pod for each pipeline step. Use `kubectl` commands to interact with the pods. For further details, refer to the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md). ================================================== === File: docs/book/how-to/popular-integrations/gcp-guide.md === # Minimal GCP Stack Setup Guide This guide outlines the steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. ## Steps to Set Up ### 1. Choose a GCP Project Select or create a Google Cloud project in the console. Ensure a billing account is attached. ```bash gcloud projects create --billing-project= ``` ### 2. Enable GCloud APIs Enable the following APIs in your GCP project: - Cloud Functions API - Cloud Run Admin API - Cloud Build API - Artifact Registry API - Cloud Logging API ### 3. Create a Dedicated Service Account Create a service account with the following roles: - AI Platform Service Agent - Storage Object Admin ### 4. Create a JSON Key for the Service Account Generate a JSON key for the service account: ```bash export JSON_KEY_FILE_PATH= ``` ### 5. 
Create a Service Connector in ZenML Authenticate ZenML with GCP using the service account: ```bash zenml integration install gcp \ && zenml service-connector register gcp_connector \ --type gcp \ --auth-method service-account \ --service_account_json=@${JSON_KEY_FILE_PATH} \ --project_id= ``` ### 6. Create Stack Components #### Artifact Store Create a GCS bucket and register it as an artifact store: ```bash export ARTIFACT_STORE_NAME=gcp_artifact_store zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp --path=gs:// zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i ``` #### Orchestrator Register Vertex AI as the orchestrator: ```bash export ORCHESTRATOR_NAME=gcp_vertex_orchestrator zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex --project= --location=europe-west2 zenml orchestrator connect ${ORCHESTRATOR_NAME} -i ``` #### Container Registry Register a GCP container registry: ```bash export CONTAINER_REGISTRY_NAME=gcp_container_registry zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri= zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i ``` ### 7. Create Stack Register the stack with the created components: ```bash export STACK_NAME=gcp_stack zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set ``` ## Cleanup To remove all created resources, delete the project: ```bash gcloud project delete ``` ## Best Practices - **IAM and Least Privilege**: Grant minimal permissions for security. - **Resource Labeling**: Use labels for better cost tracking and organization. ```bash gcloud storage buckets update gs://your-bucket-name --update-labels=project=zenml,environment=production ``` - **Cost Management**: Monitor spending using GCP’s Cost Management tools and set budget alerts. ```bash gcloud billing budgets create --billing-account=BILLING_ACCOUNT_ID --display-name="ZenML Monthly Budget" --budget-amount=1000 --threshold-rule=percent=90 ``` - **Backup Strategy**: Regularly back up data and enable versioning on GCS buckets. ```bash gsutil versioning set on gs://your-bucket-name ``` By following these steps and best practices, you can efficiently set up and manage a GCP stack for ZenML projects. ================================================== === File: docs/book/how-to/popular-integrations/azure-guide.md === # Azure Stack Setup for ZenML Pipelines This guide outlines the steps to set up a minimal production stack on Azure for running ZenML pipelines. ## Prerequisites - Active Azure account - ZenML installed - ZenML Azure integration installed: ```bash zenml integration install azure ``` ## Steps to Set Up Azure Stack ### 1. Set Up Credentials Create a service principal in Azure: 1. Go to Azure Portal > App Registrations > `+ New registration`. 2. Register and note the Application ID and Tenant ID. 3. Under `Certificates & secrets`, create a client secret and note its value. ### 2. Create Resource Group and AzureML Instance 1. In the Azure Portal, navigate to `Resource Groups` and click `+ Create`. 2. After creating the resource group, click `+ Create` to add an Azure Machine Learning workspace. Optionally, create a container registry. ### 3. Create Role Assignments 1. In your resource group, go to `Access control (IAM)` > `+ Add role assignment`. 2. Assign the following roles to your registered app: - AzureML Compute Operator - AzureML Data Scientist - AzureML Registry User ### 4. 
Create a Service Connector Register a ZenML Azure Service Connector: ```bash zenml service-connector register azure_connector --type azure \ --auth-method service-principal \ --client_secret= \ --tenant_id= \ --client_id= ``` ### 5. Create Stack Components #### Artifact Store (Azure Blob Storage) 1. Create a container in your AzureML workspace's storage account. 2. Register the artifact store: ```bash zenml artifact-store register azure_artifact_store -f azure \ --path= \ --connector azure_connector ``` #### Orchestrator (AzureML) Register the orchestrator: ```bash zenml orchestrator register azure_orchestrator -f azureml \ --subscription_id= \ --resource_group= \ --workspace= \ --connector azure_connector ``` #### Container Registry (Azure Container Registry) Register the container registry: ```bash zenml container-registry register azure_container_registry -f azure \ --uri= \ --connector azure_connector ``` ### 6. Create a Stack Register the Azure ZenML stack: ```bash zenml stack register azure_stack \ -o azure_orchestrator \ -a azure_artifact_store \ -c azure_container_registry \ --set ``` ### 7. Run a Pipeline Define and run a simple ZenML pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from Azure!" @pipeline def azure_pipeline(): hello_world() if __name__ == "__main__": azure_pipeline() ``` Save as `run.py` and execute: ```bash python run.py ``` ### Next Steps - Explore ZenML's production guide for best practices. - Investigate ZenML integrations with other tools. - Join the ZenML community for support. For the latest documentation, visit [ZenML Docs](https://docs.zenml.io). ================================================== === File: docs/book/how-to/popular-integrations/kubeflow.md === # Kubeflow Orchestrator with ZenML The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without needing to write Kubeflow code. ## Prerequisites To use the Kubeflow Orchestrator, ensure you have: - ZenML `kubeflow` integration installed: `zenml integration install kubeflow` - Docker installed and running - (Optional) `kubectl` installed - A Kubernetes cluster with Kubeflow Pipelines - A remote artifact store and container registry in your ZenML stack - A remote ZenML server deployed in the cloud - (Optional) Kubernetes context name for the remote cluster ## Configuring the Orchestrator You can configure the orchestrator in two ways: 1. **Using a Service Connector** (recommended for cloud-managed clusters): ```bash zenml orchestrator register --flavor kubeflow zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack update -o ``` 2. **Using `kubectl`** with a context: ```bash zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack update -o ``` ## Running a Pipeline To run a ZenML pipeline: ```python python your_pipeline.py ``` This command creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI. ## Additional Configuration Further configure the orchestrator with `KubeflowOrchestratorSettings`: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={ "affinity": {...}, "tolerations": [...] 
} ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` ## Multi-Tenancy Deployments For multi-tenant setups, specify the `kubeflow_hostname`: ```bash zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Provide namespace, username, and password in the settings: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="admin", client_password="abc123", user_namespace="namespace_name" ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` For more details, refer to the [full Kubeflow Orchestrator documentation](../../component-guide/orchestrators/kubeflow.md). ================================================== === File: docs/book/how-to/popular-integrations/aws-guide.md === # AWS Stack Setup for ZenML Pipelines This guide outlines the steps to create a minimal AWS stack for running ZenML pipelines. It includes setting up IAM roles, service connectors, and stack components. ## Prerequisites - Active AWS account with permissions for S3, SageMaker, ECR, and ECS. - ZenML installed. - AWS CLI configured with your credentials. ## Steps ### 1. Set Up Credentials and Local Environment 1. **Choose AWS Region**: Select a region (e.g., `us-east-1`). 2. **Create IAM Role**: - Get your AWS account ID: ```shell aws sts get-caller-identity --query Account --output text ``` - Create `assume-role-policy.json`: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam:::root", "Service": "sagemaker.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } ``` - Create the IAM role: ```shell aws iam create-role --role-name zenml-role --assume-role-policy-document file://assume-role-policy.json ``` - Attach necessary policies: ```shell aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess ``` 3. **Install ZenML Integrations**: ```shell zenml integration install aws s3 -y ``` ### 2. Create a Service Connector Register an AWS Service Connector in ZenML: ```shell zenml service-connector register aws_connector \ --type aws \ --auth-method iam-role \ --role_arn= \ --region= \ --aws_access_key_id= \ --aws_secret_access_key= ``` ### 3. Create Stack Components #### Artifact Store (S3) 1. Create an S3 bucket: ```shell aws s3api create-bucket --bucket your-bucket-name ``` 2. Register the S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://your-bucket-name --connector aws_connector ``` #### Orchestrator (SageMaker Pipelines) 1. Create a SageMaker domain (follow AWS documentation). 2. Register the SageMaker orchestrator: ```shell zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region= --execution_role= ``` #### Container Registry (ECR) 1. Create an ECR repository: ```shell aws ecr create-repository --repository-name zenml --region ``` 2. Register the ECR container registry: ```shell zenml container-registry register ecr-registry --flavor=aws --uri=.dkr.ecr..amazonaws.com --connector aws_connector ``` ### 4. Create the Stack ```shell export STACK_NAME=aws_stack zenml stack register ${STACK_NAME} -o sagemaker-orchestrator \ -a cloud_artifact_store -c ecr-registry --set ``` ### 5. 
Run a Pipeline Define and run a ZenML pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from SageMaker!" @pipeline def aws_sagemaker_pipeline(): hello_world() if __name__ == "__main__": aws_sagemaker_pipeline() ``` Run the pipeline: ```shell python run.py ``` ## Cleanup To avoid charges, delete resources: ```shell # Delete S3 bucket aws s3 rm s3://your-bucket-name --recursive aws s3api delete-bucket --bucket your-bucket-name # Delete SageMaker domain aws sagemaker delete-domain --domain-id # Delete ECR repository aws ecr delete-repository --repository-name zenml --force # Detach policies and delete IAM role aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess aws iam delete-role --role-name zenml-role ``` ## Conclusion This guide provides a streamlined process to set up an AWS stack for ZenML, enabling scalable and efficient machine learning pipeline management. Key steps include IAM role creation, service connector registration, and stack component configuration. For best practices, consider IAM role management, resource tagging, cost management, and backup strategies. ================================================== === File: docs/book/how-to/popular-integrations/mlflow.md === # MLflow Experiment Tracker with ZenML ## Overview The MLflow Experiment Tracker integration in ZenML allows logging and visualization of pipeline step information using MLflow without additional code. ## Prerequisites - Install ZenML MLflow integration: ```bash zenml integration install mlflow -y ``` - Set up an MLflow deployment (local or remote). ## Configuring the Experiment Tracker ### Deployment Scenarios 1. **Local Deployment**: Uses a local artifact store. No extra configuration needed. ```bash zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` 2. **Remote Deployment**: Requires authentication. Recommended to use ZenML secrets. Create a secret: ```bash zenml secret create mlflow_secret --username= --password= ``` Register the experiment tracker: ```bash zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... ``` ## Using the Experiment Tracker To log information in a pipeline step: 1. Enable the experiment tracker with `@step` decorator. 2. Use MLflow's logging capabilities. ```python import mlflow @step(experiment_tracker="") def train_step(...): mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) 
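    # Illustrative, concrete versions of the elided calls above (the parameter,
    # metric, and artifact names are placeholders, not required by ZenML or MLflow):
    #   mlflow.log_param("learning_rate", 0.001)
    #   mlflow.log_metric("val_accuracy", 0.93)
    #   mlflow.log_artifact("model_summary.txt")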
``` ## Viewing Results Retrieve the MLflow experiment URL for a ZenML run: ```python last_run = client.get_pipeline("").last_run trainer_step = last_run.get_step("") tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value ``` ## Additional Configuration Configure the experiment tracker using `MLFlowExperimentTrackerSettings`: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step( experiment_tracker="", settings={"experiment_tracker": mlflow_settings} ) ``` For more details, refer to the [full MLflow Experiment Tracker documentation](../../component-guide/experiment-trackers/mlflow.md). ================================================== === File: docs/book/how-to/popular-integrations/README.md === # ZenML Integrations Guide ZenML integrates with popular tools in the data science and machine learning ecosystem. This guide provides instructions for seamless integration. ## Key Points - ZenML is designed for compatibility with various tools. - The integration process is straightforward and user-friendly. For detailed integration steps, refer to the specific tool documentation. ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-python.md === ### ZenML Template Creation and Execution **Overview**: This documentation outlines how to create and run templates using the ZenML Python SDK. Note that this feature is exclusive to ZenML Pro users. #### Create a Template 1. **Using an Existing Pipeline Run**: ```python from zenml.client import Client run = Client().get_pipeline_run() Client().create_run_template( name=, deployment_id=run.deployment_id ) ``` - Ensure the pipeline run was executed on a **remote stack**. 2. **From Pipeline Definition**: ```python from zenml import pipeline @pipeline def my_pipeline(): ... template = my_pipeline.create_run_template(name=) ``` #### Run a Template To execute a created template: ```python from zenml.client import Client template = Client().get_run_template() config = template.config_template # [OPTIONAL] Modify the config here Client().trigger_pipeline( template_id=template.id, run_configuration=config, ) ``` - The new run will execute on the same stack as the original. #### Advanced Usage: Run a Template from Another Pipeline You can trigger one pipeline from another: ```python import pandas as pd from zenml import pipeline, step from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml.artifacts.utils import load_artifact from zenml.client import Client from zenml.config.pipeline_run_configuration import PipelineRunConfiguration @step def trainer(data_artifact_id: str): df = load_artifact(data_artifact_id) @pipeline def training_pipeline(): trainer() @step def load_data() -> pd.DataFrame: ... @step def trigger_pipeline(df: UnmaterializedArtifact): run_config = PipelineRunConfiguration( steps={"trainer": {"parameters": {"data_artifact_id": df.id}}} ) Client().trigger_pipeline("training_pipeline", run_configuration=run_config) @pipeline def loads_data_and_triggers_training(): df = load_data() trigger_pipeline(df) # Triggers the training pipeline ``` **Additional Resources**: - For more details on `PipelineRunConfiguration` and `trigger_pipeline`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client). 
- Learn about Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md). ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-cli.md === ### Create a Template Using the ZenML CLI **Note:** This is an older version of the ZenML documentation. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). **Feature Availability:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access. #### Command to Create a Run Template Use the following command to create a run template with the ZenML CLI: ```bash zenml pipeline create-run-template --name= ``` - ``: Use `run.my_pipeline` if your pipeline is named `my_pipeline` in `run.py`. **Important:** Ensure you have an active **remote stack** when executing this command, or specify one using the `--stack` option. ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md === ### ZenML Dashboard Template Management **Overview**: This documentation describes how to create and run templates in the ZenML Dashboard. Note that this feature is exclusive to ZenML Pro users. #### Creating a Template 1. Navigate to a pipeline run executed on a remote stack (requires a remote orchestrator, artifact store, and container registry). 2. Click on `+ New Template`, provide a name, and click `Create`. #### Running a Template - To run a template: - Click `Run a Pipeline` on the main `Pipelines` page, or - Go to a specific template page and select `Run Template`. You will be directed to the `Run Details` page, where you can: - Upload a `.yaml` configuration file or - Modify configurations using the editor. After initiating the run, it will execute on the same stack as the original run. For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/trigger-pipelines/README.md === ### Triggering a Pipeline in ZenML In ZenML, pipelines can be triggered in various ways. The simplest method is to use a pipeline function directly: ```python from zenml import step, pipeline @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: print(f"Trained model using {len(data['features'])} data points.") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) if __name__ == "__main__": simple_ml_pipeline() ``` ### Run Templates **Run Templates** are pre-defined, parameterized configurations for ZenML pipelines, allowing easy execution from the ZenML dashboard or via the Client/REST API. They serve as customizable blueprints for pipeline runs. This feature is exclusive to ZenML Pro users. For more details on using templates, refer to the following resources: - [Use templates: Python SDK](use-templates-python.md) - [Use templates: CLI](use-templates-cli.md) - [Use templates: Dashboard](use-templates-dashboard.md) - [Use templates: REST API](use-templates-rest-api.md) ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md === ### ZenML REST API: Running a Template **Note:** This documentation refers to an older version of ZenML. For the latest version, visit [ZenML Documentation](https://docs.zenml.io). 
This feature is available only for ZenML Pro users; sign up [here](https://cloud.zenml.io). #### Prerequisites To trigger a pipeline via the REST API, you must have at least one run template created for that pipeline and know the pipeline name. #### Steps to Trigger a Pipeline 1. **Get Pipeline ID** - Call the endpoint to retrieve the pipeline ID: ```shell curl -X 'GET' \ '/api/v1/pipelines?name=' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` 2. **Get Template ID** - Use the pipeline ID to fetch available run templates: ```shell curl -X 'GET' \ '/api/v1/run_templates?pipeline_id=' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` 3. **Trigger the Pipeline** - Use the template ID to run the pipeline with a specified configuration: ```shell curl -X 'POST' \ '/api/v1/run_templates//runs' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}} }' ``` A successful response indicates that the pipeline has been re-triggered with the new configuration. #### Additional Information For details on obtaining a bearer token for API access, refer to the [API Reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically). ================================================== === File: docs/book/how-to/infrastructure-deployment/README.md === # Infrastructure and Deployment Summary This section details the infrastructure setup and deployment processes in ZenML. Key components include: 1. **Infrastructure Requirements**: ZenML can be deployed on various cloud providers (AWS, GCP, Azure) or on-premises. Ensure compatibility with Kubernetes for orchestration. 2. **Deployment Options**: - **Managed Services**: Utilize cloud-native services for ease of setup and maintenance. - **Self-Managed**: Deploy on your own Kubernetes cluster for greater control. 3. **Installation**: - Use `pip` to install ZenML: ```bash pip install zenml ``` 4. **Configuration**: - Configure your ZenML environment with: ```bash zenml init ``` - Set up the backend (e.g., MLflow, S3) for artifact storage and tracking. 5. **Pipeline Deployment**: - Define pipelines using decorators and run them with: ```python from zenml.pipelines import pipeline @pipeline def my_pipeline(): # pipeline steps here my_pipeline.run() ``` 6. **Monitoring and Logging**: Integrate with monitoring tools (e.g., Prometheus) for performance tracking and logging. 7. **Security**: Implement role-based access control (RBAC) and secure data handling practices. This summary encapsulates the essential aspects of infrastructure and deployment in ZenML, ensuring that critical information is retained for effective understanding and application. ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/terraform-stack-management.md === # Summary: Registering Existing Infrastructure with ZenML for Terraform Users ## Overview This guide details how to integrate ZenML with existing Terraform-managed infrastructure, specifically for advanced users managing custom Terraform code. It emphasizes a two-phase approach: Infrastructure Deployment and ZenML Registration. ## Two-Phase Approach 1. **Infrastructure Deployment**: Create cloud resources (handled by platform teams). 2. **ZenML Registration**: Register these resources as ZenML stack components. 
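In practice, the two phases often live in separate Terraform configurations that are applied independently; a minimal sketch of the overall workflow, assuming illustrative directory names `infrastructure/` and `zenml-registration/`:

```bash
# Phase 1: provision the cloud resources with your existing Terraform code
terraform -chdir=infrastructure init
terraform -chdir=infrastructure apply

# Phase 2: register the resulting resources as ZenML stack components
terraform -chdir=zenml-registration init
terraform -chdir=zenml-registration apply

# The registered stack should now be visible to ZenML
zenml stack list
```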
## Phase 1: Infrastructure Deployment You may already have existing Terraform configurations for your infrastructure, such as: ```hcl resource "google_storage_bucket" "ml_artifacts" { name = "company-ml-artifacts" location = "US" } resource "google_artifact_registry_repository" "ml_containers" { repository_id = "ml-containers" format = "DOCKER" } ``` ## Phase 2: ZenML Registration ### Setup the ZenML Provider Configure the ZenML provider to connect with your ZenML server: ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } } } provider "zenml" { # Configuration options loaded from environment variables } ``` Generate an API key with: ```bash zenml service-account create ``` ### Create Service Connectors Service connectors manage authentication for ZenML components: ```hcl resource "zenml_service_connector" "gcp_connector" { name = "gcp-${var.environment}-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id service_account_json = file("service-account.json") } } resource "zenml_stack_component" "artifact_store" { name = "existing-artifact-store" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } connector_id = zenml_service_connector.gcp_connector.id } ``` ### Register Stack Components Register components using a generic pattern: ```hcl locals { component_configs = { artifact_store = { type = "artifact_store", flavor = "gcp", configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } } container_registry = { type = "container_registry", flavor = "gcp", configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } } orchestrator = { type = "orchestrator", flavor = "vertex", configuration = { project = var.project_id, region = var.region } } } } resource "zenml_stack_component" "components" { for_each = local.component_configs name = "existing-${each.key}" type = each.value.type flavor = each.value.flavor configuration = each.value.configuration connector_id = zenml_service_connector.gcp_connector.id } ``` ### Assemble the Stack Combine components into a ZenML stack: ```hcl resource "zenml_stack" "ml_stack" { name = "${var.environment}-ml-stack" components = { for k, v in zenml_stack_component.components : k => v.id } } ``` ## Practical Walkthrough: Registering Existing GCP Infrastructure ### Prerequisites - GCS bucket for artifacts - Artifact Registry repository - Service account for ML operations - Vertex AI enabled for orchestration ### Configuration Steps 1. **Variables Configuration**: Define variables in `variables.tf`. 2. **Main Configuration**: Set up providers and resources in `main.tf`. 3. **Outputs Configuration**: Specify outputs in `outputs.tf`. 4. **terraform.tfvars Configuration**: Create a `terraform.tfvars` file for sensitive variables. ### Usage Instructions 1. Initialize Terraform: ```bash terraform init ``` 2. Install ZenML integrations: ```bash zenml integration install gcp ``` 3. Review planned changes: ```bash terraform plan ``` 4. Apply configuration: ```bash terraform apply ``` 5. Set the stack as active: ```bash zenml stack set $(terraform output -raw stack_name) ``` 6. Verify configuration: ```bash zenml stack describe ``` ## Best Practices - Use appropriate IAM roles and permissions. - Follow security practices for handling credentials. - Consider using Terraform workspaces for multiple environments. 
- Regularly back up Terraform state files. - Version control Terraform configurations, excluding sensitive files. For more information, refer to the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest). ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/best-practices.md === # Summary: Best Practices for Using IaC with ZenML ## Overview This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform, focusing on component-based architecture, environment management, resource isolation, and advanced stack management. ## Key Challenges - Supporting multiple ML teams with varying requirements. - Operating across different environments (dev, staging, prod). - Ensuring security and compliance. - Facilitating rapid iteration without infrastructure bottlenecks. ## ZenML Approach ZenML utilizes stack components as abstractions over infrastructure resources, allowing for consistency and reusability. ### Part 1: Stack Component Architecture **Problem:** Different teams require varied infrastructure configurations. **Solution:** Implement a component-based architecture by creating reusable modules. Example Terraform code for base infrastructure: ```hcl # modules/zenml_stack_base/main.tf terraform { required_providers { zenml = { source = "zenml-io/zenml" } google = { source = "hashicorp/google" } } } resource "random_id" "suffix" { byte_length = 6 } module "base_infrastructure" { source = "./modules/base_infra" environment = var.environment project_id = var.project_id region = var.region resource_prefix = "zenml-${var.environment}-${random_id.suffix.hex}" } resource "zenml_service_connector" "base_connector" { name = "${var.environment}-base-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id region = var.region service_account_json = module.base_infrastructure.service_account_key } } resource "zenml_stack_component" "artifact_store" { name = "${var.environment}-artifact-store" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${module.base_infrastructure.artifact_store_bucket}/artifacts" } connector_id = zenml_service_connector.base_connector.id } resource "zenml_stack" "base_stack" { name = "${var.environment}-base-stack" components = { artifact_store = zenml_stack_component.artifact_store.id } } ``` Teams can extend the base stack with specific components. ### Part 2: Environment Management and Authentication **Problem:** Different environments require tailored authentication and configurations. 
**Solution:** Use environment-specific configurations with adaptable service connectors: ```hcl locals { env_config = { dev = { machine_type = "n1-standard-4", gpu_enabled = false, auth_method = "service-account", auth_configuration = { service_account_json = file("dev-sa.json") } } prod = { machine_type = "n1-standard-8", gpu_enabled = true, auth_method = "external-account", auth_configuration = { external_account_json = file("prod-sa.json") } } } } resource "zenml_service_connector" "env_connector" { name = "${var.environment}-connector" type = "gcp" auth_method = local.env_config[var.environment].auth_method dynamic "configuration" { for_each = try(local.env_config[var.environment].auth_configuration, {}) content { key = configuration.key; value = configuration.value } } } ``` ### Part 3: Resource Sharing and Isolation **Problem:** Need for strict isolation of resources across ML projects. **Solution:** Implement resource scoping with project isolation: ```hcl locals { project_paths = { fraud_detection = "projects/fraud_detection/${var.environment}", recommendation = "projects/recommendation/${var.environment}" } } resource "zenml_stack_component" "project_artifact_stores" { for_each = local.project_paths name = "${each.key}-artifact-store" type = "artifact_store" configuration = { path = "gs://${var.shared_bucket}/${each.value}" } connector_id = zenml_service_connector.env_connector.id } resource "zenml_stack" "project_stacks" { for_each = local.project_paths name = "${each.key}-stack" components = { artifact_store = zenml_stack_component.project_artifact_stores[each.key].id } } ``` ### Part 4: Advanced Stack Management Practices 1. **Stack Component Versioning:** ```hcl locals { stack_version = "1.2.0" } resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}" } ``` 2. **Service Connector Management:** ```hcl resource "zenml_service_connector" "env_connector" { name = "${var.environment}-${var.purpose}-connector" auth_method = var.environment == "prod" ? "workload-identity" : "service-account" } ``` 3. **Component Configuration Management:** ```hcl locals { base_configs = { orchestrator = { location = var.region, project = var.project_id } } } resource "zenml_stack_component" "configured_component" { name = "${var.environment}-${var.component_type}" configuration = merge(local.base_configs[var.component_type], try(local.env_configs[var.environment][var.component_type], {})) } ``` 4. **Stack Organization and Dependencies:** ```hcl module "ml_stack" { source = "./modules/ml_stack" depends_on = [module.base_infrastructure] } ``` 5. **State Management:** ```hcl terraform { backend "gcs" { prefix = "terraform/state" } } ``` ## Conclusion Utilizing ZenML and Terraform for ML infrastructure enables the creation of a flexible, maintainable, and secure environment. Following these best practices ensures a clean infrastructure codebase and aligns ML operations with infrastructure management. ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/README.md === ### Integrate with Infrastructure as Code **Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section details how to integrate ZenML with popular IaC tools like **Terraform**. For more information on IaC, visit [AWS IaC Overview](https://aws.amazon.com/what-is/iac). 
![ZenML stack on Terraform Registry](../../../.gitbook/assets/terraform_providers_screenshot.png) ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/azure-service-connector.md === ### Summary of Azure Service Connector Documentation for ZenML The **Azure Service Connector** in ZenML enables authentication and access to various Azure resources, including Blob storage, AKS Kubernetes clusters, and ACR container registries. It supports automatic credential configuration via the Azure CLI and specialized authentication for different Azure services. #### Key Features: - **Resource Types**: - **Generic Azure Resource**: Connects to any Azure service using generic credentials. - **Azure Blob Storage**: Requires specific IAM permissions (e.g., `Storage Blob Data Contributor`). Supports URIs in formats like `az://{container-name}`. - **AKS Kubernetes Cluster**: Requires permissions to list and fetch AKS credentials. Identified by resource group and cluster name. - **ACR Container Registry**: Requires permissions to pull/push images. Identified by registry URI or name. #### Authentication Methods: 1. **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Requires enabling via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. 2. **Service Principal**: Involves client ID and secret for secure access. Recommended for production use. 3. **Access Token**: Temporary tokens that require regular updates. Not suitable for Azure Blob storage. #### Configuration Commands: - **List Connector Types**: ```shell zenml service-connector list-types --type azure ``` - **Register Service Connector**: - Implicit: ```shell zenml service-connector register azure-implicit --type azure --auth-method implicit --auto-configure ``` - Service Principal: ```shell zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` - **Describe Service Connector**: ```shell zenml service-connector describe ``` #### Local Client Provisioning: - The local Azure CLI, Kubernetes CLI, and Docker CLI can be configured with credentials from the Azure Service Connector. - Example for Kubernetes CLI: ```shell zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id= ``` #### Stack Components Usage: - Connect Azure Blob Storage, AKS, and ACR to ZenML Stack Components using the Azure Service Connector. - Example of registering and connecting components: ```shell zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore zenml orchestrator register aks-demo-cluster --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads zenml container-registry register acr-demo-registry --flavor azure --uri= ``` #### End-to-End Example: 1. Set up an Azure service principal with necessary permissions. 2. Register a multi-type Azure Service Connector. 3. Connect Azure Blob Storage, AKS, and ACR to ZenML Stack Components. 4. Run a simple pipeline to validate the setup. This documentation provides a comprehensive guide to configuring and using the Azure Service Connector with ZenML, ensuring secure and efficient access to Azure resources. For the latest updates, refer to the [ZenML documentation](https://docs.zenml.io). 
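As a final step of the end-to-end example, the connected components can be assembled into a stack and activated. A minimal sketch reusing the component names from the commands above (the stack name is illustrative):

```shell
zenml stack register azure-demo \
    -a azure-demo \
    -o aks-demo-cluster \
    -c acr-demo-registry \
    --set
```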
================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/service-connectors-guide.md === # ZenML Service Connectors Guide Summary This documentation provides a comprehensive guide to managing Service Connectors in ZenML, enabling connections to external resources. Key sections include terminology, types of Service Connectors, registration, and connecting Stack Components. ## Key Sections 1. **Terminology**: - **Service Connector Types**: Define specific implementations for connecting to resources, detailing capabilities and authentication methods. - **Resource Types**: Classify resources based on access protocols or vendors (e.g., `kubernetes-cluster`, `docker-registry`). - **Resource Names**: Unique identifiers for resource instances (e.g., S3 bucket names). 2. **Service Connector Types**: - Various built-in types (e.g., AWS, GCP, Kubernetes) support multiple authentication methods and resource types. - Commands to explore types: ```sh zenml service-connector list-types zenml service-connector describe-type ``` 3. **Registering Service Connectors**: - Connectors can be **multi-type** (access multiple resource types) or **single-instance** (access one resource). - Example command to register a multi-type AWS Service Connector: ```sh zenml service-connector register aws-multi-type --type aws --auto-configure ``` 4. **Connecting Stack Components**: - Use Service Connectors to link Stack Components to external resources. - Example command to connect an artifact store: ```sh zenml artifact-store connect --connector ``` 5. **Verification**: - Verify the configuration and credentials of Service Connectors to ensure access to resources. - Example command for verification: ```sh zenml service-connector verify ``` 6. **Local Client Configuration**: - Configure local CLI tools (e.g., `kubectl`, Docker) using credentials from Service Connectors. - Example command to configure `kubectl`: ```sh zenml service-connector login --resource-type kubernetes-cluster --resource-id ``` 7. **Resource Discovery**: - Discover accessible resources via Service Connectors using: ```sh zenml service-connector list-resources ``` 8. **End-to-End Examples**: - Detailed examples for AWS, GCP, and Azure Service Connectors are available for practical guidance. ## Important Commands - List Service Connector Types: ```sh zenml service-connector list-types ``` - Register a Service Connector: ```sh zenml service-connector register --type --auto-configure ``` - Verify a Service Connector: ```sh zenml service-connector verify ``` - Connect Stack Components: ```sh zenml artifact-store connect --connector ``` This guide is essential for efficiently managing connections between ZenML and external resources, ensuring secure and effective integration within machine learning workflows. For the latest documentation, refer to [ZenML Docs](https://docs.zenml.io). ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/kubernetes-service-connector.md === ### Kubernetes Service Connector Overview The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to generic clusters via pre-authenticated Kubernetes Python clients. It also facilitates local `kubectl` configuration. 
#### Prerequisites - Install the Kubernetes Service Connector: - For only the connector: ```shell pip install "zenml[connectors-kubernetes]" ``` - For the entire Kubernetes integration: ```shell zenml integration install kubernetes ``` - Local `kubectl` configuration is not required for accessing clusters. #### Listing Service Connector Types To list available service connector types: ```shell zenml service-connector list-types --type kubernetes ``` #### Resource Types - Supports generic Kubernetes clusters identified by the `kubernetes-cluster` resource type. #### Authentication Methods 1. Username and password (not recommended for production). 2. Authentication token (can be empty for local K3D clusters). **Warning**: The Service Connector does not generate short-lived credentials; use API tokens with client certificates when possible. #### Auto-configuration Fetch credentials from the local `kubectl` during registration: ```sh zenml service-connector register kube-auto --type kubernetes --auto-configure ``` #### Describing a Service Connector To describe a registered service connector: ```sh zenml service-connector describe kube-auto ``` #### Local Client Provisioning Configure the local Kubernetes client with: ```sh zenml service-connector login kube-auto ``` #### Stack Components Usage The Kubernetes Service Connector can be utilized in Orchestrator and Model Deployer stack components, simplifying the management of Kubernetes workloads without explicit `kubectl` configurations. **Note**: Credentials discovered through the Service Connector may have limited lifetimes, particularly with third-party authentication providers. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/hyperai-service-connector.md === ### HyperAI Service Connector Documentation Summary The ZenML HyperAI Service Connector enables authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to linked Stack Components. #### Command to List Connector Types ```shell $ zenml service-connector list-types --type hyperai ``` #### Connector Overview | Name | Type | Resource Types | Auth Methods | Local | Remote | |--------------------------|-----------|----------------------|-------------------|-------|--------| | HyperAI Service Connector | 🤖 hyperai | 🤖 hyperai-instance | rsa-key | ✅ | ✅ | | | | | dsa-key | | | | | | | ecdsa-key | | | | | | | ed25519-key | | | #### Prerequisites To use the HyperAI Service Connector, install the integration: ```shell zenml integration install hyperai ``` #### Resource Types The connector supports HyperAI instances. #### Authentication Methods SSH connections to HyperAI instances are established using: 1. RSA key 2. DSA key 3. ECDSA key 4. ED25519 key **Note:** SSH private keys are long-lived credentials granting unrestricted access to HyperAI instances. They will be distributed to all clients running pipelines. #### Configuration Requirements When configuring the Service Connector, provide: - At least one `hostname` - `username` for login - Optionally, an `ssh_passphrase` **Usage Options:** 1. Create one connector per HyperAI instance with different SSH keys. 2. Use a single SSH key for multiple instances, selecting the instance during orchestrator component creation. #### Auto-configuration The Service Connector does not support auto-discovery of authentication credentials from HyperAI instances. 
Feedback on this feature is welcome via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). #### Stack Components Usage The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/docker-service-connector.md === ### Summary of Docker Service Connector Documentation The ZenML Docker Service Connector enables authentication with Docker or OCI container registries and manages Docker clients for these registries. It provides pre-authenticated `python-docker` clients to linked Stack Components. #### Key Commands - **List Connector Types:** ```shell zenml service-connector list-types --type docker ``` Output indicates the availability of the Docker Service Connector with authentication via password. #### Prerequisites - No additional Python packages are needed; all are included in the ZenML package. - Docker must be installed in environments where container images are built and pushed. #### Resource Types - Supports Docker/OCI container registries identified by the `docker-registry` resource type. - Formats for resource names: - DockerHub: `docker.io` or `https://index.docker.io/v1/` - OCI registry: `https://host:port/` #### Authentication Methods - Authentication is via username and password or access token; using API tokens is recommended. - **Register DockerHub Connector:** ```sh zenml service-connector register dockerhub --type docker -in ``` Prompts for service connector name, description, type, and authentication details. #### Important Notes - Credentials configured in the Service Connector are distributed directly to clients; short-lived credentials are not supported. - Auto-discovery of credentials from local Docker clients is not available. #### Local Client Provisioning - Configure local Docker client with: ```sh zenml service-connector login dockerhub ``` Warning about unencrypted password storage will be displayed. #### Stack Components Usage - The connector can be used by all Container Registry stack components to authenticate to remote registries, facilitating the building and publishing of container images without explicit Docker credentials in the environment. #### Future Enhancements - Automatic configuration of Docker credentials in container runtimes (e.g., Kubernetes) is planned for future releases. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/gcp-service-connector.md === ### Summary of GCP Service Connector Documentation The **GCP Service Connector** in ZenML enables connection to various GCP resources like GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods, including GCP user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication, with a focus on issuing short-lived OAuth 2.0 tokens for enhanced security. #### Key Features: - **Authentication Methods**: - **Implicit Authentication**: Uses Application Default Credentials (ADC) without explicit configuration. Requires enabling via environment variables. - **User Account**: Uses long-lived credentials, generating temporary OAuth 2.0 tokens by default. - **Service Account**: Requires a service account key JSON, generating temporary tokens. - **Service Account Impersonation**: Generates temporary STS credentials by impersonating another service account. 
- **External Account**: Utilizes GCP workload identity federation for authentication using AWS IAM or Azure AD credentials. - **OAuth 2.0 Token**: Requires manual token management, suitable for short-term access. #### Resource Types: - **Generic GCP Resource**: For general GCP service access. - **GCS Bucket**: Requires specific permissions for accessing GCS. - **GKE Cluster**: Requires permissions to list and get cluster details. - **GAR/GCR Registry**: Supports both Google Artifact Registry and legacy Google Container Registry. #### Prerequisites: - Install the GCP Service Connector via: ```bash pip install "zenml[connectors-gcp]" ``` or ```bash zenml integration install gcp ``` - GCP CLI installation is recommended for auto-configuration. #### Example Commands: - List available GCP Service Connector types: ```bash zenml service-connector list-types --type gcp ``` - Register a GCP Service Connector with auto-configuration: ```bash zenml service-connector register gcp-auto --type gcp --auto-configure ``` - Verify a service connector: ```bash zenml service-connector verify gcp-user-account --resource-type kubernetes-cluster ``` #### Local Client Provisioning: - The local `gcloud`, `kubectl`, and Docker CLI can be configured with credentials from the GCP Service Connector, allowing seamless access to GCP resources. #### Stack Components Use: - The GCP Service Connector can connect various ZenML stack components, including GCS Artifact Store, Kubernetes Orchestrator, and GCP Container Registry, facilitating a streamlined workflow without manual credential management. #### End-to-End Examples: - **GKE Kubernetes Orchestrator**: Connects to a GKE cluster, GCS Artifact Store, and GCR using a multi-type GCP Service Connector. - **VertexAI Orchestrator**: Uses individual service connectors for GCS, GCR, and Vertex AI resources. This documentation provides a comprehensive guide for configuring and utilizing the GCP Service Connector within ZenML, ensuring secure and efficient access to GCP resources. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/best-security-practices.md === # Summary of Best Practices for Authentication Methods in Service Connectors ## Overview Service Connectors for cloud providers support various authentication methods. While no single standard exists, identifiable patterns can guide the selection of appropriate methods. Understanding these methods requires some knowledge of authentication and authorization. ## Authentication Methods ### Username and Password - **Avoid using primary account passwords** as credentials. Use alternatives like session tokens, API keys, or API tokens. - Passwords are the least secure method and should not be shared or used for automated workloads. Cloud platforms typically require exchanging passwords for long-lived credentials. ### Implicit Authentication - Provides immediate access to cloud resources using locally stored credentials, configuration files, or environment variables. - **Security Risk**: Can grant access to resources configured for the ZenML Server. Disabled by default; must be enabled via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. - Works with cloud-specific metadata services (e.g., AWS EC2, GCP service accounts, Azure Managed Identity). ### Long-lived Credentials (API Keys, Account Keys) - Preferred for production use, especially when sharing results. They are exchanged for temporary tokens or used with impersonation methods. 
- Different cloud providers have various long-lived credential types (e.g., AWS Access Keys, GCP Service Account Credentials). - **User Credentials**: Tied to human users; should not be shared. - **Service Credentials**: Used for automated processes; better for sharing due to restricted permissions. ### Generating Temporary and Down-scoped Credentials - Temporary credentials are issued to clients, keeping long-lived credentials secure on the server. - **Example**: AWS Service Connector can issue session tokens that expire after a set duration. ### Impersonating Accounts and Assuming Roles - Requires setup of multiple accounts/roles for flexibility and control. - Long-lived credentials are used to obtain short-lived tokens with specific permissions, enhancing security. ### Short-lived Credentials - Temporary credentials can be manually configured or generated during auto-configuration. - Useful for granting temporary access without exposing long-lived credentials but can lead to Service Connector unusability upon expiration. ## Example Commands - **GCP Implicit Authentication**: ```sh zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core ``` - **AWS Long-lived Credentials**: ```sh zenml service-connector register aws-federation-multi --type aws --auth-method=federation-token --auto-configure ``` - **GCP Account Impersonation**: ```sh zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl ``` This summary encapsulates the essential best practices and technical details regarding authentication methods in Service Connectors, ensuring that critical information is preserved while maintaining conciseness. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/aws-service-connector.md === ### Summary of AWS Service Connector Documentation for ZenML **Overview**: The AWS Service Connector in ZenML allows seamless integration with AWS resources such as S3 buckets, EKS clusters, and ECR registries. It supports multiple authentication methods including AWS secret keys, IAM roles, STS tokens, and implicit authentication. #### Key Features: - **Authentication Methods**: - **Implicit Authentication**: Uses environment variables or IAM roles. Requires enabling via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. - **AWS Secret Key**: Long-lived credentials for development; not recommended for production. - **AWS STS Token**: Temporary tokens that require regular updates. - **AWS IAM Role**: Generates temporary STS credentials by assuming a role. - **AWS Session Token**: Generates temporary tokens for IAM users. - **AWS Federation Token**: Generates tokens for federated users. - **Resource Types**: - **Generic AWS Resource**: Connects to any AWS service using a pre-configured boto3 session. - **S3 Bucket**: Requires specific IAM permissions (e.g., `s3:ListBucket`, `s3:GetObject`). - **EKS Cluster**: Requires permissions like `eks:ListClusters`. - **ECR Registry**: Requires permissions such as `ecr:DescribeRepositories`. 
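Once a connector is registered, resource discovery can confirm which of these resource types and instances are actually reachable before any stack components are wired up. A minimal sketch, assuming a connector registered as `aws-demo-multi` and a bucket named `zenfiles` (both taken from the example workflow below):

```shell
# List all S3 buckets the connector can access
zenml service-connector list-resources --resource-type s3-bucket

# Verify access to one specific bucket
zenml service-connector verify aws-demo-multi \
    --resource-type s3-bucket \
    --resource-id s3://zenfiles
```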
#### Configuration Commands: - **List Connector Types**: ```shell zenml service-connector list-types --type aws ``` - **Register a Service Connector**: ```shell zenml service-connector register -i --type aws ``` - **Verify Access to Resources**: ```shell zenml service-connector verify --resource-type ``` #### Auto-Configuration: - Automatically fetches credentials from the AWS CLI. Use the following command: ```shell AWS_PROFILE= zenml service-connector register --type aws --auto-configure ``` #### Local Client Provisioning: - Local AWS CLI, Kubernetes `kubectl`, and Docker CLI can be configured with credentials from the AWS Service Connector. The local AWS CLI profile is named based on the Service Connector UUID. #### Example Workflow: 1. **Register Service Connector**: ```shell AWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure ``` 2. **Register and Connect Stack Components**: - **S3 Artifact Store**: ```shell zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-demo-multi ``` - **Kubernetes Orchestrator**: ```shell zenml orchestrator register eks-zenml-zenhacks --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads zenml orchestrator connect eks-zenml-zenhacks --connector aws-demo-multi ``` - **ECR Container Registry**: ```shell zenml container-registry register ecr-us-east-1 --flavor aws --uri=715803424590.dkr.ecr.us-east-1.amazonaws.com zenml container-registry connect ecr-us-east-1 --connector aws-demo-multi ``` 3. **Run a Simple Pipeline**: ```python from zenml import pipeline, step @step def step_1() -> str: return "world" @step(enable_cache=False) def step_2(input_one: str, input_two: str) -> None: print(f"{input_one} {input_two}") @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) if __name__ == "__main__": my_pipeline() ``` This summary captures the essential details of configuring and using the AWS Service Connector with ZenML while maintaining the integrity of the technical information. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/README.md === ### Summary of ZenML Service Connectors Documentation **Overview:** ZenML facilitates the connection of MLOps pipelines to various cloud providers and infrastructure services (AWS, GCP, Azure, Kubernetes, etc.) through **Service Connectors**. These connectors simplify authentication and authorization, enhancing security and usability. **Key Points:** - **Service Connectors** abstract the complexity of managing credentials and security best practices, allowing seamless connections to external resources without embedding sensitive information directly in code. - **Use Case Example:** Connecting ZenML to an AWS S3 bucket using the AWS Service Connector: - Registering an S3 Artifact Store: ```sh zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME ``` - **Alternatives to Service Connectors:** 1. Embedding credentials directly in Stack Components (not recommended). 2. Using ZenML secrets to store credentials. 3. Referencing secrets in configurations (limited support across Stack Components). - **Drawbacks of Alternatives:** - Security risks from long-lived credentials. - Portability issues with Kubernetes and SDK dependencies. - Lack of validation for credentials during runtime. 
- **Service Connector Benefits:** - Credentials are validated and managed on the ZenML server. - Generates short-lived credentials for client access, reducing security risks. - Multiple Stack Components can share the same Service Connector. - **Finding Resource Types:** To list available Service Connector types: ```sh zenml service-connector list-types ``` - **Describing a Service Connector Type:** Example for AWS: ```sh zenml service-connector describe-type aws ``` - **Registering a Service Connector:** To register an AWS Service Connector with auto-configuration: ```sh zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket ``` - **Connecting Stack Components:** To connect an S3 Artifact Store to a registered Service Connector: ```sh zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-s3 ``` - **Example Pipeline:** A simple ZenML pipeline can be defined as follows: ```python from zenml import step, pipeline @step def simple_step_one() -> str: return "Hello World!" @step def simple_step_two(msg: str) -> None: print(msg) @pipeline def simple_pipeline() -> None: message = simple_step_one() simple_step_two(msg=message) if __name__ == "__main__": simple_pipeline() ``` - **Security Best Practices:** ZenML emphasizes using temporary credentials and managing permissions effectively to enhance security. **Additional Resources:** - [Service Connector Guide](./service-connectors-guide.md) - [Security Best Practices](./best-security-practices.md) - [AWS Service Connector](./aws-service-connector.md) - [GCP Service Connector](./gcp-service-connector.md) - [Azure Service Connector](./azure-service-connector.md) This summary encapsulates the essential details of ZenML's Service Connectors, focusing on their purpose, usage, and benefits while maintaining critical technical information. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/reference-secrets-in-stack-configuration.md === ### Summary: Reference Secrets in Stack Configuration **Overview**: This documentation explains how to securely reference secrets in ZenML stack components, which is essential for handling sensitive information like passwords and tokens. **Secret Reference Syntax**: Use the format `{{.}}` to reference secrets in stack component attributes. **Example (CLI)**: ```shell # Create a secret for MLflow authentication zenml secret create mlflow_secret --username=admin --password=abc123 # Register the experiment tracker with secret references zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... ``` **Validation**: ZenML validates the existence of referenced secrets before running a pipeline to prevent runtime failures. The validation level can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: - `NONE`: Disables validation. - `SECRET_EXISTS`: Validates only the existence of secrets. - `SECRET_AND_KEY_EXISTS`: (default) Validates both the existence of secrets and their key-value pairs. **Fetching Secret Values in Steps**: For centralized secrets management, secrets can be accessed in steps using the ZenML `Client` API. 
**Example (Python)**: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: """Load the example secret from the server.""" secret = Client().get_secret() authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` **Additional Resources**: For more details on managing secrets, refer to the [Interact with secrets](../../../how-to/project-setup-and-management/interact-with-secrets.md) documentation. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-requirements.md === ### Export Stack Requirements To obtain the `pip` requirements for a specific ZenML stack, use the following CLI command: ```bash zenml stack export-requirements ``` For installation, it's recommended to output the requirements to a file and then install them: ```bash zenml stack export-requirements --output-file stack_requirements.txt pip install -r stack_requirements.txt ``` For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md === # Custom Stack Component Flavor Implementation in ZenML ## Overview ZenML allows for the creation of custom stack component flavors to tailor MLOps solutions. This guide explains component flavors, core abstractions, and the steps to implement a custom flavor. ## Component Flavors - **Component Type**: A broad category defining functionality (e.g., `artifact_store`). - **Flavor**: Specific implementations of a component type (e.g., `local`, `s3`). ## Core Abstractions 1. **StackComponent**: Defines core functionality. ```python from zenml.stack import StackComponent class BaseArtifactStore(StackComponent): @abstractmethod def open(self, path, mode="r"): pass @abstractmethod def exists(self, path): pass ``` 2. **StackComponentConfig**: Configures a stack component instance. ```python from zenml.stack import StackComponentConfig class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: ClassVar[Set[str]] ``` 3. **Flavor**: Combines the implementation and configuration classes. ```python from zenml.stack import Flavor class LocalArtifactStoreFlavor(Flavor): @property def name(self) -> str: return "local" @property def config_class(self) -> Type[LocalArtifactStoreConfig]: return LocalArtifactStoreConfig @property def implementation_class(self) -> Type[LocalArtifactStore]: return LocalArtifactStore ``` ## Implementing a Custom Flavor ### Step 1: Define Configuration Class Define `SUPPORTED_SCHEMES` and additional configuration values. ```python from zenml.artifact_stores import BaseArtifactStoreConfig from zenml.utils.secret_utils import SecretField class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig): SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"} key: Optional[str] = SecretField(default=None) secret: Optional[str] = SecretField(default=None) ``` ### Step 2: Implement the Class Implement the abstract methods using S3. 
```python import s3fs from zenml.artifact_stores import BaseArtifactStore class MyS3ArtifactStore(BaseArtifactStore): _filesystem: Optional[s3fs.S3FileSystem] = None @property def filesystem(self) -> s3fs.S3FileSystem: if not self._filesystem: self._filesystem = s3fs.S3FileSystem( key=self.config.key, secret=self.config.secret, ) return self._filesystem def open(self, path, mode="r"): return self.filesystem.open(path=path, mode=mode) def exists(self, path): return self.filesystem.exists(path=path) ``` ### Step 3: Define the Flavor Combine the configuration and implementation classes. ```python from zenml.artifact_stores import BaseArtifactStoreFlavor class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor): @property def name(self): return 'my_s3_artifact_store' @property def implementation_class(self): return MyS3ArtifactStore @property def config_class(self): return MyS3ArtifactStoreConfig ``` ## Registering the Flavor Use the ZenML CLI to register the new flavor. ```shell zenml artifact-store flavor register ``` ## Usage After registration, use the flavor in your stacks: ```shell zenml artifact-store register \ --flavor=my_s3_artifact_store \ --path='some-path' zenml stack register \ --artifact-store ``` ## Best Practices - Execute `zenml init` consistently. - Test flavors thoroughly before production use. - Keep code clean and well-documented. - Reference existing flavors for development. ## Further Learning For specific stack components, refer to the ZenML documentation for detailed guides on implementing custom flavors for various component types. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md === # Deploy a Cloud Stack with Terraform ZenML provides [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to facilitate the provisioning of cloud resources and their integration with ZenML Stacks, enhancing machine learning infrastructure deployment. ## Pre-requisites - A reachable ZenML server instance (not local). - Create a service account and API key for Terraform access: ```shell zenml service-account create ``` - Required on the machine running Terraform: - [Terraform](https://www.terraform.io/downloads.html) (version 1.9+). - Authenticated with your cloud provider via its CLI/SDK. ## Using Terraform Modules 1. Set up the ZenML provider with your server URL and API key using environment variables: ```shell export ZENML_SERVER_URL="https://your-zenml-server.com" export ZENML_API_KEY="" ``` 2. Create a `main.tf` configuration file: ```hcl terraform { required_providers { aws = { source = "hashicorp/aws" } zenml = { source = "zenml-io/zenml" } } } provider "zenml" {} module "zenml_stack" { source = "zenml-io/zenml-stack/" zenml_stack_name = "" orchestrator = "" } output "zenml_stack_id" { value = module.zenml_stack.zenml_stack_id } output "zenml_stack_name" { value = module.zenml_stack.zenml_stack_name } ``` 3. Run Terraform commands: ```shell terraform init terraform apply ``` 4. Confirm changes by typing `yes` when prompted. 5. Use the created ZenML stack: ```shell zenml integration install zenml stack set ``` ## Cloud Provider Specifics ### AWS - **Authentication**: Install [AWS CLI](https://aws.amazon.com/cli/) and run `aws configure`. - **Example Configuration**: ```hcl provider "aws" { region = "eu-central-1" } ``` - **Components**: S3 Artifact Store, ECR, and various orchestrators (local, SageMaker, SkyPilot). 
### GCP - **Authentication**: Install [gcloud CLI](https://cloud.google.com/sdk/gcloud) and run `gcloud init`. - **Example Configuration**: ```hcl provider "google" { region = "europe-west3"; project = "my-project" } ``` - **Components**: GCS Artifact Store, Google Artifact Registry, and orchestrators (local, Vertex AI, SkyPilot, Airflow). ### Azure - **Authentication**: Install [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/) and run `az login`. - **Example Configuration**: ```hcl provider "azurerm" { features { resource_group { prevent_deletion_if_contains_resources = false } } } ``` - **Components**: Azure Storage Account, ACR, and orchestrators (local, SkyPilot, AzureML). ## Clean Up To remove all provisioned resources and the ZenML stack: ```shell terraform destroy ``` This documentation provides a comprehensive overview of deploying a cloud stack using Terraform with ZenML, including prerequisites, configuration, and cloud provider specifics. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md === # Deploy a Cloud Stack with ZenML ZenML allows you to deploy a cloud stack with a single click, simplifying the process of configuring your infrastructure. This feature is particularly useful for remote settings, where deploying infrastructure can be complex and time-consuming. ## Getting Started To use the 1-click deployment tool, you need a deployed instance of ZenML (not a local server). You can set up ZenML by following the [deployment guide](../../../getting-started/deploying-zenml/README.md). ### Deployment Options You can deploy your stack via the ZenML dashboard or the CLI. #### Dashboard Deployment 1. Navigate to the stacks page and click "+ New Stack". 2. Select "New Infrastructure". 3. Choose your cloud provider (AWS, GCP, or Azure). **AWS Deployment:** - Select a region and stack name. - Click "Deploy in AWS" to redirect to AWS CloudFormation. - Log in, review, and create the stack. **GCP Deployment:** - Select a region and stack name. - Click "Deploy in GCP" to start a Cloud Shell session. - Review the ZenML GitHub repository and trust it. - Authenticate with GCP, configure deployment, and run the provided script. **Azure Deployment:** - Select a location and stack name. - Click "Deploy in Azure" to start a Cloud Shell session. - Paste the `main.tf` content and run `terraform init --upgrade` and `terraform apply`. #### CLI Deployment Use the following command to deploy via CLI: ```shell zenml stack deploy -p {aws|gcp|azure} ``` ### Infrastructure Overview **AWS Resources:** - S3 bucket (Artifact Store) - ECR (Container Registry) - CloudBuild project (Image Builder) - IAM roles for SageMaker access **GCP Resources:** - GCS bucket (Artifact Store) - GCP Artifact Registry (Container Registry) - Vertex AI (Orchestrator and Step Operator) **Azure Resources:** - Azure Resource Group - Azure Storage Account (Artifact Store) - Azure Container Registry (Container Registry) - AzureML Workspace (Orchestrator and Step Operator) ### Permissions **AWS Permissions:** - S3, ECR, CloudBuild, and SageMaker permissions for the IAM user and role. **GCP Permissions:** - GCS, Artifact Registry, Vertex AI, and Cloud Build permissions for the GCP service account. **Azure Permissions:** - Storage, Container Registry, and AzureML Workspace permissions for the Azure service principal. 
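Once the deployment finishes (via the dashboard flow or the CLI command above), the new stack can be activated and inspected from the CLI; a minimal sketch with an illustrative stack name:

```shell
zenml stack set my-cloud-stack   # name chosen at deployment time
zenml stack describe
```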
With this streamlined process, you can deploy a cloud stack and start running your pipelines in a remote environment with ease. For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/README.md === ### Managing Stacks & Components #### What is a Stack? A **stack** in the ZenML framework represents the infrastructure and tooling configuration for executing pipelines. It consists of various components, each responsible for specific tasks, such as: - **Container Registry**: For managing images. - **Kubernetes Cluster**: Serves as an orchestrator. - **Artifact Store**: For storing artifacts. - **Experiment Tracker**: Like MLflow for tracking experiments. #### Organizing Execution Environments ZenML allows running pipelines across multiple stacks, facilitating testing in different environments: - **Local Development**: Data scientists can experiment locally. - **Staging**: Testing advanced features in a cloud environment. - **Production**: Final deployment on a production-grade stack. **Benefits**: - Prevents accidental staging deployments to production. - Reduces costs by using less powerful resources in staging. - Controls access by restricting permissions to specific stacks. #### Managing Credentials Most stack components require credentials to interact with infrastructure. The recommended method is using **Service Connectors**, which abstract sensitive information and enhance security. **Recommended Roles**: - Limit Service Connector creation to individuals with direct cloud resource access to reduce credential leakage risk and simplify auditing. **Recommended Workflow**: 1. A small group creates Service Connectors. 2. Use one connector for development/staging. 3. Create a separate connector for production to avoid accidental resource usage. #### Deploying and Managing Stacks Deploying MLOps stacks involves several challenges: - Each tool has specific requirements (e.g., a Kubernetes cluster for Kubeflow). - Setting default infrastructure parameters can be complex. - Tools may require additional configurations for secure setups. - Components must have appropriate permissions to communicate. - Resource cleanup post-experimentation is crucial to avoid unnecessary costs. #### Key Documentation Links - [Deploy a Cloud Stack](./deploy-a-cloud-stack.md) - [Register a Cloud Stack](./register-a-cloud-stack.md) - [Deploy a Cloud Stack with Terraform](./deploy-a-cloud-stack-with-terraform.md) - [Export and Install Stack Requirements](./export-stack-requirements.md) - [Reference Secrets in Stack Configuration](./reference-secrets-in-stack-configuration.md) - [Implement a Custom Stack Component](./implement-a-custom-stack-component.md) This documentation provides essential guidance for provisioning, configuring, and extending stacks and components in ZenML. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md === ### Summary of ZenML Stack Registration Documentation **Overview**: ZenML's stack represents infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure and defining components in ZenML. The stack wizard simplifies this by allowing users to register a ZenML cloud stack using existing infrastructure. **Deployment Options**: - **1-Click Deployment Tool**: For users without existing infrastructure. 
- **Terraform Modules**: For users who want to manage infrastructure as code. ### Using the Stack Wizard **Access**: - **Dashboard**: Available through the stacks page. Click "+ New Stack" and select "Use existing Cloud". - **CLI**: Use the command: ```shell zenml stack register -p {aws|gcp|azure} ``` **Service Connector**: Required to register a cloud stack. You can use an existing connector or create a new one. **Auto-Configuration**: The wizard checks for existing credentials in the local environment and offers to use them or configure manually. ### Authentication Methods by Cloud Provider #### AWS - **Options**: - AWS Secret Key - AWS STS Token - AWS IAM Role - AWS Session Token - AWS Federation Token - **Required Fields**: Varies by method, typically includes `aws_access_key_id`, `aws_secret_access_key`, and `region`. #### GCP - **Options**: - GCP User Account - GCP Service Account - GCP External Account - GCP OAuth 2.0 Token - GCP Service Account Impersonation - **Required Fields**: Includes `user_account_json` or `service_account_json`, and `project_id`. #### Azure - **Options**: - Azure Service Principal - Azure Access Token - **Required Fields**: Includes `client_secret`, `tenant_id`, and `client_id`. ### Defining Stack Components You will define three major components: 1. **Artifact Store** 2. **Orchestrator** 3. **Container Registry** For each component, you can choose to: - Reuse existing components connected via the service connector. - Create new components from available resources. ### Conclusion Using the stack wizard, users can efficiently register a cloud stack and begin running pipelines in a remote setting. ================================================== === File: docs/book/how-to/control-logging/enable-or-disable-logs-storing.md === # ZenML Logging Configuration ZenML captures logs during step execution using a logging handler. Users can utilize the standard Python logging module or print statements, which ZenML will log and store. ## Example Code ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") print("World.") ``` Logs are stored in the artifact store of your stack and can be viewed on the dashboard. Note: Logs are not visible if not connected to a cloud artifact store with a service connector. For more details, refer to the [log viewing documentation](./view-logs-on-the-dasbhoard.md). ## Disabling Log Storage 1. **Using Decorators**: - Disable logging for a step: ```python @step(enable_step_logs=False) def my_step() -> None: ... ``` - Disable logging for an entire pipeline: ```python @pipeline(enable_step_logs=False) def my_pipeline(): ... ``` 2. **Using Environment Variable**: Set `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true` in the execution environment. This variable takes precedence over decorator parameters. Example: ```python docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() ``` This configuration allows users to manage log storage effectively based on their needs. ================================================== === File: docs/book/how-to/control-logging/view-logs-on-the-dasbhoard.md === # Viewing Logs on the Dashboard ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will store. 
## Example Code: ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") # Use the logging module. print("World.") # Use print statements as well. ``` Logs are stored in the artifact store of your stack and can be viewed on the dashboard only if the ZenML server has access to the artifact store. This is true in two scenarios: 1. **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. 2. **Deployed ZenML Server**: Logs from a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. For configuration details, refer to the production guide on [remote artifact stores](../../user-guide/production-guide/remote-storage.md). If logs are configured correctly, they will display on the dashboard. **Note**: To disable log storage due to performance or storage constraints, follow the provided instructions [here](./enable-or-disable-logs-storing.md). ================================================== === File: docs/book/how-to/control-logging/disable-rich-traceback.md === ### How to Disable Rich Traceback Output in ZenML By default, ZenML utilizes the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for enhanced traceback output, aiding in debugging. To disable this feature, set the following environment variable: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` This change will result in plain text traceback output. Note that this setting only affects local pipeline runs. To disable rich tracebacks for remote pipeline runs, set the environment variable in the pipeline run environment: ```python docker_settings = DockerSettings(environment={"ZENML_ENABLE_RICH_TRACEBACK": "false"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` For further details, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/how-to/control-logging/disable-colorful-logging.md === ### Disable Colorful Logging in ZenML By default, ZenML enables colorful logging for enhanced readability. To disable this feature, set the following environment variable: ```bash ZENML_LOGGING_COLORS_DISABLED=true ``` Setting this variable in the client environment (e.g., local machine) will disable colorful logging for both local and remote pipeline runs. To disable it only locally while enabling it for remote runs, set the variable in the pipeline run environment: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) # Add to the decorator @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Or configure pipeline options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` For more information, refer to the latest ZenML documentation [here](https://docs.zenml.io). ================================================== === File: docs/book/how-to/control-logging/set-logging-verbosity.md === ### Summary: Setting Logging Verbosity in ZenML By default, ZenML logging verbosity is set to `INFO`. To change this, set the environment variable: ```bash export ZENML_LOGGING_VERBOSITY=INFO ``` Available options include `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. 
Note that setting this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. To control logging verbosity for remote runs, set the variable in the pipeline's environment: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` For further details, refer to the [latest ZenML documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/control-logging/README.md === ### Configuring ZenML's Default Logging Behavior ZenML generates different types of logs across various environments: 1. **ZenML Server Logs**: Produced by the FastAPI server. 2. **Client or Runner Logs**: Generated during pipeline execution, capturing events before, during, and after a pipeline run. 3. **Execution Environment Logs**: Created at the orchestrator level while executing pipeline steps, typically using Python's `logging` module. This section outlines how users can manage logging behavior in these environments. ================================================== === File: docs/book/how-to/data-artifact-management/README.md === # Data and Artifact Management in ZenML This section outlines the management of data and artifacts within ZenML, focusing on key functionalities and processes. ## Key Concepts - **Data Management**: Involves handling datasets used in machine learning workflows, ensuring they are accessible, versioned, and reproducible. - **Artifact Management**: Refers to the storage and retrieval of outputs generated during the ML pipeline, such as models, metrics, and visualizations. ## Important Features 1. **Versioning**: ZenML supports version control for datasets and artifacts, allowing users to track changes and revert to previous states. 2. **Storage Backends**: ZenML integrates with various storage solutions (e.g., S3, GCS) for efficient data and artifact storage. 3. **Data Validation**: Ensures the integrity and quality of datasets before processing, using built-in validation checks. 4. **Artifact Tracking**: Automatically logs artifacts produced during pipeline execution, facilitating easy access and reproducibility. ## Code Example Here’s a simplified example of how to manage data and artifacts in ZenML: ```python from zenml import pipeline @pipeline def my_pipeline(): data = load_data() processed_data = preprocess(data) model = train_model(processed_data) save_artifact(model) # Execute the pipeline my_pipeline.run() ``` ## Conclusion Effective data and artifact management in ZenML enhances reproducibility and collaboration in machine learning projects, ensuring that all components are systematically organized and easily retrievable. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/disabling-visualizations.md === ### Disabling Visualizations in ZenML To disable artifact visualization in ZenML, set `enable_artifact_visualization` to `False` at the pipeline or step level: ```python @step(enable_artifact_visualization=False) def my_step(): ... @pipeline(enable_artifact_visualization=False) def my_pipeline(): ... ``` For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). 
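If you only need to suppress visualizations for a particular run rather than permanently in the decorator, the same option can likely be passed through `with_options`, mirroring how `enable_cache` and `settings` are overridden elsewhere in these docs. A minimal sketch, assuming `enable_artifact_visualization` is accepted by `with_options` like the other configuration flags:

```python
from zenml import pipeline, step

@step
def my_step() -> None:
    ...

@pipeline
def my_pipeline():
    my_step()

if __name__ == "__main__":
    # Turn visualizations off for this run only (assumed to be a valid
    # `with_options` argument, matching the decorator parameter above).
    my_pipeline.with_options(enable_artifact_visualization=False)()
```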
================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/creating-custom-visualizations.md === # Creating Custom Visualizations in ZenML ## Supported Visualization Types ZenML supports the following visualization types: - **HTML:** Embedded HTML visualizations (e.g., data validation reports). - **Image:** Visualizations of image data (e.g., Pillow images). - **CSV:** Tables (e.g., pandas DataFrame output). - **Markdown:** Markdown strings or pages. - **JSON:** JSON strings or objects. ## Methods to Add Custom Visualizations 1. **Special Return Types:** Cast HTML, Markdown, CSV, or JSON data to specific classes in your step. 2. **Custom Materializers:** Define visualization logic for data types by building a custom materializer. 3. **Custom Return Type Class:** Create a custom return type with a corresponding materializer. ### Visualization via Special Return Types To visualize data, return the appropriate type from your step: - `zenml.types.HTMLString` - `zenml.types.MarkdownString` - `zenml.types.CSVString` - `zenml.types.JSONString` **Example: CSV Visualization** ```python from zenml import step from zenml.types import CSVString @step def my_step() -> CSVString: return CSVString("a,b,c\n1,2,3") ``` **Example: Matplotlib Visualization** ```python import matplotlib.pyplot as plt import base64 import io from zenml.types import HTMLString from zenml import step, pipeline @step def create_matplotlib_visualization() -> HTMLString: fig, ax = plt.subplots() ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) ax.set_title('Sample Plot') buf = io.BytesIO() fig.savefig(buf, format='png', bbox_inches='tight', dpi=300) plt.close(fig) image_base64 = base64.b64encode(buf.getvalue()).decode('utf-8') html = f'<img src="data:image/png;base64,{image_base64}">' return HTMLString(html) @pipeline def visualization_pipeline(): create_matplotlib_visualization() if __name__ == "__main__": visualization_pipeline() ```
## Visualization via Materializers To visualize all artifacts of a certain type, override the `save_visualizations()` method in a custom materializer. ### Example: Matplotlib Figure Visualization 1. **Custom Class:** ```python from typing import Any from pydantic import BaseModel class MatplotlibVisualization(BaseModel): figure: Any ``` 2. **Materializer:** ```python import os from typing import Dict from zenml.enums import VisualizationType from zenml.io import fileio from zenml.materializers.base_materializer import BaseMaterializer class MatplotlibMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MatplotlibVisualization,) def save_visualizations(self, data: MatplotlibVisualization) -> Dict[str, VisualizationType]: visualization_path = os.path.join(self.uri, "visualization.png") with fileio.open(visualization_path, 'wb') as f: data.figure.savefig(f, format='png', bbox_inches='tight') return {visualization_path: VisualizationType.IMAGE} ``` 3. **Step:** ```python @step def create_matplotlib_visualization() -> MatplotlibVisualization: fig, ax = plt.subplots() ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) ax.set_title('Sample Plot') return MatplotlibVisualization(figure=fig) ``` ### Workflow 1. The step creates and returns a `MatplotlibVisualization`. 2. ZenML identifies the `MatplotlibMaterializer` and calls `save_visualizations()`. 3. The figure is saved as a PNG in the artifact store. 4. The dashboard displays the PNG when viewing the artifact. For further examples, refer to the Hugging Face datasets materializer in the ZenML GitHub repository.
================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/types-of-visualizations.md === ### Types of Visualizations in ZenML ZenML automatically saves visualizations for various data types, accessible via the ZenML dashboard or Jupyter notebooks using the `artifact.visualize()` method. **Default Visualizations Include:** - Statistical representation of a Pandas DataFrame as a PNG image. - Drift detection reports from: - Evidently - Great Expectations - Whylogs - A Hugging Face datasets viewer embedded as an HTML iframe. For more details, refer to the [latest ZenML documentation](https://docs.zenml.io).
================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/visualizations-in-dashboard.md === ### Summary: Displaying Visualizations in the ZenML Dashboard To display visualizations on the ZenML dashboard, the following steps are necessary: 1. **Service Connector Configuration**: - Visualizations are stored in the artifact store. The ZenML server must have access to this store to display visualizations. - Refer to the [service connector documentation](../../infrastructure-deployment/auth-management/README.md) for configuration details. For AWS S3, see the [S3 artifact store documentation](../../../component-guide/artifact-stores/s3.md). 2. **Local Artifact Store Limitation**: - When using the default/local artifact store with a deployed ZenML, the server cannot access local files, resulting in visualizations not being displayed. Use a service connector with a remote artifact store for visualization access. 3. **Artifact Store Configuration**: - If visualizations from a pipeline run are missing, check if the ZenML server has the necessary dependencies and permissions for the artifact store.
More details can be found in the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores). For the latest documentation, visit [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/README.md === ### ZenML Data Visualization Configuration **Overview**: This documentation covers how to configure ZenML for displaying data visualizations in its dashboard. **Visualizing Artifacts**: ZenML allows for easy association of visualizations with data artifacts. ![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) For more details, refer to the ZenML dashboard documentation. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md === ### Summary of ZenML Artifact Registration Documentation This documentation explains how to register external data as ZenML artifacts for future use, focusing on registering folders, files, and model checkpoints from PyTorch Lightning training runs. #### Registering Existing Data 1. **Register Existing Folder as a ZenML Artifact**: - You can register an entire folder containing data as a ZenML artifact. ```python import os from uuid import uuid4 from pathlib import Path from zenml.client import Client from zenml import register_artifact prefix = Client().active_stack.artifact_store.path preexisting_folder = os.path.join(prefix, f"my_test_folder_{uuid4()}") os.mkdir(preexisting_folder) with open(os.path.join(preexisting_folder, "test_file.txt"), "w") as f: f.write("test") register_artifact(folder_or_file_uri=preexisting_folder, name="my_folder_artifact") temp_artifact_folder_path = Client().get_artifact_version(name_id_or_prefix="my_folder_artifact").load() ``` 2. **Register Existing File as a ZenML Artifact**: - You can also register a single file. ```python preexisting_file = os.path.join(preexisting_folder, "test_file.txt") register_artifact(folder_or_file_uri=preexisting_file, name="my_file_artifact") temp_artifact_file_path = Client().get_artifact_version(name_id_or_prefix="my_file_artifact").load() ``` #### Registering Checkpoints from PyTorch Lightning 1. **Register All Checkpoints**: - Use the `ModelCheckpoint` callback to register all checkpoints during a training run. ```python from pytorch_lightning import Trainer from pytorch_lightning.callbacks import ModelCheckpoint trainer = Trainer(callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1)]) trainer.fit(model) register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` 2. **Register Checkpoints as Separate Artifact Versions**: - Extend the `ModelCheckpoint` to register each checkpoint as a separate artifact version. ```python class ZenMLModelCheckpoint(ModelCheckpoint): def on_train_epoch_end(self, trainer, pl_module): super().on_train_epoch_end(trainer, pl_module) register_artifact(os.path.join(self.dirpath, self.filename_format.format(epoch=trainer.current_epoch)), self.artifact_name) ``` #### Full Example: PyTorch Lightning Training with Checkpoint Linkage The documentation provides a complete example of a pipeline that trains a PyTorch Lightning model and registers checkpoints as artifacts. 
```python @step def train_model(model: LightningModule, train_loader: DataLoader, epochs: int = 1, artifact_name: str = "my_model_ckpts"): chkpt_cb = ZenMLModelCheckpoint(artifact_name=artifact_name) trainer = Trainer(default_root_dir=chkpt_cb.default_root_dir, callbacks=[chkpt_cb]) trainer.fit(model, train_loader) @pipeline(model=Model(name="LightningDemo")) def train_pipeline(artifact_name: str = "my_model_ckpts"): train_loader = get_data() model = get_model() train_model(model, train_loader, 10, artifact_name) predict(get_pipeline_context().model.get_artifact(artifact_name), after=["train_model"]) ``` This example demonstrates how to integrate data loading, model training, and artifact registration into a cohesive pipeline using ZenML and PyTorch Lightning.
================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md === ### Structuring an MLOps Project #### Overview An MLOps project typically consists of multiple pipelines, such as: - **Feature Engineering Pipeline**: Prepares raw data for training. - **Training Pipeline**: Trains models using data from the feature engineering pipeline. - **Inference Pipeline**: Runs predictions on trained models. - **Deployment Pipeline**: Deploys models to production. The structure of these pipelines can vary based on project requirements, and sharing artifacts (models, datasets, metadata) between them is essential. #### Artifact Exchange Patterns **Pattern 1: Artifact Exchange through `Client`** - Use the ZenML Client to transfer artifacts between pipelines. - Example: A feature engineering pipeline produces datasets that are fetched in the training pipeline. ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): train_data, test_data = prepare_data() @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(sklearn_classifier, test_data) ``` *Note*: The artifacts are references, not materialized in memory during the pipeline function. **Pattern 2: Artifact Exchange through a `Model`** - Use ZenML Models as references for artifacts. - Example: A training pipeline (`train_and_promote`) produces models, which are then used in an inference pipeline (`do_predictions`). ```python import pandas as pd from typing_extensions import Annotated from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` *Note*: Caching is disabled on the `predict` step so that it always re-executes and loads the current model artifact; a cached run would otherwise reuse stale predictions.
Alternatively, resolve the artifact at the pipeline level: ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model.get_model_artifact("trained_model") inference_data = load_data() predict(model=model, data=inference_data) if __name__ == "__main__": do_predictions() ``` Both approaches are valid; the choice depends on user preference. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/datasets.md === # Summary of Custom Dataset Classes and Complex Data Flows in ZenML ## Overview ZenML allows for the creation of custom Dataset classes to manage complex data flows and various data sources in machine learning projects. This documentation covers the implementation of these classes and their associated Materializers. ## Custom Dataset Classes Custom Dataset classes encapsulate data loading, processing, and saving logic. They are useful for: 1. Handling multiple data sources (e.g., CSV, databases). 2. Managing complex data structures. 3. Implementing custom data processing. ### Example Implementation A base `Dataset` class is defined, with specific implementations for CSV and BigQuery datasets: ```python from abc import ABC, abstractmethod import pandas as pd from google.cloud import bigquery class Dataset(ABC): @abstractmethod def read_data(self) -> pd.DataFrame: pass class CSVDataset(Dataset): def __init__(self, data_path: str): self.data_path = data_path self.df = None def read_data(self) -> pd.DataFrame: if self.df is None: self.df = pd.read_csv(self.data_path) return self.df class BigQueryDataset(Dataset): def __init__(self, table_id: str, project: Optional[str] = None): self.table_id = table_id self.project = project self.client = bigquery.Client(project=self.project) def read_data(self) -> pd.DataFrame: query = f"SELECT * FROM `{self.table_id}`" return self.client.query(query).to_dataframe() def write_data(self) -> None: job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE") self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config).result() ``` ## Custom Materializers Materializers handle the serialization and deserialization of artifacts. Custom Materializers are necessary for custom Dataset classes: ### CSVDataset Materializer ```python class CSVDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (CSVDataset,) def load(self, data_type: Type[CSVDataset]) -> CSVDataset: # Load CSV data dataset = CSVDataset(temp_path) dataset.read_data() return dataset def save(self, dataset: CSVDataset) -> None: # Save DataFrame to CSV dataset.df.to_csv(temp_path, index=False) ``` ### BigQueryDataset Materializer ```python class BigQueryDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (BigQueryDataset,) def load(self, data_type: Type[BigQueryDataset]) -> BigQueryDataset: # Load metadata and create dataset return BigQueryDataset(metadata["table_id"], metadata["project"]) def save(self, bq_dataset: BigQueryDataset) -> None: # Save metadata json.dump(metadata, f) bq_dataset.write_data() ``` ## Pipeline Management Designing flexible pipelines is essential when working with multiple data sources. 
Below is an example of an ETL pipeline: ```python @pipeline def etl_pipeline(mode: str = "develop"): raw_data = extract_data_local() if mode == "develop" else extract_data_remote(table_id="project.dataset.raw_table") transformed_data = transform(raw_data) ``` ## Best Practices 1. **Common Base Class**: Use a base `Dataset` class to standardize handling of data sources. 2. **Specialized Steps**: Implement distinct steps for loading different datasets while keeping processing steps uniform. 3. **Flexible Pipelines**: Use configuration parameters to adapt pipelines to various data sources. 4. **Modular Design**: Create steps that focus on specific tasks to promote code reuse and maintainability. By following these practices, ZenML pipelines can efficiently manage complex data flows and adapt to evolving project requirements. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md === ### Summary: Managing Big Data with ZenML This documentation outlines strategies for scaling ZenML pipelines to handle large datasets in machine learning projects. It categorizes datasets by size and provides specific techniques for each category. #### Dataset Size Thresholds 1. **Small datasets (up to a few GB)**: Handled in-memory with pandas. 2. **Medium datasets (up to tens of GB)**: Require chunking or out-of-core processing. 3. **Large datasets (hundreds of GB or more)**: Necessitate distributed processing frameworks. #### Strategies for Small Datasets - **Efficient Data Formats**: Use formats like Parquet instead of CSV. ```python import pyarrow.parquet as pq class ParquetDataset(Dataset): def read_data(self) -> pd.DataFrame: return pq.read_table(self.data_path).to_pandas() ``` - **Data Sampling**: Implement sampling methods. ```python class SampleableDataset(Dataset): def sample_data(self, fraction: float = 0.1) -> pd.DataFrame: return self.read_data().sample(frac=fraction) ``` - **Optimize Pandas Operations**: Use efficient operations to minimize memory usage. ```python @step def optimize_processing(df: pd.DataFrame) -> pd.DataFrame: df['new_column'] = df['column1'] + df['column2'] return df ``` #### Strategies for Medium Datasets - **Chunking for CSV Datasets**: Process large files in chunks. ```python class ChunkedCSVDataset(Dataset): def read_data(self): for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size): yield chunk ``` - **Data Warehouses**: Use services like Google BigQuery for distributed processing. ```python @step def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset: client = bigquery.Client() query = "SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1" query_job = client.query(query) return BigQueryDataset(table_id=result_table_id) ``` #### Strategies for Very Large Datasets - **Distributed Computing Frameworks**: Use Apache Spark, Ray, or Dask for large datasets. 
**Apache Spark Example**: ```python from pyspark.sql import SparkSession @step def process_with_spark(input_data: str) -> None: spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate() df = spark.read.format("csv").option("header", "true").load(input_data) df.groupBy("column1").agg({"column2": "mean"}).write.csv("output_path") spark.stop() ``` **Ray Example**: ```python import ray @step def process_with_ray(input_data: str) -> None: ray.init() results = ray.get([process_partition.remote(part) for part in partitions]) ray.shutdown() ``` **Dask Example**: ```python import dask.dataframe as dd @step def create_dask_dataframe(): return dd.from_pandas(pd.DataFrame({'A': range(1000)}), npartitions=4) ``` **Numba Example**: ```python from numba import jit @jit(nopython=True) def numba_function(x): return x * x + 2 * x - 1 ``` #### Important Considerations - Ensure the execution environment has necessary frameworks installed. - Manage resources effectively, especially with distributed frameworks. - Implement error handling and cleanup for Spark and Ray. - Consider data I/O methods for large datasets. #### Choosing the Right Scaling Strategy - Start with simpler strategies for smaller datasets and scale up. - Match processing complexity with the appropriate tools. - Assess infrastructure and team expertise when selecting technologies. By following these strategies, ZenML pipelines can efficiently manage datasets of any size, ensuring scalable machine learning workflows. For more details on custom Dataset classes, refer to the [custom dataset classes](datasets.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md === ### Summary of ZenML Documentation on Unmaterialized Artifacts **Overview:** ZenML pipelines are data-centric, where steps are connected through their input and output artifacts. Materializers manage how artifacts are serialized and deserialized during this process. However, there are cases when you may want to skip materialization and use a reference to an artifact instead. **Warning:** Skipping materialization can lead to unintended consequences for downstream tasks. Use this feature only when necessary. **Unmaterialized Artifacts:** An unmaterialized artifact is represented by `zenml.materializers.UnmaterializedArtifact`, which includes a `uri` property pointing to the artifact's storage path. To use an unmaterialized artifact, specify `UnmaterializedArtifact` as the type in the step definition. **Example Code:** ```python from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import step, pipeline from typing_extensions import Annotated from typing import Dict, List, Tuple @step def step_1() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_2() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_3(dict_: Dict, list_: List) -> None: assert isinstance(dict_, dict) assert isinstance(list_, list) @step def step_4(dict_: UnmaterializedArtifact, list_: UnmaterializedArtifact) -> None: print(dict_.uri) print(list_.uri) @pipeline def example_pipeline(): step_3(*step_1()) step_4(*step_2()) example_pipeline() ``` **Pipeline Structure:** - `s1` and `s2` produce identical artifacts. - `s3` consumes materialized artifacts. 
- `s4` consumes unmaterialized artifacts, accessing their paths directly via `dict_.uri` and `list_.uri`. For further details on using `UnmaterializedArtifact`, refer to the ZenML documentation.
================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/README.md === This section groups the advanced data and artifact management use cases covered below: registering existing data as artifacts, passing artifacts between pipelines, custom dataset classes, scaling to big data, and unmaterialized artifacts.
================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md === # Summary of ZenML Artifact Loading Documentation ZenML pipelines typically consume artifacts produced by one another, but external data may also need to be integrated. For artifacts from non-ZenML sources, use `ExternalArtifact`. However, for exchanging data between ZenML pipelines, late materialization is essential. This allows passing artifacts that do not yet exist at the time of pipeline compilation. ### Key Use Cases for Artifact Exchange: 1. Grouping data products using ZenML Models. 2. Utilizing the ZenML Client to manage artifacts. **Recommendation:** Use models for grouping and accessing artifacts across pipelines. For loading artifacts from a ZenML Model, refer to the relevant documentation. ## Using Client Methods for Artifact Exchange If not using the Model Control Plane, late materialization can still facilitate data exchange. Below is a simplified version of the `do_predictions` pipeline code: ```python from typing import Annotated from zenml import step, pipeline from zenml.client import Client import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model1: ClassifierMixin, model2: ClassifierMixin, model1_metric: float, model2_metric: float, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: predictions = pd.Series(model1.predict(data)) if model1_metric < model2_metric else pd.Series(model2.predict(data)) return predictions @step def load_data() -> pd.DataFrame: ... @pipeline def do_predictions(): model_42 = Client().get_artifact_version("trained_model", version="42") metric_42 = model_42.run_metadata["MSE"].value model_latest = Client().get_artifact_version("trained_model") metric_latest = model_latest.run_metadata["MSE"].value inference_data = load_data() predict(model1=model_42, model2=model_latest, model1_metric=metric_42, model2_metric=metric_latest, data=inference_data) if __name__ == "__main__": do_predictions() ``` ### Key Points: - The `predict` step compares model performance using MSE metrics. - The `load_data` step is responsible for loading inference data. - Artifact retrieval via `Client().get_artifact_version()` is executed at runtime, ensuring the latest versions are used during execution rather than at compilation.
================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md === ### Summary of ZenML Documentation on Fetching Artifacts This documentation explains how to retrieve arbitrary artifacts in a ZenML step, emphasizing that not all artifacts must originate from direct upstream steps. #### Key Points: - Artifacts can be fetched from other upstream steps or different pipelines using the ZenML client. - The metadata guide provides additional context on how to log and track metadata.
#### Example Code: ```python from zenml.client import Client from zenml import step @step def my_step(): client = Client() # Fetch an artifact by name and version output = client.get_artifact_version("my_dataset", "my_version") accuracy = output.run_metadata["accuracy"].value ``` This method allows access to pre-existing artifacts stored in the artifact store, facilitating the use of artifacts from various sources. #### Additional Resources: - [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md): Information on the `ExternalArtifact` type and artifact transfer between steps. For the latest documentation, visit the [ZenML documentation site](https://docs.zenml.io). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md === ### Summary: Using Materializers to Pass Custom Data Types in ZenML #### Overview ZenML pipelines are structured around data-centric principles, where steps are connected through their inputs and outputs. **Materializers** are key components that manage how artifacts are serialized and deserialized when stored in the artifact store. #### Built-In Materializers ZenML includes several built-in materializers for common data types, which automatically handle serialization without user intervention. Here are some examples: | Materializer | Handled Data Types | Storage Format | |--------------|---------------------|----------------| | `BuiltInMaterializer` | `bool`, `float`, `int`, `str`, `None` | `.json` | | `BytesMaterializer` | `bytes` | `.txt` | | `NumpyMaterializer` | `np.ndarray` | `.npy` | | `PandasMaterializer` | `pd.DataFrame`, `pd.Series` | `.csv` or `.gzip` | | `PydanticMaterializer` | `pydantic.BaseModel` | `.json` | #### Integration Materializers ZenML also provides integration-specific materializers that can be activated by installing the respective integration. Examples include: | Integration | Materializer | Handled Data Types | Storage Format | |-------------|--------------|---------------------|----------------| | `bentoml` | `BentoMaterializer` | `bentoml.Bento` | `.bento` | | `huggingface` | `HFDatasetMaterializer` | `datasets.Dataset` | Directory | #### Custom Materializers To use custom data types, you can define a custom materializer by subclassing `BaseMaterializer`. You need to specify `ASSOCIATED_TYPES` and implement `load()` and `save()` methods. **Example:** ```python from zenml.materializers.base_materializer import BaseMaterializer from zenml.enums import ArtifactType class MyObj: ... 
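# For illustration only (assumed, not part of the original snippet): the
# materializer below constructs MyObj(name=...) and reads my_obj.name, so a
# minimal MyObj could simply store that string:
#     class MyObj:
#         def __init__(self, name: str):
#             self.name = name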
class MyMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MyObj,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA def load(self, data_type: Type[MyObj]) -> MyObj: with self.artifact_store.open('data.txt', 'r') as f: name = f.read() return MyObj(name=name) def save(self, my_obj: MyObj) -> None: with self.artifact_store.open('data.txt', 'w') as f: f.write(my_obj.name) ``` #### Configuring Steps and Pipelines You can configure which materializer to use at the step level: ```python @step(output_materializers=MyMaterializer) def my_first_step() -> MyObj: return MyObj("my_object") ``` For multiple outputs, use a dictionary: ```python @step(output_materializers={"1": MyMaterializer1, "2": MyMaterializer2}) def my_first_step() -> Tuple[Annotated[MyObj1, "1"], Annotated[MyObj2, "2"]]: return MyObj1(), MyObj2() ``` You can also define materializers globally for all pipelines: ```python materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer) ``` #### Implementing Materializer Methods - **`load(data_type)`**: Reads data from the artifact store. - **`save(data)`**: Writes data to the artifact store. - **`save_visualizations(data)`**: Optionally saves visualizations. - **`extract_metadata(data)`**: Optionally extracts metadata. #### Example Pipeline Here’s how to implement a simple pipeline using a custom materializer: ```python @step def my_first_step() -> MyObj: return MyObj("my_object") @step def my_second_step(my_obj: MyObj) -> None: logging.info(f"Object passed: {my_obj.name}") @pipeline def first_pipeline(): output_1 = my_first_step() my_second_step(output_1) first_pipeline() ``` #### Conclusion Custom materializers in ZenML allow for robust handling of custom data types, ensuring that artifacts are serialized and deserialized correctly across steps in a pipeline. Proper implementation of these materializers enhances the reliability and flexibility of data workflows in ZenML. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/delete-an-artifact.md === ### Summary: Deleting Artifacts in ZenML To delete artifacts in ZenML, direct deletion is not supported due to potential database integrity issues. However, you can remove artifacts that are no longer referenced by any pipeline runs using the following command: ```shell zenml artifact prune ``` This command deletes artifacts from the artifact store and the database entry by default. You can modify this behavior with the following flags: - `--only-artifact`: Deletes only the artifact. - `--only-metadata`: Deletes only the metadata. If you encounter errors while pruning artifacts (often due to local storage issues), you can bypass these errors by adding the `--ignore-errors` flag. Note that warning messages will still be displayed during the process. For the latest documentation, refer to the [up-to-date URL](https://docs.zenml.io). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifact-versioning.md === ### ZenML Data Storage Overview ZenML integrates data versioning and lineage tracking into its core functionality, automatically managing artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them through a dashboard, enhancing insights and reproducibility in machine learning workflows. 
#### Artifact Creation and Caching - Each pipeline run generates a new directory in the artifact store for each step, based on changes in inputs, outputs, parameters, or configurations. - If a step is new or modified, ZenML creates a unique directory structure with a new ID. If unchanged, it may cache the step to save time and resources, allowing focus on experimentation. - ZenML enables tracing artifacts back to their origins, ensuring reproducibility and identifying potential bottlenecks in pipelines. For artifact management details, refer to the [artifact versioning and configuration documentation](../../../user-guide/starter-guide/manage-artifacts.md). #### Materializers Materializers handle the serialization and deserialization of artifacts, ensuring consistent storage and retrieval. They store data in unique directories within the artifact store and can be customized for specific data types or storage systems. - ZenML provides built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. - Users can create custom materializers by extending the `BaseMaterializer` class. **Important Note:** The built-in `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions and potential security risks from executing arbitrary code. For robust applications, consider developing a custom materializer. #### Example When a pipeline runs, ZenML uses materializers to save and load artifacts via the ZenML `fileio` system, facilitating artifact caching and lineage tracking. An example of a default materializer (the `numpy` materializer) can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). For further details on materializers, refer to the [materializers documentation](handle-custom-data-types.md). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md === ### Summary of ZenML Documentation on Returning Multiple Outputs **Purpose:** The `Annotated` type in ZenML allows users to return multiple outputs from a step, each with a specific name for easy retrieval and improved dashboard readability. **Key Points:** - **Functionality:** Using `Annotated`, you can name outputs of a step, aiding in artifact retrieval and enhancing pipeline dashboard clarity. - **Example Code:** ```python from typing import Annotated, Tuple import pandas as pd from zenml import step from sklearn.model_selection import train_test_split @step def clean_data(data: pd.DataFrame) -> Tuple[ Annotated[pd.DataFrame, "x_train"], Annotated[pd.DataFrame, "x_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: x = data.drop("target", axis=1) y = data["target"] return train_test_split(x, y, test_size=0.2, random_state=42) ``` - **Functionality Breakdown:** - The `clean_data` function takes a `DataFrame` and returns a tuple containing training and testing sets for features and target variables. - Outputs are annotated for clarity, making it easier to identify them in the pipeline. This concise usage of `Annotated` enhances the organization and usability of outputs in ZenML pipelines. 
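Because each annotated output is stored as a separately named artifact, it can later be fetched by name outside the pipeline. A minimal sketch using the `Client` API shown in the artifact-fetching sections of this summary (it assumes the `clean_data` step above has already produced at least one version of these artifacts):

```python
from zenml.client import Client

client = Client()
# Fetch the latest version of each named output and load it into memory.
x_train = client.get_artifact_version("x_train").load()
y_train = client.get_artifact_version("y_train").load()
print(x_train.shape, y_train.shape)
```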
================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/tagging.md === ### ZenML Tagging Documentation Summary **Overview**: ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow and discoverability. #### Assigning Tags to Artifacts - **Python SDK**: Use the `tags` property of `ArtifactConfig` to assign tags to artifacts. ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> ( Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])] ): ... ``` - **CLI**: Use the `zenml artifacts` command to tag artifacts. ```shell zenml artifacts update iris_dataset -t sklearn zenml artifacts versions update iris_dataset raw_2023 -t sklearn ``` #### Assigning Tags to Models - Tags can be added to models as key-value pairs when creating a model version. ```python from zenml.models import Model tags = ["experiment", "v1", "classification-task"] model = Model(name="iris_classifier", version="1.0.0", tags=tags) @pipeline(model=model) def my_pipeline(...): ... ``` - **Creating/Updating Models**: Use the Client to create or register models with tags. ```python from zenml.client import Client Client().create_model(name="iris_logistic_regression", tags=["classification", "iris-dataset"]) Client().create_model_version(model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"]) ``` - **CLI for Existing Models**: Use the following commands to tag existing models and versions. ```shell zenml model update iris_logistic_regression --tag "classification" zenml model version update iris_logistic_regression 2 --tag "experiment3" ``` **Note**: During pipeline runs, models may be created implicitly without tags. Users can manage tags via the SDK or ZenML Pro UI. For the latest documentation, refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifacts-naming.md === ### ZenML Artifact Naming Overview ZenML allows for flexible naming of artifacts in pipelines, which is crucial when reusing steps with different inputs. The naming convention can be static or dynamic, and ZenML uses type annotations to determine artifact names. Artifacts with the same name receive incremented version numbers. #### Naming Strategies 1. **Static Naming**: Defined directly as string literals. ```python @step def static_single() -> Annotated[str, "static_output_name"]: return "null" ``` 2. **Dynamic Naming**: - **Using Standard Placeholders**: - `{date}`: Current date (e.g., `2024_11_18`) - `{time}`: Current time (e.g., `11_07_09_326492`) ```python @step def dynamic_single_string() -> Annotated[str, "name_{date}_{time}"]: return "null" ``` - **Using Custom Placeholders**: Defined via the `substitutions` parameter. ```python @step(substitutions={"custom_placeholder": "some_substitute"}) def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]: return "null" ``` - **Using `with_options`**: ```python @step def extract_data(source: str) -> Annotated[str, "{stage}_dataset"]: return "my data" @pipeline def extraction_pipeline(): extract_data.with_options(substitutions={"stage": "train"})(source="s3://train") extract_data.with_options(substitutions={"stage": "test"})(source="s3://test") ``` **Substitution Scope**: - Can be set in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`. 3. 
**Multiple Output Handling**: Combine naming options for multiple artifacts. ```python @step def mixed_tuple() -> Tuple[ Annotated[str, "static_output_name"], Annotated[str, "name_{date}_{time}"], ]: return "static_namer", "str_namer" ``` #### Caching Behavior When caching is enabled, artifact names remain consistent across runs. Example: ```python @step(substitutions={"custom_placeholder": "resolution"}) def demo() -> Tuple[ Annotated[int, "name_{date}_{time}"], Annotated[int, "name_{custom_placeholder}"], ]: return 42, 43 @pipeline def my_pipeline(): demo() if __name__ == "__main__": run_without_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=False)() run_with_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=True)() ``` Both runs will yield consistent output artifact names, demonstrating the caching functionality. ### Summary ZenML provides a robust framework for naming artifacts in pipelines, allowing for both static and dynamic strategies, including the use of placeholders and custom substitutions. Properly managing artifact names is essential for tracking outputs effectively, especially in complex workflows. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/README.md === ### Summary of ZenML Step Outputs and Pipeline In ZenML, step outputs are stored in an artifact store, facilitating caching, lineage, and auditability. Utilizing type annotations for outputs enhances transparency, aids in data passing between steps, and allows for serialization/deserialization (termed 'materialize' in ZenML). #### Code Example ```python @step def load_data(parameter: int) -> Dict[str, Any]: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: Dict[str, Any]) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(parameter: int): dataset = load_data(parameter=parameter) train_model(dataset) ``` #### Key Points: - **Steps**: `load_data` retrieves training data and labels; `train_model` processes this data to train a model. - **Pipeline**: `simple_ml_pipeline` connects the two steps, passing output from `load_data` to `train_model`, illustrating data flow in ZenML. ================================================== === File: docs/book/getting-started/core-concepts.md === # ZenML Core Concepts Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines, enabling collaboration among data scientists, ML engineers, and MLOps developers. The framework is structured around three main threads: 1. **Development**: Focuses on designing ML workflows. 2. **Execution**: Utilizes MLOps tooling and infrastructure during workflow execution. 3. **Management**: Establishes and maintains efficient, production-grade solutions. ## 1. Development ### Steps - Functions marked with the `@step` decorator. - Example: ```python @step def step_1() -> str: return "world" ``` ### Pipelines - Composed of steps, defined using decorators or classes. - Example: ```python @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) ``` ### Artifacts - Represent data inputs and outputs, tracked by ZenML in an artifact store. 
- Produced by steps and stored after execution. ### Models - Represent outputs of training processes, including weights and metadata. ### Materializers - Define serialization/deserialization of artifacts using the `BaseMaterializer` class. ### Parameters & Settings - Steps accept parameters, which are stored by ZenML for reproducibility. ### Model Versions - A model can have multiple versions, linking all entities for centralized management. ## 2. Execution ### Stacks & Components - **Stacks**: Collections of components (orchestrators, artifact stores) for pipeline execution. - **Orchestrator**: Coordinates step execution in a pipeline. - **Artifact Store**: Houses and tracks data passing through the pipeline. ### Flavor - Base abstractions for stack components, allowing users to create custom solutions. ### Stack Switching - Easily switch between local and cloud stacks using a CLI command. ## 3. Management ### ZenML Server - Required for remote stack components and managing ZenML entities (pipelines, models). ### Server Deployment - Options include ZenML Pro SaaS or self-hosted deployment. ### Metadata Tracking - ZenML Server tracks metadata for pipeline runs, aiding in troubleshooting. ### Secrets Management - Centralized store for sensitive data, configurable with various backends (AWS, GCP, Azure). ### Collaboration - Facilitates teamwork among diverse roles in MLOps through shared resources. ### Dashboard - Visual interface for managing pipelines, stacks, and components, enhancing collaboration. ### VS Code Extension - Allows interaction with ZenML stacks and resources directly from the VS Code editor. This summary encapsulates the essential concepts and functionalities of ZenML, enabling users to understand its structure and capabilities in MLOps. ================================================== === File: docs/book/getting-started/system-architectures.md === # ZenML System Architecture Overview This guide outlines the deployment options for ZenML, including ZenML OSS (self-hosted), ZenML Pro (SaaS or self-hosted), and their respective components. ## ZenML OSS (Self-hosted) - **ZenML OSS Server**: A FastAPI application managing metadata for pipelines, artifacts, and stacks. - **OSS Metadata Store**: Stores all tenant metadata, including ML tracking and versioning information. - **OSS Dashboard**: A ReactJS app displaying pipelines and runs. - **Secrets Store**: Secure storage for credentials needed to access infrastructure services. In ZenML Pro, this is enhanced with additional functionality. ZenML OSS is available under the Apache 2.0 license. For deployment instructions, refer to the [deployment guide](./deploying-zenml/README.md). ## ZenML Pro (SaaS or Self-hosted) ZenML Pro enhances OSS with additional components: - **ZenML Pro Control Plane**: Central entity managing all tenants. - **Pro Dashboard**: Enhanced dashboard with additional features. - **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management data. - **Pro Add-ons**: Python modules for extended functionality. - **Identity Provider**: Supports flexible authentication via Auth0 for SaaS or custom OIDC for self-hosted setups. ZenML Pro can be deployed on various infrastructures, from SaaS to air-gapped environments. ### ZenML Pro SaaS Architecture - All ZenML services are hosted by ZenML, with customer secrets managed by the ZenML Pro Control Plane. - ML metadata is stored on ZenML infrastructure, while actual ML data artifacts reside on customer cloud storage. 
- A hybrid option allows customers to store secrets on their side while connecting to the ZenML server. ### ZenML Pro Self-Hosted Architecture - All services, data, and secrets are deployed on the customer's cloud, suitable for air-gapped deployments. For more information on core concepts for ZenML Pro, refer to the [core concepts guide](../getting-started/zenml-pro/core-concepts.md). Interested in ZenML Pro? [Sign up](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) for a free 14-day trial. ================================================== === File: docs/book/getting-started/installation.md === # ZenML Installation and Getting Started ## Installation **ZenML** is a Python package that can be installed via `pip`: ```shell pip install zenml ``` **Supported Python Versions:** ZenML supports **Python 3.9, 3.10, 3.11, and 3.12**. ## Dashboard Installation To use the ZenML web dashboard locally, install the optional server dependencies: ```shell pip install "zenml[server]" ``` **Recommendation:** Use a virtual environment, such as [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) or [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv). ## MacOS with Apple Silicon (M1, M2) Set the following environment variable to maintain server connections: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is only necessary for local server usage. ## Nightly Builds Install nightly builds using: ```shell pip install zenml-nightly ``` These builds are from the latest `develop` branch and may not be stable. ## Verifying Installation Check installation success via Bash: ```bash zenml version ``` Or in Python: ```python import zenml print(zenml.__version__) ``` For more details, visit the [PyPi package page](https://pypi.org/project/zenml). ## Running with Docker ZenML is available as a Docker image. Start a bash environment with: ```shell docker run -it zenmldocker/zenml /bin/bash ``` To run the ZenML server: ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ``` ## Deploying the Server You can run ZenML locally with: ```shell pip install "zenml[server]" zenml login --local # opens the dashboard locally ``` For advanced features, deploy a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or signing up for a free [ZenML Pro](https://cloud.zenml.io/signup?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) account. ================================================== === File: docs/book/getting-started/zenml-pro/teams.md === ### Summary of ZenML Pro Teams Documentation **Overview:** ZenML Pro introduces "Teams" to manage user groups within organizations and tenants, enhancing user management and access control in MLOps workflows. #### Key Benefits of Teams: 1. **Group Management**: Manage permissions for multiple users simultaneously. 2. **Organizational Structure**: Align teams with your company's structure. 3. **Simplified Access Control**: Assign roles to teams instead of individual users. #### Creating and Managing Teams: - **Creation Steps**: 1. Navigate to Organization settings. 2. Click on the "Teams" tab. 3. Use "Add team" to create a new team. - **Required Information**: - Team name - Description (optional) - Initial team members #### Adding Users to Teams: 1. Go to the "Teams" tab in Organization settings. 2. Select the desired team. 3. Click "Add Members." 4. Choose users to add. 
#### Assigning Teams to Tenants: 1. Go to the tenant settings page. 2. Click on the "Members" tab, then the "Teams" tab. 3. Select "Add Team." 4. Choose the team and assign a role. #### Team Roles and Permissions: - Roles assigned to a team within a tenant grant all members the associated permissions (e.g., Admin, Editor, Viewer). - Example: Assigning "Editor" role to a team grants all members Editor permissions in that tenant. #### Best Practices: 1. **Reflect Your Organization**: Create teams that mirror your company's structure. 2. **Combine with Custom Roles**: Utilize custom roles for detailed access control. 3. **Regular Audits**: Periodically review team memberships and roles. 4. **Document Team Purposes**: Keep clear documentation on each team's purpose and projects. By utilizing Teams in ZenML Pro, organizations can enhance user management, streamline access control, and better organize MLOps workflows. For the latest documentation, please refer to [ZenML Documentation](https://docs.zenml.io). ================================================== === File: docs/book/getting-started/zenml-pro/organization.md === # Organizations in ZenML Pro In ZenML Pro, an **Organization** is the primary structure within the ZenML Cloud environment, encompassing a group of users and one or more [tenants](./tenants.md). ## Inviting Team Members To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Once part of an organization, users can access all tenants they are authorized for. ## Managing Organization Settings Organization-level settings include billing information and member roles. Access these settings by clicking your profile picture in the top right corner and selecting "Settings". ## API Operations Additional operations related to Organizations can be performed via the API. For more details, visit [ZenML Cloud API](https://cloudapi.zenml.io/). ================================================== === File: docs/book/getting-started/zenml-pro/self-hosted.md === # ZenML Pro Self-Hosted Deployment Guide This document outlines the installation of ZenML Pro, including the Control Plane and Tenant servers, in a Kubernetes cluster. ## Overview ZenML Pro requires access to private container images and infrastructure components: a Kubernetes cluster, a database server, load balancer, Ingress controller, HTTPS certificates, and DNS rules. Note that Single Sign-On (SSO) and Run Templates features are not available in the on-prem version. ## Preparation and Prerequisites ### Software Artifacts - **Control Plane Artifacts**: - API: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api` - Dashboard: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` - Helm Chart: `oci://public.ecr.aws/zenml/zenml-pro` - **Tenant Server Artifacts**: - Server: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` - OSS Helm Chart: `oci://public.ecr.aws/zenml/zenml` - **Client Artifacts**: - Client Image: `zenmldocker/zenml` (Docker Hub) ### Accessing ZenML Pro Container Images Currently, images are available only in AWS ECR. Temporary credentials can be requested from ZenML support. #### AWS Access Steps: 1. **Create AWS Account**: Follow the AWS Free Tier page instructions. 2. **Create IAM User/Role**: Grant `AmazonEC2ContainerRegistryReadOnly` permissions. 3. **Authenticate Docker Client**: Use AWS CLI to log in to ECR. ### Air-Gapped Installation For environments without internet access: 1. 
**Prepare an Internet-Connected Machine**: Download the required artifacts.
2. **Transfer Artifacts**: Move them with a USB drive or another secure transfer method.
3. **Load Artifacts**: Use Docker to load the images and push them to an internal registry.
4. **Update Configuration**: Modify the Helm values to point to the internal registry.

### Infrastructure Requirements

1. **Kubernetes Cluster**: A functional cluster is required.
2. **Database Server**: MySQL for tenant servers; either MySQL or Postgres for the Control Plane.
3. **Ingress Controller**: Required for HTTP(S) traffic routing.
4. **Domain Name**: An FQDN for the Control Plane and tenants.
5. **SSL Certificate**: Configure SSL termination for the Ingress.

## Stage 1: Install the ZenML Pro Control Plane

### Configure the Helm Chart

Customize the Helm chart using `values.yaml` for settings such as database credentials and the server URL.

### Install the Helm Chart

Run the following command to install the ZenML Pro Control Plane (replace `<version>` with the chart version provided by ZenML):

```bash
helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version <version> --values my-values.yaml
```

### Verify Installation

Check the status of the deployment:

```bash
kubectl -n zenml-pro get all
```

### Onboard Additional Users

1. Retrieve the admin password:
```bash
kubectl get secret --namespace zenml-pro zenml-pro -o jsonpath="{.data.ZENML_CLOUD_ADMIN_PASSWORD}" | base64 --decode; echo
```
2. Create a `users.yml` file with user details.
3. Use the `create_users.py` script to onboard the users.

## Stage 2: Enroll and Deploy ZenML Pro Tenants

### Enroll a Tenant

Run the `enroll-tenant.py` script to create a tenant entry and generate a Helm `values.yaml` file.

### Deploy the ZenML Pro Tenant Server

Use the generated values file to deploy the tenant (replace `<tenant-id>`, `<version>`, and `<values-file>` with the values produced during enrollment):

```bash
helm --namespace zenml-pro-<tenant-id> upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version <version> --values <values-file>
```

### Accessing the Tenant

Log in as an organization member and follow the checklist to unlock the full dashboard.

This guide covers the steps and configuration needed for a successful self-hosted ZenML Pro installation.

==================================================
=== File: docs/book/getting-started/zenml-pro/core-concepts.md ===

# ZenML Pro Core Concepts

ZenML Pro features a distinct entity hierarchy compared to the open-source version. The key components are:

- **Organization**: A collection of users, teams, and tenants.
- **Tenant**: An isolated ZenML server deployment containing all project resources.
- **Teams**: Groups of users within an organization, used for resource management.
- **Users**: Individual accounts on a ZenML Pro instance.
- **Roles**: Define user permissions within a tenant or organization.
- **Templates**: Configurable pipeline runs that can be re-executed.

For detailed information, refer to the linked documents:

| Concept             | Description                             | Link                                 |
|---------------------|-----------------------------------------|--------------------------------------|
| Organizations       | Managing organizations in ZenML Pro     | [organization.md](./organization.md) |
| Tenants             | Working with tenants in ZenML Pro       | [tenants.md](./tenants.md)           |
| Teams               | Team management in ZenML Pro            | [teams.md](./teams.md)               |
| Roles & Permissions | Role-based access control in ZenML Pro  | [roles.md](./roles.md)               |

==================================================
=== File: docs/book/getting-started/zenml-pro/pro-api.md ===

# ZenML Pro API Overview

ZenML Pro provides a RESTful API for managing resources, applicable to both SaaS and self-hosted instances. The API adheres to the OpenAPI 3.1.0 specification and is accessible at [https://cloudapi.zenml.io](https://cloudapi.zenml.io).

## Key Features

- **Tenant Management**: Create, list, get, and update tenants.
- **Organization Management**: Manage organizations with similar operations.
- **User Management**: List users, get current user details, and update user information.
- **Role-Based Access Control (RBAC)**: Create roles, assign roles, and check permissions.
- **Authentication**: Requires user login via the ZenML Pro interface. Programmatic access is currently unavailable.

## Important Endpoints

### Tenant Management
- `GET /tenants`: List tenants
- `POST /tenants`: Create a tenant
- `GET /tenants/{tenant_id}`: Get tenant details
- `PATCH /tenants/{tenant_id}`: Update a tenant

### Organization Management
- `GET /organizations`: List organizations
- `POST /organizations`: Create an organization
- `GET /organizations/{organization_id}`: Get organization details
- `PATCH /organizations/{organization_id}`: Update an organization

### User Management
- `GET /users`: List users
- `GET /users/me`: Get current user
- `PATCH /users/{user_id}`: Update user

### Role-Based Access Control
- `POST /roles`: Create a role
- `POST /roles/{role_id}/assignments`: Assign a role
- `GET /permissions`: Check permissions

## Error Handling

Standard HTTP status codes indicate request outcomes. Error responses include messages and additional details.

## Rate Limiting

The API may enforce rate limits, returning a 429 status code for excessive requests. Implement backoff and retry logic in applications.

For comprehensive details, refer to the full API documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io).

==================================================
=== File: docs/book/getting-started/zenml-pro/roles.md ===

# ZenML Pro: Roles and Permissions Summary

ZenML Pro employs a role-based access control (RBAC) system to manage permissions for users and teams. This guide outlines the available roles, assignment procedures, and custom role creation.

## Organization-Level Roles

Three predefined roles exist at the organization level:

1. **Org Admin**: Full control; can manage members, tenants, billing, and assign roles.
2. **Org Editor**: Can manage tenants and teams but lacks access to subscription info and cannot delete the organization.
3. **Org Viewer**: Read-only access to tenants.

### Assigning Organization Roles

- Navigate to Organization settings > Members tab.
- Update roles or use "Add members" to include new users.

**Notes**:
- Admins can add themselves to any tenant role.
- Editors and viewers cannot add themselves to tenants they don't belong to.
- Custom organization roles can be created via the [ZenML Pro API](https://cloudapi.zenml.io/).
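
As an illustration of how the RBAC endpoints documented above fit together, the sketch below creates a custom organization role and assigns it to a user through the Pro API. Treat it as a sketch only: the request payload fields (`name`, `description`, `permissions`, `user_id`), the bearer-token header, and the response shape are assumptions, not documented API details, and programmatic access may not be generally available. Check the OpenAPI specification at https://cloudapi.zenml.io before relying on it.

```python
import requests

# Assumed values; replace with your own. Only the endpoints themselves
# (POST /roles, POST /roles/{role_id}/assignments) come from the API
# reference above; the payload schema and auth header are assumptions.
BASE_URL = "https://cloudapi.zenml.io"
HEADERS = {"Authorization": "Bearer <your-session-token>"}  # hypothetical token

# Create a custom organization-level role.
response = requests.post(
    f"{BASE_URL}/roles",
    headers=HEADERS,
    json={
        "name": "ml-platform-auditor",                 # hypothetical role name
        "description": "Read-only access for audits",  # hypothetical field
        "permissions": ["tenant:read"],                 # hypothetical permission id
    },
)
response.raise_for_status()
role_id = response.json()["id"]  # assumes the response includes an "id" field

# Assign the new role to a user.
requests.post(
    f"{BASE_URL}/roles/{role_id}/assignments",
    headers=HEADERS,
    json={"user_id": "<user-uuid>"},  # hypothetical assignment payload
).raise_for_status()
```

If the requests fail with validation errors, compare the payloads against the `POST /roles` and assignment schemas in the published OpenAPI specification.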

## Tenant-Level Roles

Tenant roles dictate user permissions within a specific tenant. Predefined roles include:

1. **Admin**: Full control over tenant resources.
2. **Editor**: Can create and share resources but cannot modify or delete them.
3. **Viewer**: Read-only access.

### Custom Roles

To create a custom role:

1. Access tenant settings > Roles > Add Custom Role.
2. Name the role, choose a base role, and adjust permissions.

**Resources for Permissions**:
- Artifacts, Models, Pipelines, etc.
- Permissions: Create, Read, Update, Delete, Share.

### Managing Role Permissions

1. Go to Roles in tenant settings.
2. Select the role and click "Edit Permissions" to adjust.

## Sharing Individual Resources

Users can share specific resources through the dashboard.

## Best Practices

1. **Least Privilege**: Assign the minimal necessary permissions.
2. **Regular Audits**: Review role assignments periodically.
3. **Use Custom Roles**: Tailor roles to specific team needs.
4. **Document Roles**: Keep records of custom roles and their purposes.

By using ZenML Pro's RBAC, teams can maintain security while collaborating on MLOps projects.

==================================================
=== File: docs/book/getting-started/zenml-pro/tenants.md ===

# ZenML Pro Tenants Documentation Summary

## Overview

Tenants in ZenML Pro are isolated deployments of the ZenML server, each with its own users, roles, and resources. All operations in ZenML Pro, including pipelines, stacks, and runs, are scoped to a tenant.

## Creating a Tenant

To create a tenant:

1. Navigate to your organization page.
2. Click "+ New Tenant."
3. Provide a tenant name and click "Create Tenant."

Alternatively, tenants can be created via the Cloud API at `https://cloudapi.zenml.io/` using the `POST /tenants` endpoint listed in the API reference above.

## Organizing Tenants

### By Development Stage
- **Staging Tenants**: For development, testing, and experimentation.
- **Production Tenants**: For live services, requiring stricter access controls and monitoring.

### By Business Logic
- **Project-based**: Separate tenants for different ML projects (e.g., Recommendation System).
- **Team-based**: Align tenants with organizational teams (e.g., Data Science Team).
- **Data Sensitivity**: Classify tenants based on data sensitivity (e.g., Public Data Tenant).

### Best Practices
1. Use clear naming conventions.
2. Implement role-based access control.
3. Maintain documentation for each tenant.
4. Conduct regular reviews of the tenant structure.
5. Design for scalability.

## Using Your Tenant

Tenants provide access to Pro features such as:

- Model Control Plane
- Artifact Control Plane
- Running pipelines from the Dashboard
- Creating templates from pipeline runs

### Accessing Tenant Documentation

Each tenant has a connection URL for the `zenml` client and for accessing the OpenAPI specification. Visit `/docs` for the available methods, including pipeline execution via the REST API.

For further details, refer to the ZenML documentation at [zenml.io](https://zenml.io/pro).

==================================================
=== File: docs/book/getting-started/zenml-pro/README.md ===

# ZenML Pro Overview

ZenML Pro enhances the open-source ZenML product with several key features:

- **Managed Deployment**: Deploy multiple ZenML servers (tenants) for production-grade operations.
- **User Management**: Create organizations and teams for scalable user management.
- **Role-Based Access Control**: Implement customizable roles for secure resource management.
- **Model and Artifact Control**: Utilize the Model Control Plane and Artifact Control Plane for effective tracking and management of ML assets. - **Triggers and Run Templates**: Create and run templates via the dashboard or API for quick iterations with updated configurations. - **Early Access Features**: Access pro-specific features like triggers, filters, sorting, and usage reports. For more details, visit the [ZenML website](https://zenml.io/pro). ## Deployment Scenarios ZenML Pro can be deployed as a SaaS solution or fully self-hosted. The SaaS version simplifies deployment and management, allowing focus on MLOps workflows. For self-hosted options, refer to the [self-hosted deployment guide](./self-hosted.md) or [book a demo](https://www.zenml.io/book-your-demo). ### Key Resources - [Tenants](./tenants.md) - [Organizations](./organization.md) - [Teams](./teams.md) - [Roles](./roles.md) - [Self-Hosted Deployments](./self-hosted.md) ================================================== === File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md === ### Custom Secret Stores in ZenML The secrets store in ZenML is responsible for managing secret values required by pipeline or stack components, while metadata is stored in an SQL database. The interface for all secret store back-ends is defined in `zenml.zen_stores.secrets_stores.secrets_store_interface`, which includes the following key methods: ```python class SecretsStoreInterface(ABC): @abstractmethod def _initialize(self) -> None: """Initialize the secrets store.""" @abstractmethod def store_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Store secret values for a new secret.""" @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: """Retrieve secret values for an existing secret.""" @abstractmethod def update_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Update secret values for an existing secret.""" @abstractmethod def delete_secret_values(self, secret_id: UUID) -> None: """Delete secret values for an existing secret.""" ``` ### Steps to Build a Custom Secrets Store 1. **Create a Class**: Inherit from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the abstract methods from `SecretsStoreInterface`. Set `SecretsStoreType.CUSTOM` as the `TYPE`. 2. **Configuration Class**: If configuration is needed, inherit from `SecretsStoreConfiguration` and define your parameters. Use this as the `CONFIG_TYPE`. 3. **Configure ZenML Server**: Ensure your code is included in the ZenML server's container image. Use environment variables or helm chart values to configure the server to utilize your custom secrets store, as detailed in the deployment guide. For further details and the complete interface definition, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface). ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-docker.md === ### Summary of ZenML Docker Deployment Documentation **Overview**: This documentation provides guidance on deploying the ZenML server in a Docker container, including configuration options, local testing, and advanced deployment scenarios. 
#### Quick Start - **Local Deployment**: Use the ZenML CLI to quickly deploy the server locally with Docker: ```bash zenml login --local --docker ``` #### Configuration Options - **Environment Variables**: Customize the ZenML server using environment variables: - **Database Connection**: - `ZENML_STORE_URL`: Points to SQLite or MySQL database. - SQLite: `sqlite:////path/to/zenml.db` - MySQL: `mysql://username:password@host:port/database` - **SSL Options** (for MySQL): - `ZENML_STORE_SSL_CA`, `ZENML_STORE_SSL_CERT`, `ZENML_STORE_SSL_KEY`, `ZENML_STORE_SSL_VERIFY_SERVER_CERT` - **Logging**: Control verbosity with `ZENML_LOGGING_VERBOSITY`. - **Backup Strategy**: Configure with `ZENML_STORE_BACKUP_STRATEGY` (default: `in-memory`). - **Rate Limiting**: Enable with `ZENML_SERVER_RATE_LIMIT_ENABLED` and configure limits with `ZENML_SERVER_LOGIN_RATE_LIMIT_MINUTE` and `ZENML_SERVER_LOGIN_RATE_LIMIT_DAY`. #### Secrets Management - **Default Secrets Store**: Uses SQL database by default. Configure encryption with: - `ZENML_SECRETS_STORE_TYPE`: Set to `sql`. - `ZENML_SECRETS_STORE_ENCRYPTION_KEY`: A secure key for encrypting secrets. - **External Secrets Stores**: Configure for AWS, GCP, Azure, HashiCorp Vault, or custom implementations using respective environment variables. #### Running the ZenML Server - **Basic Command**: ```bash docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server ``` - **Persistent Database**: Use a mounted volume to persist the SQLite database: ```bash docker run -it -d -p 8080:8080 --name zenml \ --mount type=bind,source=$PWD/zenml-server,target=/zenml/.zenconfig/local_stores/default_zen_store \ zenmldocker/zenml-server ``` - **MySQL Database**: Run a MySQL container and connect ZenML: ```bash docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 docker run -it -d -p 8080:8080 --name zenml \ --env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \ zenmldocker/zenml-server ``` #### Docker Compose - **Example `docker-compose.yml`**: ```yaml version: "3.9" services: mysql: image: mysql:8.0 environment: - MYSQL_ROOT_PASSWORD=password zenml: image: zenmldocker/zenml-server environment: - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml ``` - **Start with**: ```bash docker compose -p zenml up -d ``` #### Backup and Recovery - Automated backups are enabled by default. Configure backup strategy with `ZENML_STORE_BACKUP_STRATEGY` (options: `disabled`, `in-memory`, `database`, `dump-file`). #### Troubleshooting - Check logs using: ```bash docker logs zenml -f ``` or for Docker Compose: ```bash docker compose -p zenml logs -f ``` This concise summary captures the essential details for deploying and configuring the ZenML server in Docker, including environment variables, secrets management, and backup strategies. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md === ### Summary: Deploying ZenML to Hugging Face Spaces **Overview**: ZenML can be quickly deployed on Hugging Face Spaces, a platform for hosting ML projects, allowing users to start with minimal infrastructure. **Important Notes**: - For production use, enable [persistent storage](https://huggingface.co/docs/hub/en/spaces-storage) to avoid data loss. - Ensure Space visibility is set to 'Public' for local machine connections. **Deployment Steps**: 1. Create a ZenML Space via the [Hugging Face link](https://huggingface.co/new-space?template=zenml/zenml). 2. 
Specify:
   - Owner (personal account or organization)
   - Space name
   - Visibility (must be 'Public' for local access)
3. Optionally select a higher-tier machine to avoid auto-shutdowns.

**Customization**:
- Modify the Space's appearance in the `README.md` file.
- After creation, wait for the status to switch from 'Building' to 'Running'.
- If the ZenML login UI is not visible, refresh the page.

**Connecting to ZenML Server**:
- Use the Space's 'Direct URL' to connect (replace `<direct-url>` with your Space's URL):
```shell
zenml login '<direct-url>'
```
- Access the ZenML dashboard directly via the same URL.

**Configuration Options**:
- The default database is SQLite (non-persistent). For a persistent database, modify the `Dockerfile`.
- For secrets management, use Hugging Face's 'Repository secrets' and update the ZenML server password in the Dashboard settings.

**Troubleshooting**:
- View logs by clicking "Open Logs" in the Space.
- For support, join the [Slack channel](https://zenml.io/slack/).

**Upgrading ZenML**:
- The Space auto-updates to the latest ZenML version. To manually update, use 'Factory reboot' in Settings (note: this wipes data unless you use a persistent MySQL database).
- To use an earlier version, modify the `Dockerfile`'s `FROM` statement.

For detailed configuration parameters, refer to the [Hugging Face documentation](https://huggingface.co/docs/hub/spaces-config-reference) and ZenML's [advanced server configuration options](deploy-with-docker.md#advanced-server-configuration-options).

==================================================
=== File: docs/book/getting-started/deploying-zenml/deploy-with-helm.md ===

### Summary of ZenML Deployment in Kubernetes with Helm

**Overview**: This documentation outlines the process for deploying ZenML in a Kubernetes cluster using Helm, including prerequisites, configuration, and deployment scenarios.

#### Prerequisites
- **Kubernetes Cluster**: Required.
- **Database**: A MySQL-compatible database (version 8.0 or higher) is recommended for production; defaults to SQLite if omitted.
- **Tools**: Kubernetes client (`kubectl`) and Helm.
- **Secrets Management**: Optional external secrets manager (e.g., AWS Secrets Manager, GCP Secret Manager).

#### Helm Configuration
- Review the [`values.yaml`](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) file for customizable settings.
- Collect the necessary information for database and secrets management configuration.

**Database Information**:
- Hostname, port, username, password, and database name.
- SSL certificates if using SSL.

**Secrets Management Information**:
- For AWS: region, access key ID, secret access key.
- For GCP: project ID, service account with access.
- For Azure: Key Vault name, tenant ID, client ID, client secret.
- For HashiCorp Vault: server URL and access token.

#### Optional Cluster Services
- **Ingress Service**: Recommended for exposing HTTP services; use `nginx-ingress`.
- **Cert-Manager**: For managing TLS certificates.

#### Helm Installation

1. **Pull the Helm Chart** (replace `<version>` with the desired chart version):
```bash
helm pull oci://public.ecr.aws/zenml/zenml --version <version> --untar
```
2. **Customize Values**: Create `custom-values.yaml` from `values.yaml` and modify the necessary configurations (e.g., database URL, TLS settings).
3. **Install the Chart** (replace `<namespace>` with the target namespace):
```bash
helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml
```

#### Post-Deployment
- Activate the ZenML server via the provided URL to create an admin account.
- Connect the local ZenML client:
```bash
zenml login https://zenml.example.com:8080 --no-verify-ssl
```
- To disconnect:
```bash
zenml logout
```

#### Deployment Scenarios
- **Minimal Deployment**: Uses SQLite and a ClusterIP service, accessible via port-forwarding.
- **Basic Deployment**: Uses an Ingress service with TLS certificates from cert-manager.

#### Secrets Store Configuration
- **Default**: SQL database as the secrets store; configure encryption for security.
- **AWS Secrets Manager**: Requires specific IAM permissions.
- **GCP Secrets Manager**: Requires custom IAM roles for access.
- **Azure Key Vault**: Requires service principal credentials.
- **HashiCorp Vault**: Requires the server URL and a token.

#### Backup and Recovery
- Automated database backups are enabled by default before upgrades.
- Backup strategies include: `disabled`, `in-memory`, `database`, and `dump-file` (with an optional persistent volume).

**Example Configuration for Backup**:
```yaml
zenml:
  database:
    url: "mysql://admin:password@my.database.org:3306/zenml"
    backupStrategy: dump-file
    backupPVStorageSize: 1Gi

podSecurityContext:
  fsGroup: 1000
```

This summary captures the essential details for deploying ZenML in a Kubernetes environment using Helm.

==================================================
=== File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md ===

### Summary: Deploying ZenML with Custom Docker Images

This documentation outlines the process for deploying ZenML using custom Docker images, which is necessary in specific scenarios such as implementing custom artifact stores or deploying a modified ZenML server from a fork.

#### Key Points:
- **Default Image**: The standard `zenmldocker/zenml-server` Docker image suffices for most deployments.
- **Custom Image Scenarios**:
  - Enabling artifact visualizations or step logs.
  - Deploying a forked version of ZenML with custom changes.

#### Deployment Methods:
Custom Docker images can only be used with Docker or Helm deployments.

### Building a Custom ZenML Server Docker Image:

1. **Set Up a Container Registry**: Create a Docker Hub account or use another registry.
2. **Clone ZenML**: Check out the desired branch, e.g., for version 0.41.0:
```bash
git checkout release/0.41.0
```
3. **Copy the Dockerfile**:
```bash
cp docker/base.Dockerfile docker/custom.Dockerfile
```
4. **Modify the Dockerfile**:
   - Add dependencies (replace `<package>` with the package you need):
```bash
RUN pip install <package>
```
   - For forks, install the local files instead:
```bash
RUN pip install -e .[server,secrets-aws,...]
```
5. **Build and Push the Image** (replace `<registry>/<image>:<tag>` with your own image reference):
```bash
docker build -f docker/custom.Dockerfile . -t <registry>/<image>:<tag> --platform linux/amd64
docker push <registry>/<image>:<tag>
```

### Deploying ZenML with a Custom Image:

#### Via Docker:
- Replace `zenmldocker/zenml-server` with your custom image in the deployment steps.
- Example command to run the server:
```bash
docker run -it -d -p 8080:8080 --name zenml <registry>/<image>:<tag>
```
- Adjust `docker-compose.yml`:
```yaml
services:
  zenml:
    image: <registry>/<image>:<tag>
```

#### Via Helm:
- Modify the `values.yaml` file:
```yaml
zenml:
  image:
    repository: <registry>/<image>
    tag: <tag>
```

For more detailed steps, refer to the ZenML Docker and Helm Deployment Guides.

==================================================
=== File: docs/book/getting-started/deploying-zenml/README.md ===

# Deploying ZenML

## Overview

Deploying ZenML to a production environment provides benefits such as:

1. **Scalability**: Handles large workloads for faster results.
2. **Reliability**: Ensures high availability and fault tolerance.
3.
**Collaboration**: Facilitates teamwork and model iteration. ## Components A ZenML deployment includes: - **FastAPI Server**: Backed by SQLite or MySQL. - **Python Client**: Interacts with the ZenML server. - **ReactJS Dashboard**: Open-source companion for visualization. - **(Optional)** ZenML Pro API and Dashboard. For detailed architecture, refer to the [system architecture documentation](../system-architectures.md). ### ZenML Python Client The ZenML client is a Python package for server interaction, installable via `pip`. It provides: - Command-line interface for managing stacks and secrets. - Framework for authoring and deploying pipelines. - Access to metadata through the Python SDK for custom automation. Full documentation for the Python SDK and HTTP API is available [here](https://sdkdocs.zenml.io/latest/). ## Deployment Scenarios Initially, ZenML runs locally with an SQLite database, suitable for testing core features but lacking cloud-based components. Use `zenml login --local` to start a local server. For production, deploy the ZenML server centrally to enable cloud stack components and team collaboration. ## How to Deploy ZenML Deploying ZenML is essential for production-grade machine learning projects, allowing access to remote components and centralized tracking. ### Deployment Options 1. **Managed Deployment**: Use ZenML Pro for managed servers (tenants), with data securely maintained. 2. **Self-hosted Deployment**: Deploy ZenML on your infrastructure using methods like Docker, Helm, or HuggingFace Spaces. ### Deployment Documentation Refer to the following guides for deployment strategies: - [Deploying ZenML using ZenML Pro](../zenml-pro/README.md) - [Deploy with Docker](./deploy-with-docker.md) - [Deploy with Helm](./deploy-with-helm.md) - [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md) This concise overview captures the essential details for deploying ZenML while omitting redundancy. ================================================== === File: docs/book/getting-started/deploying-zenml/secret-management.md === # ZenML Secrets Store Configuration and Management ## Overview ZenML offers a centralized secrets management system for secure registration and management of secrets. Metadata is stored in the ZenML server database, while actual secret values are managed separately in the ZenML Secrets Store. In local deployments, secrets are stored in an SQLite database; in remote deployments, they are stored in the configured secrets management back-end. ### Supported Secrets Store Back-Ends ZenML can be configured to use various secrets store back-ends: - Default SQL database - AWS Secrets Manager - GCP Secret Manager - Azure Key Vault - HashiCorp Vault - Custom implementation ## Configuration and Deployment Secrets store configuration occurs at deployment. Choose a supported back-end and authentication method, and configure the ZenML server with the necessary credentials. Use the principle of least privilege for credentials. The configuration can be updated anytime by redeploying the server, following the documented migration strategy to ensure minimal downtime. ## Backup Secrets Store ZenML can connect to a secondary Secrets Store for high availability and disaster recovery. Ensure the backup store is in a different location than the primary store to avoid issues. The server prioritizes the primary store for read/write operations, falling back to the backup if necessary. Use the CLI commands: - `zenml secret backup`: Backs up secrets to the backup store. 
- `zenml secret restore`: Restores secrets from the backup store to the primary store. ## Secrets Migration Strategy To change the secrets storage provider or location, follow this migration process: 1. Set the new store as the secondary store. 2. Redeploy the server. 3. Use `zenml secret backup` to transfer secrets from the primary to the secondary store. 4. Set the secondary store as the primary and remove the old primary. 5. Redeploy the server. This strategy ensures existing secrets are migrated with minimal downtime. Migration is unnecessary if only updating credentials or authentication methods without changing the storage location. For more details on deployment strategies, refer to the ZenML deployment guide. ==================================================