wjayesh committed on
Commit 914f7ba · verified · 1 Parent(s): dfbb536

Upload component-guide.txt with huggingface_hub

Files changed (1): component-guide.txt +366 -1
component-guide.txt CHANGED
@@ -1,5 +1,5 @@
1
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
2
- Generated by Repomix on: 2025-01-30T10:25:44.368Z
3
 
4
  ================================================================
5
  File Summary
@@ -158,6 +158,11 @@ description: Sending automated alerts to chat services.
158
  icon: message-exclamation
159
  ---
160
 
 
 
 
 
 
161
  # Alerters
162
 
163
  **Alerters** allow you to send messages to chat services (like Slack, Discord, Mattermost, etc.) from within your
@@ -213,6 +218,11 @@ File: docs/book/component-guide/alerters/custom.md
213
  description: Learning how to develop a custom alerter.
214
  ---
215
 
 
 
 
 
 
216
  # Develop a Custom Alerter
217
 
218
  {% hint style="info" %}
@@ -360,6 +370,11 @@ File: docs/book/component-guide/alerters/discord.md
360
  description: Sending automated alerts to a Discord channel.
361
  ---
362
 
 
 
 
 
 
363
  # Discord Alerter
364
 
365
  The `DiscordAlerter` enables you to send messages to a dedicated Discord channel
@@ -502,6 +517,11 @@ File: docs/book/component-guide/alerters/slack.md
502
  description: Sending automated alerts to a Slack channel.
503
  ---
504
 
 
 
 
 
 
505
  # Slack Alerter
506
 
507
  The `SlackAlerter` enables you to send messages or ask questions within a
@@ -843,6 +863,11 @@ File: docs/book/component-guide/annotators/argilla.md
843
  description: Annotating data using Argilla.
844
  ---
845
 
 
 
 
 
 
846
  # Argilla
847
 
848
  [Argilla](https://github.com/argilla-io/argilla) is a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects. It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring.
@@ -986,6 +1011,11 @@ File: docs/book/component-guide/annotators/custom.md
986
  description: Learning how to develop a custom annotator.
987
  ---
988
 
 
 
 
 
 
989
  # Develop a Custom Annotator
990
 
991
  {% hint style="info" %}
@@ -1009,6 +1039,11 @@ File: docs/book/component-guide/annotators/label-studio.md
1009
  description: Annotating data using Label Studio.
1010
  ---
1011
 
 
 
 
 
 
1012
  # Label Studio
1013
 
1014
  Label Studio is one of the leading open-source annotation platforms available to data scientists and ML practitioners.
@@ -1161,6 +1196,11 @@ File: docs/book/component-guide/annotators/pigeon.md
1161
  description: Annotating data using Pigeon.
1162
  ---
1163
 
 
 
 
 
 
1164
  # Pigeon
1165
 
1166
  Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:
@@ -1278,6 +1318,11 @@ File: docs/book/component-guide/annotators/prodigy.md
1278
  description: Annotating data using Prodigy.
1279
  ---
1280
 
 
 
 
 
 
1281
  # Prodigy
1282
 
1283
  [Prodigy](https://prodi.gy/) is a modern annotation tool for creating training
@@ -1417,6 +1462,11 @@ description: Setting up a persistent storage for your artifacts.
1417
  icon: folder-closed
1418
  ---
1419
 
 
 
 
 
 
1420
  # Artifact Stores
1421
 
1422
  The Artifact Store is a central component in any MLOps stack. As the name suggests, it acts as a data persistence layer where artifacts (e.g. datasets, models) ingested or generated by the machine learning pipelines are stored.
@@ -1589,6 +1639,11 @@ File: docs/book/component-guide/artifact-stores/azure.md
1589
  description: Storing artifacts using Azure Blob Storage
1590
  ---
1591
 
 
 
 
 
 
1592
  # Azure Blob Storage
1593
 
1594
  The Azure Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the Azure ZenML integration that uses [the Azure Blob Storage managed object storage service](https://azure.microsoft.com/en-us/services/storage/blobs/) to store ZenML artifacts in an Azure Blob Storage container.
@@ -1821,6 +1876,11 @@ File: docs/book/component-guide/artifact-stores/custom.md
1821
  description: Learning how to develop a custom artifact store.
1822
  ---
1823
 
 
 
 
 
 
1824
  # Develop a custom artifact store
1825
 
1826
  {% hint style="info" %}
@@ -2013,6 +2073,11 @@ File: docs/book/component-guide/artifact-stores/gcp.md
2013
  description: Storing artifacts using GCP Cloud Storage.
2014
  ---
2015
 
 
 
 
 
 
2016
  # Google Cloud Storage (GCS)
2017
 
2018
  The GCS Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the GCP ZenML integration that uses [the Google Cloud Storage managed object storage service](https://cloud.google.com/storage/docs/introduction) to store ZenML artifacts in a GCP Cloud Storage bucket.
@@ -2217,6 +2282,11 @@ File: docs/book/component-guide/artifact-stores/local.md
2217
  description: Storing artifacts on your local filesystem.
2218
  ---
2219
 
 
 
 
 
 
2220
  # Local Artifact Store
2221
 
2222
  The local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) flavor that uses a folder on your local filesystem to store artifacts.
@@ -2305,6 +2375,11 @@ File: docs/book/component-guide/artifact-stores/s3.md
2305
  description: Storing artifacts in an AWS S3 bucket.
2306
  ---
2307
 
 
 
 
 
 
2308
  # Amazon Simple Cloud Storage (S3)
2309
 
2310
  The S3 Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the S3 ZenML integration that uses [the AWS S3 managed object storage service](https://aws.amazon.com/s3/) or one of the self-hosted S3 alternatives, such as [MinIO](https://min.io/) or [Ceph RGW](https://ceph.io/en/discover/technology/#object), to store artifacts in an S3 compatible object storage backend.
@@ -2529,6 +2604,11 @@ File: docs/book/component-guide/container-registries/aws.md
2529
  description: Storing container images in Amazon ECR.
2530
  ---
2531
 
 
 
 
 
 
2532
  # Amazon Elastic Container Registry (ECR)
2533
 
2534
  The AWS container registry is a [container registry](./container-registries.md) flavor provided with the ZenML `aws` integration and uses [Amazon ECR](https://aws.amazon.com/ecr/) to store container images.
@@ -2740,6 +2820,11 @@ File: docs/book/component-guide/container-registries/azure.md
2740
  description: Storing container images in Azure.
2741
  ---
2742
 
 
 
 
 
 
2743
  # Azure Container Registry
2744
 
2745
  The Azure container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) to store container images.
@@ -2994,6 +3079,11 @@ File: docs/book/component-guide/container-registries/custom.md
2994
  description: Learning how to develop a custom container registry.
2995
  ---
2996
 
 
 
 
 
 
2997
  # Develop a custom container registry
2998
 
2999
  {% hint style="info" %}
@@ -3120,6 +3210,11 @@ File: docs/book/component-guide/container-registries/default.md
3120
  description: Storing container images locally.
3121
  ---
3122
 
 
 
 
 
 
3123
  # Default Container Registry
3124
 
3125
  The Default container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and allows container registry URIs of any format.
@@ -3297,6 +3392,11 @@ File: docs/book/component-guide/container-registries/dockerhub.md
3297
  description: Storing container images in DockerHub.
3298
  ---
3299
 
 
 
 
 
 
3300
  # DockerHub
3301
 
3302
  The DockerHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses [DockerHub](https://hub.docker.com/) to store container images.
@@ -3370,6 +3470,11 @@ File: docs/book/component-guide/container-registries/gcp.md
3370
  description: Storing container images in GCP.
3371
  ---
3372
 
 
 
 
 
 
3373
  # Google Cloud Container Registry
3374
 
3375
  The GCP container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Google Artifact Registry](https://cloud.google.com/artifact-registry).
@@ -3611,6 +3716,11 @@ File: docs/book/component-guide/container-registries/github.md
3611
  description: Storing container images in GitHub.
3612
  ---
3613
 
 
 
 
 
 
3614
  # GitHub Container Registry
3615
 
3616
  The GitHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) to store container images.
@@ -3673,6 +3783,11 @@ File: docs/book/component-guide/data-validators/custom.md
3673
  description: How to develop a custom data validator
3674
  ---
3675
 
 
 
 
 
 
3676
  # Develop a custom data validator
3677
 
3678
  {% hint style="info" %}
@@ -3802,6 +3917,11 @@ description: >-
3802
  suites
3803
  ---
3804
 
 
 
 
 
 
3805
  # Deepchecks
3806
 
3807
  The Deepchecks [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Deepchecks](https://deepchecks.com/) to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -4226,6 +4346,11 @@ description: >-
4226
  with Evidently profiling
4227
  ---
4228
 
 
 
 
 
 
4229
  # Evidently
4230
 
4231
  The Evidently [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Evidently](https://evidentlyai.com/) to perform data quality, data drift, model drift and model performance analyzes, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -4863,6 +4988,11 @@ description: >-
4863
  document the results
4864
  ---
4865
 
 
 
 
 
 
4866
  # Great Expectations
4867
 
4868
  The Great Expectations [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Great Expectations](https://greatexpectations.io/) to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation.
@@ -5175,6 +5305,11 @@ description: >-
5175
  data with whylogs/WhyLabs profiling.
5176
  ---
5177
 
 
 
 
 
 
5178
  # Whylogs
5179
 
5180
  The whylogs/WhyLabs [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [whylogs](https://whylabs.ai/whylogs) and [WhyLabs](https://whylabs.ai) to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -5462,6 +5597,11 @@ File: docs/book/component-guide/experiment-trackers/comet.md
5462
  description: Logging and visualizing experiments with Comet.
5463
  ---
5464
 
 
 
 
 
 
5465
  # Comet
5466
 
5467
  The Comet Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Comet ZenML integration that uses [the Comet experiment tracking platform](https://www.comet.com/site/products/ml-experiment-tracking/) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
@@ -5757,6 +5897,11 @@ File: docs/book/component-guide/experiment-trackers/custom.md
5757
  description: Learning how to develop a custom experiment tracker.
5758
  ---
5759
 
 
 
 
 
 
5760
  # Develop a custom experiment tracker
5761
 
5762
  {% hint style="info" %}
@@ -5823,6 +5968,11 @@ description: Logging and visualizing ML experiments.
5823
  icon: clipboard
5824
  ---
5825
 
 
 
 
 
 
5826
  # Experiment Trackers
5827
 
5828
  Experiment trackers let you track your ML experiments by logging extended information about your models, datasets,
@@ -5916,6 +6066,11 @@ File: docs/book/component-guide/experiment-trackers/mlflow.md
5916
  description: Logging and visualizing experiments with MLflow.
5917
  ---
5918
 
 
 
 
 
 
5919
  # MLflow
5920
 
5921
  The MLflow Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the MLflow ZenML integration that uses [the MLflow tracking service](https://mlflow.org/docs/latest/tracking.html) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -6134,6 +6289,11 @@ File: docs/book/component-guide/experiment-trackers/neptune.md
6134
  description: Logging and visualizing experiments with neptune.ai
6135
  ---
6136
 
 
 
 
 
 
6137
  # Neptune
6138
 
6139
  The Neptune Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Neptune-ZenML integration that uses [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -6452,6 +6612,11 @@ File: docs/book/component-guide/experiment-trackers/vertexai.md
6452
  description: Logging and visualizing experiments with Vertex AI Experiment Tracker.
6453
  ---
6454
 
 
 
 
 
 
6455
  # Vertex AI Experiment Tracker
6456
 
6457
  The Vertex AI Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Vertex AI ZenML integration. It uses the [Vertex AI tracking service](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
@@ -6771,6 +6936,11 @@ File: docs/book/component-guide/experiment-trackers/wandb.md
6771
  description: Logging and visualizing experiments with Weights & Biases.
6772
  ---
6773
 
 
 
 
 
 
6774
  # Weights & Biases
6775
 
6776
  The Weights & Biases Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Weights & Biases ZenML integration that uses [the Weights & Biases experiment tracking platform](https://wandb.ai/site/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -7088,6 +7258,11 @@ File: docs/book/component-guide/feature-stores/custom.md
7088
  description: Learning how to develop a custom feature store.
7089
  ---
7090
 
 
 
 
 
 
7091
  # Develop a Custom Feature Store
7092
 
7093
  {% hint style="info" %}
@@ -7111,6 +7286,11 @@ File: docs/book/component-guide/feature-stores/feast.md
7111
  description: Managing data in Feast feature stores.
7112
  ---
7113
 
 
 
 
 
 
7114
  # Feast
7115
 
7116
  Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
@@ -7293,6 +7473,11 @@ File: docs/book/component-guide/image-builders/aws.md
7293
  description: Building container images with AWS CodeBuild
7294
  ---
7295
 
 
 
 
 
 
7296
  # AWS Image Builder
7297
 
7298
  The AWS image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `aws` integration that uses [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images.
@@ -7531,6 +7716,11 @@ File: docs/book/component-guide/image-builders/custom.md
7531
  description: Learning how to develop a custom image builder.
7532
  ---
7533
 
 
 
 
 
 
7534
  # Develop a Custom Image Builder
7535
 
7536
  {% hint style="info" %}
@@ -7651,6 +7841,11 @@ File: docs/book/component-guide/image-builders/gcp.md
7651
  description: Building container images with Google Cloud Build
7652
  ---
7653
 
 
 
 
 
 
7654
  # Google Cloud Image Builder
7655
 
7656
  The Google Cloud image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `gcp` integration that uses [Google Cloud Build](https://cloud.google.com/build) to build container images.
@@ -7904,6 +8099,11 @@ File: docs/book/component-guide/image-builders/kaniko.md
7904
  description: Building container images with Kaniko.
7905
  ---
7906
 
 
 
 
 
 
7907
  # Kaniko Image Builder
7908
 
7909
  The Kaniko image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `kaniko` integration that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images.
@@ -8061,6 +8261,11 @@ File: docs/book/component-guide/image-builders/local.md
8061
  description: Building container images locally.
8062
  ---
8063
 
 
 
 
 
 
8064
  # Local Image Builder
8065
 
8066
  The local image builder is an [image builder](./image-builders.md) flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
@@ -8113,6 +8318,11 @@ File: docs/book/component-guide/model-deployers/bentoml.md
8113
  description: Deploying your models locally with BentoML.
8114
  ---
8115
 
 
 
 
 
 
8116
  # BentoML
8117
 
8118
  BentoML is an open-source framework for machine learning model serving. it can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment.
@@ -8499,6 +8709,11 @@ File: docs/book/component-guide/model-deployers/custom.md
8499
  description: Learning how to develop a custom model deployer.
8500
  ---
8501
 
 
 
 
 
 
8502
  # Develop a Custom Model Deployer
8503
 
8504
  {% hint style="info" %}
@@ -8671,6 +8886,11 @@ description: >-
8671
  Deploying models to Databricks Inference Endpoints with Databricks
8672
  ---
8673
 
 
 
 
 
 
8674
  # Databricks
8675
 
8676
 
@@ -8824,6 +9044,11 @@ description: >-
8824
  :hugging_face:.
8825
  ---
8826
 
 
 
 
 
 
8827
  # Hugging Face
8828
 
8829
  Hugging Face Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models).
@@ -9016,6 +9241,11 @@ File: docs/book/component-guide/model-deployers/mlflow.md
9016
  description: Deploying your models locally with MLflow.
9017
  ---
9018
 
 
 
 
 
 
9019
  # MLflow
9020
 
9021
  The MLflow Model Deployer is one of the available flavors of the [Model Deployer](./model-deployers.md) stack component. Provided with the MLflow integration it can be used to deploy and manage [MLflow models](https://www.mlflow.org/docs/latest/python\_api/mlflow.deployments.html) on a local running MLflow server.
@@ -9460,6 +9690,11 @@ File: docs/book/component-guide/model-deployers/seldon.md
9460
  description: Deploying models to Kubernetes with Seldon Core.
9461
  ---
9462
 
 
 
 
 
 
9463
  # Seldon
9464
 
9465
  [Seldon Core](https://github.com/SeldonIO/seldon-core) is a production grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
@@ -9939,6 +10174,11 @@ File: docs/book/component-guide/model-deployers/vllm.md
9939
  description: Deploying your LLM locally with vLLM.
9940
  ---
9941
 
 
 
 
 
 
9942
  # vLLM
9943
 
9944
  [vLLM](https://docs.vllm.ai/en/latest/) is a fast and easy-to-use library for LLM inference and serving.
@@ -10017,6 +10257,11 @@ File: docs/book/component-guide/model-registries/custom.md
10017
  description: Learning how to develop a custom model registry.
10018
  ---
10019
 
 
 
 
 
 
10020
  # Develop a Custom Model Registry
10021
 
10022
  {% hint style="info" %}
@@ -10213,6 +10458,11 @@ File: docs/book/component-guide/model-registries/mlflow.md
10213
  description: Managing MLFlow logged models and artifacts
10214
  ---
10215
 
 
 
 
 
 
10216
  # MLflow Model Registry
10217
 
10218
  [MLflow](https://www.mlflow.org/docs/latest/tracking.html) is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides a [MLflow Experiment Tracker](../experiment-trackers/mlflow.md) that you can use to track your experiments, and an [MLflow Model Deployer](../model-deployers/mlflow.md) that you can use to deploy your models locally.
@@ -10462,6 +10712,11 @@ File: docs/book/component-guide/orchestrators/airflow.md
10462
  description: Orchestrating your pipelines to run on Airflow.
10463
  ---
10464
 
 
 
 
 
 
10465
  # Airflow Orchestrator
10466
 
10467
  ZenML pipelines can be executed natively as [Airflow](https://airflow.apache.org/)
@@ -10771,6 +11026,11 @@ File: docs/book/component-guide/orchestrators/azureml.md
10771
  description: Orchestrating your pipelines to run on AzureML.
10772
  ---
10773
 
 
 
 
 
 
10774
  # AzureML Orchestrator
10775
 
10776
  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning) is a
@@ -11009,6 +11269,11 @@ File: docs/book/component-guide/orchestrators/custom.md
11009
  description: Learning how to develop a custom orchestrator.
11010
  ---
11011
 
 
 
 
 
 
11012
  # Develop a custom orchestrator
11013
 
11014
  {% hint style="info" %}
@@ -11233,6 +11498,11 @@ File: docs/book/component-guide/orchestrators/databricks.md
11233
  description: Orchestrating your pipelines to run on Databricks.
11234
  ---
11235
 
 
 
 
 
 
11236
  # Databricks Orchestrator
11237
 
11238
  [Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks offers optimized performance and scalability for big data workloads.
@@ -11429,6 +11699,11 @@ File: docs/book/component-guide/orchestrators/hyperai.md
11429
  description: Orchestrating your pipelines to run on HyperAI.ai instances.
11430
  ---
11431
 
 
 
 
 
 
11432
  # HyperAI Orchestrator
11433
 
11434
  [HyperAI](https://www.hyperai.ai) is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an [orchestrator](./orchestrators.md) flavor that allows you to easily deploy your pipelines on HyperAI instances.
@@ -11516,6 +11791,11 @@ File: docs/book/component-guide/orchestrators/kubeflow.md
11516
  description: Orchestrating your pipelines to run on Kubeflow.
11517
  ---
11518
 
 
 
 
 
 
11519
  # Kubeflow Orchestrator
11520
 
11521
  The Kubeflow orchestrator is an [orchestrator](./orchestrators.md) flavor provided by the ZenML `kubeflow` integration that uses [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to run your pipelines.
@@ -11873,6 +12153,11 @@ File: docs/book/component-guide/orchestrators/kubernetes.md
11873
  description: Orchestrating your pipelines to run on Kubernetes clusters.
11874
  ---
11875
 
 
 
 
 
 
11876
  # Kubernetes Orchestrator
11877
 
11878
  Using the ZenML `kubernetes` integration, you can orchestrate and scale your ML pipelines on a [Kubernetes](https://kubernetes.io/) cluster without writing a single line of Kubernetes code.
@@ -12178,6 +12463,11 @@ File: docs/book/component-guide/orchestrators/lightning.md
12178
  description: Orchestrating your pipelines to run on Lightning AI.
12179
  ---
12180
 
 
 
 
 
 
12181
 
12182
  # Lightning AI Orchestrator
12183
 
@@ -12377,6 +12667,11 @@ File: docs/book/component-guide/orchestrators/local-docker.md
12377
  description: Orchestrating your pipelines to run in Docker.
12378
  ---
12379
 
 
 
 
 
 
12380
  # Local Docker Orchestrator
12381
 
12382
  The local Docker orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally using Docker.
@@ -12454,6 +12749,11 @@ File: docs/book/component-guide/orchestrators/local.md
12454
  description: Orchestrating your pipelines to run locally.
12455
  ---
12456
 
 
 
 
 
 
12457
  # Local Orchestrator
12458
 
12459
  The local orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally.
@@ -12587,6 +12887,11 @@ File: docs/book/component-guide/orchestrators/sagemaker.md
12587
  description: Orchestrating your pipelines to run on Amazon Sagemaker.
12588
  ---
12589
 
 
 
 
 
 
12590
  # AWS Sagemaker Orchestrator
12591
 
12592
  [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
@@ -12933,6 +13238,11 @@ File: docs/book/component-guide/orchestrators/skypilot-vm.md
12933
  description: Orchestrating your pipelines to run on VMs using SkyPilot.
12934
  ---
12935
 
 
 
 
 
 
12936
  # Skypilot VM Orchestrator
12937
 
12938
  The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution, We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads, but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions.
@@ -13455,6 +13765,11 @@ File: docs/book/component-guide/orchestrators/tekton.md
13455
  description: Orchestrating your pipelines to run on Tekton.
13456
  ---
13457
 
 
 
 
 
 
13458
  # Tekton Orchestrator
13459
 
13460
  [Tekton](https://tekton.dev/) is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems.
@@ -13695,6 +14010,11 @@ File: docs/book/component-guide/orchestrators/vertex.md
13695
  description: Orchestrating your pipelines to run on Vertex AI.
13696
  ---
13697
 
 
 
 
 
 
13698
  # Google Cloud VertexAI Orchestrator
13699
 
13700
  [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
@@ -13995,6 +14315,11 @@ File: docs/book/component-guide/step-operators/azureml.md
13995
  description: Executing individual steps in AzureML.
13996
  ---
13997
 
 
 
 
 
 
13998
  # AzureML
13999
 
14000
  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.
@@ -14156,6 +14481,11 @@ File: docs/book/component-guide/step-operators/custom.md
14156
  description: Learning how to develop a custom step operator.
14157
  ---
14158
 
 
 
 
 
 
14159
  # Develop a Custom Step Operator
14160
 
14161
  {% hint style="info" %}
@@ -14285,6 +14615,11 @@ File: docs/book/component-guide/step-operators/kubernetes.md
14285
  description: Executing individual steps in Kubernetes Pods.
14286
  ---
14287
 
 
 
 
 
 
14288
  # Kubernetes Step Operator
14289
 
14290
  ZenML's Kubernetes step operator allows you to submit individual steps to be run on Kubernetes pods.
@@ -14519,6 +14854,11 @@ File: docs/book/component-guide/step-operators/modal.md
14519
  description: Executing individual steps in Modal.
14520
  ---
14521
 
 
 
 
 
 
14522
  # Modal Step Operator
14523
 
14524
  [Modal](https://modal.com) is a platform for running cloud infrastructure. It offers specialized compute instances to run your code and has a fast execution time, especially around building Docker images and provisioning hardware. ZenML's Modal step operator allows you to submit individual steps to be run on Modal compute instances.
@@ -14636,6 +14976,11 @@ File: docs/book/component-guide/step-operators/sagemaker.md
14636
  description: Executing individual steps in SageMaker.
14637
  ---
14638
 
 
 
 
 
 
14639
  # Amazon SageMaker
14640
 
14641
  [SageMaker](https://aws.amazon.com/sagemaker/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on Sagemaker compute instances.
@@ -15001,6 +15346,11 @@ roleRef:
15001
  name: edit
15002
  apiGroup: rbac.authorization.k8s.io
15003
  ---
 
 
 
 
 
15004
  ```
15005
 
15006
  And then execute the following command to create the resources:
@@ -15169,6 +15519,11 @@ File: docs/book/component-guide/step-operators/vertex.md
15169
  description: Executing individual steps in Vertex AI.
15170
  ---
15171
 
 
 
 
 
 
15172
  # Google Cloud VertexAI
15173
 
15174
  [Vertex AI](https://cloud.google.com/vertex-ai) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.
@@ -15311,6 +15666,11 @@ File: docs/book/component-guide/component-guide.md
15311
  description: Overview of categories of MLOps components.
15312
  ---
15313
 
 
 
 
 
 
15314
  # 📜 Overview
15315
 
15316
  If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner.
@@ -15349,6 +15709,11 @@ File: docs/book/component-guide/integration-overview.md
15349
  description: Overview of third-party ZenML integrations.
15350
  ---
15351
 
 
 
 
 
 
15352
  # Integration overview
15353
 
15354
  Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by **integrating** with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas.
 
1
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
2
+ Generated by Repomix on: 2025-02-06T16:56:07.786Z
3
 
4
  ================================================================
5
  File Summary
 
  icon: message-exclamation
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Alerters

  **Alerters** allow you to send messages to chat services (like Slack, Discord, Mattermost, etc.) from within your
 
  description: Learning how to develop a custom alerter.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Alerter

  {% hint style="info" %}
 
  description: Sending automated alerts to a Discord channel.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Discord Alerter

  The `DiscordAlerter` enables you to send messages to a dedicated Discord channel
 
  description: Sending automated alerts to a Slack channel.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Slack Alerter

  The `SlackAlerter` enables you to send messages or ask questions within a
 
  description: Annotating data using Argilla.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Argilla

  [Argilla](https://github.com/argilla-io/argilla) is a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects. It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring.
 
  description: Learning how to develop a custom annotator.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Annotator

  {% hint style="info" %}
 
  description: Annotating data using Label Studio.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Label Studio

  Label Studio is one of the leading open-source annotation platforms available to data scientists and ML practitioners.
 
  description: Annotating data using Pigeon.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Pigeon

  Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:
 
  description: Annotating data using Prodigy.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Prodigy

  [Prodigy](https://prodi.gy/) is a modern annotation tool for creating training
 
  icon: folder-closed
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Artifact Stores

  The Artifact Store is a central component in any MLOps stack. As the name suggests, it acts as a data persistence layer where artifacts (e.g. datasets, models) ingested or generated by the machine learning pipelines are stored.
 
  description: Storing artifacts using Azure Blob Storage
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Azure Blob Storage

  The Azure Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the Azure ZenML integration that uses [the Azure Blob Storage managed object storage service](https://azure.microsoft.com/en-us/services/storage/blobs/) to store ZenML artifacts in an Azure Blob Storage container.
 
  description: Learning how to develop a custom artifact store.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom artifact store

  {% hint style="info" %}
 
  description: Storing artifacts using GCP Cloud Storage.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud Storage (GCS)

  The GCS Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the GCP ZenML integration that uses [the Google Cloud Storage managed object storage service](https://cloud.google.com/storage/docs/introduction) to store ZenML artifacts in a GCP Cloud Storage bucket.
 
  description: Storing artifacts on your local filesystem.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Local Artifact Store

  The local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) flavor that uses a folder on your local filesystem to store artifacts.
 
  description: Storing artifacts in an AWS S3 bucket.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Amazon Simple Cloud Storage (S3)

  The S3 Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the S3 ZenML integration that uses [the AWS S3 managed object storage service](https://aws.amazon.com/s3/) or one of the self-hosted S3 alternatives, such as [MinIO](https://min.io/) or [Ceph RGW](https://ceph.io/en/discover/technology/#object), to store artifacts in an S3 compatible object storage backend.
 
  description: Storing container images in Amazon ECR.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Amazon Elastic Container Registry (ECR)

  The AWS container registry is a [container registry](./container-registries.md) flavor provided with the ZenML `aws` integration and uses [Amazon ECR](https://aws.amazon.com/ecr/) to store container images.
 
  description: Storing container images in Azure.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Azure Container Registry

  The Azure container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) to store container images.
 
  description: Learning how to develop a custom container registry.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom container registry

  {% hint style="info" %}
 
  description: Storing container images locally.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Default Container Registry

  The Default container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and allows container registry URIs of any format.
 
  description: Storing container images in DockerHub.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # DockerHub

  The DockerHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses [DockerHub](https://hub.docker.com/) to store container images.
 
  description: Storing container images in GCP.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud Container Registry

  The GCP container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Google Artifact Registry](https://cloud.google.com/artifact-registry).
 
  description: Storing container images in GitHub.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # GitHub Container Registry

  The GitHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) to store container images.
 
  description: How to develop a custom data validator
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom data validator

  {% hint style="info" %}
 
  suites
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Deepchecks

  The Deepchecks [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Deepchecks](https://deepchecks.com/) to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
 
  with Evidently profiling
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Evidently

  The Evidently [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Evidently](https://evidentlyai.com/) to perform data quality, data drift, model drift and model performance analyzes, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
 
  document the results
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Great Expectations

  The Great Expectations [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Great Expectations](https://greatexpectations.io/) to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation.
 
  data with whylogs/WhyLabs profiling.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Whylogs

  The whylogs/WhyLabs [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [whylogs](https://whylabs.ai/whylogs) and [WhyLabs](https://whylabs.ai) to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
 
  description: Logging and visualizing experiments with Comet.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Comet

  The Comet Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Comet ZenML integration that uses [the Comet experiment tracking platform](https://www.comet.com/site/products/ml-experiment-tracking/) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
 
  description: Learning how to develop a custom experiment tracker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom experiment tracker

  {% hint style="info" %}
 
  icon: clipboard
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Experiment Trackers

  Experiment trackers let you track your ML experiments by logging extended information about your models, datasets,
 
  description: Logging and visualizing experiments with MLflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # MLflow

  The MLflow Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the MLflow ZenML integration that uses [the MLflow tracking service](https://mlflow.org/docs/latest/tracking.html) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
  description: Logging and visualizing experiments with neptune.ai
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Neptune

  The Neptune Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Neptune-ZenML integration that uses [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
  description: Logging and visualizing experiments with Vertex AI Experiment Tracker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Vertex AI Experiment Tracker

  The Vertex AI Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Vertex AI ZenML integration. It uses the [Vertex AI tracking service](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
 
  description: Logging and visualizing experiments with Weights & Biases.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Weights & Biases

  The Weights & Biases Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Weights & Biases ZenML integration that uses [the Weights & Biases experiment tracking platform](https://wandb.ai/site/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
  description: Learning how to develop a custom feature store.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Feature Store

  {% hint style="info" %}
 
  description: Managing data in Feast feature stores.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Feast

  Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
 
  description: Building container images with AWS CodeBuild
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # AWS Image Builder

  The AWS image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `aws` integration that uses [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images.
 
  description: Learning how to develop a custom image builder.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Image Builder

  {% hint style="info" %}
 
  description: Building container images with Google Cloud Build
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud Image Builder

  The Google Cloud image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `gcp` integration that uses [Google Cloud Build](https://cloud.google.com/build) to build container images.
 
  description: Building container images with Kaniko.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Kaniko Image Builder

  The Kaniko image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `kaniko` integration that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images.
 
  description: Building container images locally.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Local Image Builder

  The local image builder is an [image builder](./image-builders.md) flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
 
  description: Deploying your models locally with BentoML.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # BentoML

  BentoML is an open-source framework for machine learning model serving. it can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment.
 
  description: Learning how to develop a custom model deployer.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Model Deployer

  {% hint style="info" %}
 
  Deploying models to Databricks Inference Endpoints with Databricks
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Databricks

 
  :hugging_face:.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Hugging Face

  Hugging Face Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models).
 
  description: Deploying your models locally with MLflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # MLflow

  The MLflow Model Deployer is one of the available flavors of the [Model Deployer](./model-deployers.md) stack component. Provided with the MLflow integration it can be used to deploy and manage [MLflow models](https://www.mlflow.org/docs/latest/python\_api/mlflow.deployments.html) on a local running MLflow server.
 
  description: Deploying models to Kubernetes with Seldon Core.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Seldon

  [Seldon Core](https://github.com/SeldonIO/seldon-core) is a production grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
 
  description: Deploying your LLM locally with vLLM.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # vLLM

  [vLLM](https://docs.vllm.ai/en/latest/) is a fast and easy-to-use library for LLM inference and serving.
 
  description: Learning how to develop a custom model registry.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Model Registry

  {% hint style="info" %}
 
  description: Managing MLFlow logged models and artifacts
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # MLflow Model Registry

  [MLflow](https://www.mlflow.org/docs/latest/tracking.html) is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides a [MLflow Experiment Tracker](../experiment-trackers/mlflow.md) that you can use to track your experiments, and an [MLflow Model Deployer](../model-deployers/mlflow.md) that you can use to deploy your models locally.
 
  description: Orchestrating your pipelines to run on Airflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Airflow Orchestrator

  ZenML pipelines can be executed natively as [Airflow](https://airflow.apache.org/)
 
  description: Orchestrating your pipelines to run on AzureML.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # AzureML Orchestrator

  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning) is a
 
  description: Learning how to develop a custom orchestrator.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom orchestrator

  {% hint style="info" %}
 
  description: Orchestrating your pipelines to run on Databricks.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Databricks Orchestrator

  [Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks offers optimized performance and scalability for big data workloads.
 
  description: Orchestrating your pipelines to run on HyperAI.ai instances.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # HyperAI Orchestrator

  [HyperAI](https://www.hyperai.ai) is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an [orchestrator](./orchestrators.md) flavor that allows you to easily deploy your pipelines on HyperAI instances.
 
  description: Orchestrating your pipelines to run on Kubeflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Kubeflow Orchestrator

  The Kubeflow orchestrator is an [orchestrator](./orchestrators.md) flavor provided by the ZenML `kubeflow` integration that uses [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to run your pipelines.
 
  description: Orchestrating your pipelines to run on Kubernetes clusters.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Kubernetes Orchestrator

  Using the ZenML `kubernetes` integration, you can orchestrate and scale your ML pipelines on a [Kubernetes](https://kubernetes.io/) cluster without writing a single line of Kubernetes code.
 
  description: Orchestrating your pipelines to run on Lightning AI.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+

  # Lightning AI Orchestrator

 
  description: Orchestrating your pipelines to run in Docker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Local Docker Orchestrator

  The local Docker orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally using Docker.
 
description: Orchestrating your pipelines to run locally.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Local Orchestrator

The local orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally.
 
description: Orchestrating your pipelines to run on Amazon Sagemaker.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# AWS Sagemaker Orchestrator

[Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
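Registering the Sagemaker orchestrator follows the same CLI pattern as the other flavors. A minimal sketch — the component name and the role ARN are placeholders, and the `--execution_role` option assumes the flavor's documented configuration field:

```shell
# Install the integration that provides the sagemaker flavor
zenml integration install aws

# Register the orchestrator with the IAM role Sagemaker should assume
# (component name and ARN below are placeholders)
zenml orchestrator register sagemaker_orchestrator \
    --flavor=sagemaker \
    --execution_role=arn:aws:iam::123456789012:role/my-sagemaker-role

# Swap the new orchestrator into the active stack
zenml stack update -o sagemaker_orchestrator
```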
 
description: Orchestrating your pipelines to run on VMs using SkyPilot.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Skypilot VM Orchestrator

The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution. We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions.
 
description: Orchestrating your pipelines to run on Tekton.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Tekton Orchestrator

[Tekton](https://tekton.dev/) is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems.
 
description: Orchestrating your pipelines to run on Vertex AI.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Google Cloud VertexAI Orchestrator

[Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
 
description: Executing individual steps in AzureML.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# AzureML

[AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.
 
description: Learning how to develop a custom step operator.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Develop a Custom Step Operator

{% hint style="info" %}
 
description: Executing individual steps in Kubernetes Pods.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Kubernetes Step Operator

ZenML's Kubernetes step operator allows you to submit individual steps to be run on Kubernetes pods.
 
description: Executing individual steps in Modal.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Modal Step Operator

[Modal](https://modal.com) is a platform for running cloud infrastructure. It offers specialized compute instances to run your code and has a fast execution time, especially around building Docker images and provisioning hardware. ZenML's Modal step operator allows you to submit individual steps to be run on Modal compute instances.
 
description: Executing individual steps in SageMaker.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Amazon SageMaker

[SageMaker](https://aws.amazon.com/sagemaker/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on Sagemaker compute instances.
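Unlike an orchestrator, a step operator only offloads the steps you point at it; the rest of the pipeline keeps running on the stack's orchestrator. A minimal registration sketch — the component name and role ARN are placeholders, and the `--role` option assumes the flavor's documented configuration field. Steps then opt in via the `step_operator` argument of ZenML's `@step` decorator:

```shell
# Install the integration that provides the sagemaker step operator flavor
zenml integration install aws

# Register the step operator with the IAM role SageMaker should assume
# (component name and ARN below are placeholders)
zenml step-operator register sagemaker_op \
    --flavor=sagemaker \
    --role=arn:aws:iam::123456789012:role/my-sagemaker-role

# Add the step operator to the active stack
zenml stack update -s sagemaker_op

# In your pipeline code, mark heavy steps with:
#   @step(step_operator="sagemaker_op")
```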
 
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
```

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}

And then execute the following command to create the resources:
 
description: Executing individual steps in Vertex AI.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Google Cloud VertexAI

[Vertex AI](https://cloud.google.com/vertex-ai) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.
 
description: Overview of categories of MLOps components.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# 📜 Overview

If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner.
 
description: Overview of third-party ZenML integrations.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}


# Integration overview

Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by **integrating** with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas.