wjayesh committed
Commit d7a1916 · verified · 1 Parent(s): 914f7ba

Upload basics.txt with huggingface_hub

Files changed (1)
  1. basics.txt +242 -2
basics.txt CHANGED
@@ -1,5 +1,5 @@
1
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
2
- Generated by Repomix on: 2025-01-30T10:25:45.743Z
3
 
4
  ================================================================
5
  File Summary
@@ -117,6 +117,11 @@ File: docs/book/user-guide/cloud-guide/cloud-guide.md
117
  description: Taking your ZenML workflow to the next level.
118
  ---
119
 
120
  # ☁️ Cloud guide
121
 
122
  This section of the guide consists of easy-to-follow guides on how to connect the major public clouds to your ZenML deployment. We achieve this by configuring a [stack](../production-guide/understand-stacks.md).
@@ -138,6 +143,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md
138
  description: Learn how to implement evaluation for RAG in just 65 lines of code.
139
  ---
140
 
141
  # Evaluation in 65 lines of code
142
 
143
  Our RAG guide included [a short example](../rag-with-zenml/rag-85-loc.md) for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most\_basic\_eval.py). The code that follows requires the functions from the earlier RAG pipeline code to work.
@@ -230,6 +240,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md
230
  description: Learn how to evaluate the performance of your RAG system in practice.
231
  ---
232
 
233
  # Evaluation in practice
234
 
235
  Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice.
@@ -279,6 +294,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/generation.md
279
  description: Evaluate the generation component of your RAG pipeline.
280
  ---
281
 
282
  # Generation evaluation
283
 
284
  Now that we have a sense of how to evaluate the retrieval component of our RAG
@@ -677,6 +697,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/README.md
677
  description: Track how your RAG pipeline improves using evaluation and metrics.
678
  ---
679
 
680
  # Evaluation and metrics
681
 
682
  In this section, we'll explore how to evaluate the performance of your RAG pipeline using metrics and visualizations. Evaluating your RAG pipeline is crucial to understanding how well it performs and identifying areas for improvement. With language models in particular, it's hard to evaluate their performance using traditional metrics like accuracy, precision, and recall. This is because language models generate text, which is inherently subjective and difficult to evaluate quantitatively.
@@ -715,6 +740,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md
715
  description: See how the retrieval component responds to changes in the pipeline.
716
  ---
717
 
718
  # Retrieval evaluation
719
 
720
  The retrieval component of our RAG pipeline is responsible for finding relevant
@@ -1064,6 +1094,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetun
1064
  description: Evaluate finetuned embeddings and compare to original base embeddings.
1065
  ---
1066
 
1067
  Now that we've finetuned our embeddings, we can evaluate them and compare to the
1068
  base embeddings. We have all the data saved and versioned already, and we will
1069
  reuse the same MatryoshkaLoss function for evaluation.
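  As a rough illustration, a loss of this kind is typically constructed along the following lines with `sentence-transformers`; the model name and Matryoshka dimensions below are placeholders, not the values used in the guide.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Placeholder base model and dimensions, for illustration only.
model = SentenceTransformer("all-MiniLM-L6-v2")
base_loss = MultipleNegativesRankingLoss(model)

# Wrap the base loss so the model is optimized at several truncated embedding sizes.
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[384, 256, 128, 64])
```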
@@ -1204,6 +1239,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddi
1204
  description: Finetune embeddings with Sentence Transformers.
1205
  ---
1206
 
1207
  We now have a dataset that we can use to finetune our embeddings. You can
1208
  [inspect the positive and negative examples](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) on the Hugging Face [datasets page](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) since
1209
  our previous pipeline pushed the data there.
@@ -1308,6 +1348,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddi
1308
  description: Finetune embeddings on custom synthetic data to improve retrieval performance.
1309
  ---
1310
 
1311
  We previously learned [how to use RAG with ZenML](../rag-with-zenml/README.md) to
1312
  build a production-ready RAG pipeline. In this section, we will explore how to
1313
  optimize and maintain your embedding models through synthetic data generation and
@@ -1355,6 +1400,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-gen
1355
  description: Generate synthetic data with distilabel to finetune embeddings.
1356
  ---
1357
 
1358
  We already have [a dataset of technical documentation](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0) that was generated
1359
  previously while we were working on the RAG pipeline. We'll use this dataset
1360
  to generate synthetic data with `distilabel`. You can inspect the data directly
@@ -1900,6 +1950,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md
1900
  description: Learn how to implement an LLM fine-tuning pipeline in just 100 lines of code.
1901
  ---
1902
 
1903
  # Quick Start: Fine-tuning an LLM
1904
 
1905
  There's a lot to understand about LLM fine-tuning - from choosing the right base model to preparing your dataset and selecting training parameters. But let's start with a concrete implementation to see how it works in practice. The following 100 lines of code demonstrate:
@@ -2118,6 +2173,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md
2118
  description: Finetune LLMs for specific tasks or to improve performance and cost.
2119
  ---
2120
 
2121
  So far in our LLMOps journey we've learned [how to use RAG with
2122
  ZenML](../rag-with-zenml/README.md), how to [evaluate our RAG
2123
  systems](../evaluation/README.md), how to [use reranking to improve retrieval](../reranking/README.md), and how to
@@ -2167,6 +2227,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelera
2167
  description: "Finetuning an LLM with Accelerate and PEFT"
2168
  ---
2169
 
2170
  # Finetuning an LLM with Accelerate and PEFT
2171
 
2172
  We're finally ready to get our hands on the code and see how it works. In this
@@ -2420,6 +2485,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-fine
2420
  description: Get started with finetuning LLMs by picking a use case and data.
2421
  ---
2422
 
2423
  # Starter choices for finetuning LLMs
2424
 
2425
  Finetuning large language models can be a powerful way to tailor their
@@ -2590,6 +2660,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune
2590
  description: Deciding when is the right time to finetune LLMs.
2591
  ---
2592
 
2593
  # Why and when to finetune LLMs
2594
 
2595
  This guide is intended to be a practical overview that gets you started with
@@ -2678,6 +2753,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipel
2678
  description: Use your RAG components to generate responses to prompts.
2679
  ---
2680
 
2681
  # Simple RAG Inference
2682
 
2683
  Now that we have our index store, we can use it to make queries based on the
@@ -2842,6 +2922,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md
2842
  description: Understand how to ingest and preprocess data for RAG pipelines with ZenML.
2843
  ---
2844
 
2845
  The first step in setting up a RAG pipeline is to ingest the data that will be
2846
  used to train and evaluate the retriever and generator models. This data can
2847
  include a large corpus of documents, as well as any relevant metadata or
@@ -3018,6 +3103,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md
3018
  description: Generate embeddings to improve retrieval performance.
3019
  ---
3020
 
3021
  # Generating Embeddings for Retrieval
3022
 
3023
  In this section, we'll explore how to generate embeddings for your data to
@@ -3233,6 +3323,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md
3233
  description: Learn how to implement a RAG pipeline in just 85 lines of code.
3234
  ---
3235
 
3236
  There's a lot of theory and context to think about when it comes to RAG, but
3237
  let's start with a quick implementation in code to motivate what follows. The
3238
  following 85 lines do the following:
@@ -3374,6 +3469,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md
3374
  description: RAG is a sensible way to get started with LLMs.
3375
  ---
3376
 
3377
  # RAG Pipelines with ZenML
3378
 
3379
  Retrieval-Augmented Generation (RAG) is a powerful technique that combines the
@@ -3414,6 +3514,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-v
3414
  description: Store embeddings in a vector database for efficient retrieval.
3415
  ---
3416
 
3417
  # Storing embeddings in a vector database
3418
 
3419
  The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query.
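  For a concrete sense of what retrieving by similarity looks like, here is a minimal sketch assuming a PostgreSQL database with the pgvector extension and an already-populated `chunks(content, embedding)` table; the connection string, table name, and query embedding are placeholders.

```python
import psycopg2

# Placeholder connection string and query embedding, for illustration only.
conn = psycopg2.connect("postgresql://user:pass@localhost:5432/rag")
query_embedding = [0.12, -0.03, 0.4]  # would normally come from the embedding model

with conn.cursor() as cur:
    cur.execute(
        """
        SELECT content
        FROM chunks
        ORDER BY embedding <=> %s::vector  -- cosine distance, closest chunks first
        LIMIT 5
        """,
        (str(query_embedding),),
    )
    most_relevant_chunks = [row[0] for row in cur.fetchall()]
```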
@@ -3550,6 +3655,11 @@ description: >-
3550
  benefits.
3551
  ---
3552
 
3553
  # Understanding Retrieval-Augmented Generation (RAG)
3554
 
3555
  LLMs are powerful but not without their limitations. They are prone to generating incorrect responses, especially when it's unclear what the input prompt is asking for. They are also limited in the amount of text they can understand and generate. While some LLMs can handle more than 1 million tokens of input, most open-source models can handle far less. Your use case also might not require all the complexity and cost associated with running a large LLM.
@@ -3603,6 +3713,11 @@ File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performan
3603
  description: Evaluate the performance of your reranking model.
3604
  ---
3605
 
3606
  # Evaluating reranking performance
3607
 
3608
  We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML.
@@ -3830,6 +3945,11 @@ File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md
3830
  description: Learn how to implement reranking in ZenML.
3831
  ---
3832
 
3833
  # Implementing Reranking in ZenML
3834
 
3835
  We already have a working RAG pipeline, so inserting a reranker into the
@@ -3988,6 +4108,11 @@ File: docs/book/user-guide/llmops-guide/reranking/README.md
3988
  description: Add reranking to your RAG inference for better retrieval performance.
3989
  ---
3990
 
3991
  Rerankers are a crucial component of retrieval systems that use LLMs. They help
3992
  improve the quality of the retrieved documents by reordering them based on
3993
  additional features or scores. In this section, we'll explore how to add a
@@ -4017,6 +4142,11 @@ File: docs/book/user-guide/llmops-guide/reranking/reranking.md
4017
  description: Add reranking to your RAG inference for better retrieval performance.
4018
  ---
4019
 
4020
  Rerankers are a crucial component of retrieval systems that use LLMs. They help
4021
  improve the quality of the retrieved documents by reordering them based on
4022
  additional features or scores. In this section, we'll explore how to add a
@@ -4046,6 +4176,11 @@ File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md
4046
  description: Understand how reranking works.
4047
  ---
4048
 
4049
  ## What is reranking?
4050
 
4051
  Reranking is the process of refining the initial ranking of documents retrieved
@@ -4176,6 +4311,11 @@ description: >-
4176
  Delivery
4177
  ---
4178
 
4179
  # Set up CI/CD
4180
 
4181
  Until now, we have been executing ZenML pipelines locally. While this is a good mode of operating pipelines, in
@@ -4327,6 +4467,11 @@ File: docs/book/user-guide/production-guide/cloud-orchestration.md
4327
  description: Orchestrate using cloud resources.
4328
  ---
4329
 
4330
  # Orchestrate on the cloud
4331
 
4332
  Until now, we've only run pipelines locally. The next step is to get free from our local machines and transition our pipelines to execute on the cloud. This will enable you to run your MLOps pipelines in a cloud environment, leveraging the scalability and robustness that cloud platforms offer.
@@ -4515,6 +4660,11 @@ File: docs/book/user-guide/production-guide/configure-pipeline.md
4515
  description: Add more resources to your pipeline configuration.
4516
  ---
4517
 
4518
  # Configure your pipeline to add compute
4519
 
4520
  Now that we have our pipeline up and running in the cloud, you might be wondering how ZenML figured out what sort of dependencies to install in the Docker image that we just ran on the VM. The answer lies in the [runner script we executed (i.e. run.py)](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/run.py#L215), in particular, these lines:
@@ -4685,6 +4835,11 @@ description: >-
4685
  MLOps projects.
4686
  ---
4687
 
4688
  # Configure a code repository
4689
 
4690
  Throughout the lifecycle of an MLOps pipeline, it can get quite tiresome to wait for a Docker build every time you run a pipeline (even if the local Docker cache is used). However, there is a way to have just one pipeline build and keep reusing it until a change to the pipeline environment is made: by connecting a code repository.
@@ -4793,6 +4948,11 @@ File: docs/book/user-guide/production-guide/deploying-zenml.md
4793
  description: Deploying ZenML is the first step to production.
4794
  ---
4795
 
4796
  # Deploying ZenML
4797
 
4798
  When you first get started with ZenML, it is based on the following architecture on your machine:
@@ -4867,6 +5027,11 @@ File: docs/book/user-guide/production-guide/end-to-end.md
4867
  description: Put your new knowledge in action with an end-to-end project
4868
  ---
4869
 
4870
  # An end-to-end project
4871
 
4872
  That was awesome! We learned so many advanced MLOps production concepts:
@@ -4965,6 +5130,11 @@ File: docs/book/user-guide/production-guide/remote-storage.md
4965
  description: Transitioning to remote artifact storage.
4966
  ---
4967
 
4968
  # Connecting remote storage
4969
 
4970
  In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage!
@@ -5187,6 +5357,11 @@ File: docs/book/user-guide/production-guide/understand-stacks.md
5187
  description: Learning how to switch the infrastructure backend of your code.
5188
  ---
5189
 
5190
  # Understanding stacks
5191
 
5192
  Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running [your first pipelines](../starter-guide/create-an-ml-pipeline.md), you might have already noticed the term `stack` in the logs and on the dashboard.
@@ -5415,6 +5590,11 @@ File: docs/book/user-guide/starter-guide/cache-previous-executions.md
5415
  description: Iterating quickly with ZenML through caching.
5416
  ---
5417
 
5418
  # Cache previous executions
5419
 
5420
  Developing machine learning pipelines is iterative in nature. ZenML speeds up development in this work with step caching.
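  In ZenML this behavior is typically toggled with the `enable_cache` parameter on a step or a whole pipeline; a minimal sketch:

```python
from zenml import step

# Opt a single step out of caching so it re-runs on every pipeline execution;
# other steps can still be skipped when their inputs and code are unchanged.
@step(enable_cache=False)
def fetch_latest_data() -> list:
    return [1, 2, 3]
```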
@@ -5599,6 +5779,11 @@ File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md
5599
  description: Start with the basics of steps and pipelines.
5600
  ---
5601
 
5602
  # Create an ML pipeline
5603
 
5604
  In the quest for production-ready ML models, workflows can quickly become complex. Decoupling and standardizing stages such as data ingestion, preprocessing, and model evaluation allows for more manageable, reusable, and scalable processes. ZenML pipelines facilitate this by enabling each stage—represented as **Steps**—to be modularly developed and then integrated smoothly into an end-to-end **Pipeline**.
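  A minimal sketch of that Step/Pipeline pattern follows; the step names and logic here are purely illustrative.

```python
from zenml import pipeline, step

@step
def ingest_data() -> list:
    return [0.5, 1.5, 2.5]

@step
def preprocess(raw: list) -> list:
    return [x * 2 for x in raw]

@step
def evaluate(processed: list) -> float:
    return sum(processed) / len(processed)

@pipeline
def example_pipeline():
    evaluate(preprocess(ingest_data()))

if __name__ == "__main__":
    example_pipeline()  # each decorated function runs as a tracked step
```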
@@ -5939,6 +6124,11 @@ File: docs/book/user-guide/starter-guide/manage-artifacts.md
5939
  description: Understand and adjust how ZenML versions your data.
5940
  ---
5941
 
5942
  # Manage artifacts
5943
 
5944
  Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact—be it data, models, or evaluations—is automatically tracked and versioned upon pipeline execution.
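  As a sketch of what that looks like from the consumer side, a previously produced artifact version can usually be fetched back through the ZenML client; the artifact name below is a placeholder.

```python
from zenml.client import Client

# Placeholder artifact name; any artifact produced by a pipeline run would work.
artifact_version = Client().get_artifact_version("my_dataset")
dataset = artifact_version.load()  # materializes the stored, versioned data
```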
@@ -6575,6 +6765,11 @@ File: docs/book/user-guide/starter-guide/starter-project.md
6575
  description: Put your new knowledge into action with a simple starter project
6576
  ---
6577
 
6578
  # A starter project
6579
 
6580
  By now, you have understood some of the basic pillars of an MLOps system:
@@ -6645,6 +6840,11 @@ File: docs/book/user-guide/starter-guide/track-ml-models.md
6645
  description: Creating a full picture of an ML model using the Model Control Plane
6646
  ---
6647
 
6648
  # Track ML models
6649
 
6650
  ![Walkthrough of ZenML Model Control Plane (Dashboard available only on ZenML Pro)](../../.gitbook/assets/mcp_walkthrough.gif)
@@ -6910,7 +7110,7 @@ ZenML Model and versions are some of the most powerful features in ZenML. To und
6910
 
6911
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
6912
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
6913
- Generated by Repomix on: 2025-01-30T10:25:46.732Z
6914
 
6915
  ================================================================
6916
  File Summary
@@ -6990,6 +7190,11 @@ File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md
6990
  description: Learning how to develop a custom secret store.
6991
  ---
6992
 
6993
  # Custom secret stores
6994
 
6995
  The secrets store acts as the one-stop shop for all the secrets to which your pipeline or stack components might need access. It is responsible for storing, updating and deleting _only the secrets values_ for ZenML secrets, while the ZenML secret metadata is stored in the SQL database. The secrets store interface implemented by all available secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` core module and looks more or less like this:
@@ -7096,6 +7301,11 @@ File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.
7096
  description: Deploying ZenML to Huggingface Spaces.
7097
  ---
7098
 
7099
  # Deploy using HuggingFace Spaces
7100
 
7101
  A quick way to deploy ZenML and get started is to use [HuggingFace Spaces](https://huggingface.co/spaces). HuggingFace Spaces is a platform for hosting and sharing ML projects and workflows, and it also works to deploy ZenML. You can be up and running in minutes (for free) with a hosted ZenML server, so it's a good option if you want to try out ZenML without any infrastructure overhead.
@@ -7175,6 +7385,11 @@ File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md
7175
  description: Deploying ZenML with custom Docker images.
7176
  ---
7177
 
7178
  # Deploy with custom images
7179
 
7180
  In most cases, deploying ZenML with the default `zenmlhub/zenml-server` Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image:
@@ -7373,6 +7588,11 @@ File: docs/book/getting-started/deploying-zenml/secret-management.md
7373
  description: Configuring the secrets store.
7374
  ---
7375
 
7376
  # Secret store configuration and management
7377
 
7378
  ## Centralized secrets store
@@ -7508,6 +7728,11 @@ description: >
7508
  Learn how to use the ZenML Pro API.
7509
  ---
7510
 
7511
  # Using the ZenML Pro API
7512
 
7513
  ZenML Pro offers a powerful API that allows you to interact with your ZenML resources. Whether you're using the [SaaS version](https://cloud.zenml.io) or a self-hosted ZenML Pro instance, you can leverage this API to manage tenants, organizations, users, roles, and more.
@@ -7629,6 +7854,11 @@ description: >
7629
  Learn about the different roles and permissions you can assign to your team members in ZenML Pro.
7630
  ---
7631
 
7632
  # ZenML Pro: Roles and Permissions
7633
 
7634
  ZenML Pro offers a robust role-based access control (RBAC) system to manage permissions across your organization and tenants. This guide will help you understand the different roles available, how to assign them, and how to create custom roles tailored to your team's needs.
@@ -7771,6 +8001,11 @@ description: >
7771
  Learn about Teams in ZenML Pro and how they can be used to manage groups of users across your organization and tenants.
7772
  ---
7773
 
7774
  # Organize users in Teams
7775
 
7776
  ZenML Pro introduces the concept of Teams to help you manage groups of users efficiently. A team is a collection of users that acts as a single entity within your organization and tenants. This guide will help you understand how teams work, how to create and manage them, and how to use them effectively in your MLOps workflows.
@@ -7850,6 +8085,11 @@ description: >
7850
  Learn how to use tenants in ZenML Pro.
7851
  ---
7852
 
7853
  # Tenants
7854
 
7855
  Tenants are individual, isolated deployments of the ZenML server. Each tenant has its own set of users, roles, and resources. Essentially, everything you do in ZenML Pro revolves around a tenant: all of your pipelines, stacks, runs, connectors and so on are scoped to a tenant.
 
1
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
2
+ Generated by Repomix on: 2025-02-06T16:56:09.144Z
3
 
4
  ================================================================
5
  File Summary
 
117
  description: Taking your ZenML workflow to the next level.
118
  ---
119
 
120
+ {% hint style="warning" %}
121
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
122
+ {% endhint %}
123
+
124
+
125
  # ☁️ Cloud guide
126
 
127
  This section of the guide consists of easy-to-follow guides on how to connect the major public clouds to your ZenML deployment. We achieve this by configuring a [stack](../production-guide/understand-stacks.md).
 
143
  description: Learn how to implement evaluation for RAG in just 65 lines of code.
144
  ---
145
 
146
+ {% hint style="warning" %}
147
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
148
+ {% endhint %}
149
+
150
+
151
  # Evaluation in 65 lines of code
152
 
153
  Our RAG guide included [a short example](../rag-with-zenml/rag-85-loc.md) for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most\_basic\_eval.py). The code that follows requires the functions from the earlier RAG pipeline code to work.
 
240
  description: Learn how to evaluate the performance of your RAG system in practice.
241
  ---
242
 
243
+ {% hint style="warning" %}
244
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
245
+ {% endhint %}
246
+
247
+
248
  # Evaluation in practice
249
 
250
  Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice.
 
294
  description: Evaluate the generation component of your RAG pipeline.
295
  ---
296
 
297
+ {% hint style="warning" %}
298
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
299
+ {% endhint %}
300
+
301
+
302
  # Generation evaluation
303
 
304
  Now that we have a sense of how to evaluate the retrieval component of our RAG
 
697
  description: Track how your RAG pipeline improves using evaluation and metrics.
698
  ---
699
 
700
+ {% hint style="warning" %}
701
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
702
+ {% endhint %}
703
+
704
+
705
  # Evaluation and metrics
706
 
707
  In this section, we'll explore how to evaluate the performance of your RAG pipeline using metrics and visualizations. Evaluating your RAG pipeline is crucial to understanding how well it performs and identifying areas for improvement. With language models in particular, it's hard to evaluate their performance using traditional metrics like accuracy, precision, and recall. This is because language models generate text, which is inherently subjective and difficult to evaluate quantitatively.
 
740
  description: See how the retrieval component responds to changes in the pipeline.
741
  ---
742
 
743
+ {% hint style="warning" %}
744
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
745
+ {% endhint %}
746
+
747
+
748
  # Retrieval evaluation
749
 
750
  The retrieval component of our RAG pipeline is responsible for finding relevant
 
1094
  description: Evaluate finetuned embeddings and compare to original base embeddings.
1095
  ---
1096
 
1097
+ {% hint style="warning" %}
1098
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
1099
+ {% endhint %}
1100
+
1101
+
1102
  Now that we've finetuned our embeddings, we can evaluate them and compare to the
1103
  base embeddings. We have all the data saved and versioned already, and we will
1104
  reuse the same MatryoshkaLoss function for evaluation.
 
1239
  description: Finetune embeddings with Sentence Transformers.
1240
  ---
1241
 
1242
+ {% hint style="warning" %}
1243
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
1244
+ {% endhint %}
1245
+
1246
+
1247
  We now have a dataset that we can use to finetune our embeddings. You can
1248
  [inspect the positive and negative examples](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) on the Hugging Face [datasets page](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) since
1249
  our previous pipeline pushed the data there.
 
1348
  description: Finetune embeddings on custom synthetic data to improve retrieval performance.
1349
  ---
1350
 
1351
+ {% hint style="warning" %}
1352
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
1353
+ {% endhint %}
1354
+
1355
+
1356
  We previously learned [how to use RAG with ZenML](../rag-with-zenml/README.md) to
1357
  build a production-ready RAG pipeline. In this section, we will explore how to
1358
  optimize and maintain your embedding models through synthetic data generation and
 
1400
  description: Generate synthetic data with distilabel to finetune embeddings.
1401
  ---
1402
 
1403
+ {% hint style="warning" %}
1404
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
1405
+ {% endhint %}
1406
+
1407
+
1408
  We already have [a dataset of technical documentation](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0) that was generated
1409
  previously while we were working on the RAG pipeline. We'll use this dataset
1410
  to generate synthetic data with `distilabel`. You can inspect the data directly
 
1950
  description: Learn how to implement an LLM fine-tuning pipeline in just 100 lines of code.
1951
  ---
1952
 
1953
+ {% hint style="warning" %}
1954
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
1955
+ {% endhint %}
1956
+
1957
+
1958
  # Quick Start: Fine-tuning an LLM
1959
 
1960
  There's a lot to understand about LLM fine-tuning - from choosing the right base model to preparing your dataset and selecting training parameters. But let's start with a concrete implementation to see how it works in practice. The following 100 lines of code demonstrate:
 
2173
  description: Finetune LLMs for specific tasks or to improve performance and cost.
2174
  ---
2175
 
2176
+ {% hint style="warning" %}
2177
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
2178
+ {% endhint %}
2179
+
2180
+
2181
  So far in our LLMOps journey we've learned [how to use RAG with
2182
  ZenML](../rag-with-zenml/README.md), how to [evaluate our RAG
2183
  systems](../evaluation/README.md), how to [use reranking to improve retrieval](../reranking/README.md), and how to
 
2227
  description: "Finetuning an LLM with Accelerate and PEFT"
2228
  ---
2229
 
2230
+ {% hint style="warning" %}
2231
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
2232
+ {% endhint %}
2233
+
2234
+
2235
  # Finetuning an LLM with Accelerate and PEFT
2236
 
2237
  We're finally ready to get our hands on the code and see how it works. In this
 
2485
  description: Get started with finetuning LLMs by picking a use case and data.
2486
  ---
2487
 
2488
+ {% hint style="warning" %}
2489
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
2490
+ {% endhint %}
2491
+
2492
+
2493
  # Starter choices for finetuning LLMs
2494
 
2495
  Finetuning large language models can be a powerful way to tailor their
 
2660
  description: Deciding when is the right time to finetune LLMs.
2661
  ---
2662
 
2663
+ {% hint style="warning" %}
2664
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
2665
+ {% endhint %}
2666
+
2667
+
2668
  # Why and when to finetune LLMs
2669
 
2670
  This guide is intended to be a practical overview that gets you started with
 
2753
  description: Use your RAG components to generate responses to prompts.
2754
  ---
2755
 
2756
+ {% hint style="warning" %}
2757
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
2758
+ {% endhint %}
2759
+
2760
+
2761
  # Simple RAG Inference
2762
 
2763
  Now that we have our index store, we can use it to make queries based on the
 
2922
  description: Understand how to ingest and preprocess data for RAG pipelines with ZenML.
2923
  ---
2924
 
2925
+ {% hint style="warning" %}
2926
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
2927
+ {% endhint %}
2928
+
2929
+
2930
  The first step in setting up a RAG pipeline is to ingest the data that will be
2931
  used to train and evaluate the retriever and generator models. This data can
2932
  include a large corpus of documents, as well as any relevant metadata or
 
3103
  description: Generate embeddings to improve retrieval performance.
3104
  ---
3105
 
3106
+ {% hint style="warning" %}
3107
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
3108
+ {% endhint %}
3109
+
3110
+
3111
  # Generating Embeddings for Retrieval
3112
 
3113
  In this section, we'll explore how to generate embeddings for your data to
 
3323
  description: Learn how to implement a RAG pipeline in just 85 lines of code.
3324
  ---
3325
 
3326
+ {% hint style="warning" %}
3327
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
3328
+ {% endhint %}
3329
+
3330
+
3331
  There's a lot of theory and context to think about when it comes to RAG, but
3332
  let's start with a quick implementation in code to motivate what follows. The
3333
  following 85 lines do the following:
 
3469
  description: RAG is a sensible way to get started with LLMs.
3470
  ---
3471
 
3472
+ {% hint style="warning" %}
3473
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
3474
+ {% endhint %}
3475
+
3476
+
3477
  # RAG Pipelines with ZenML
3478
 
3479
  Retrieval-Augmented Generation (RAG) is a powerful technique that combines the
 
3514
  description: Store embeddings in a vector database for efficient retrieval.
3515
  ---
3516
 
3517
+ {% hint style="warning" %}
3518
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
3519
+ {% endhint %}
3520
+
3521
+
3522
  # Storing embeddings in a vector database
3523
 
3524
  The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query.
 
3655
  benefits.
3656
  ---
3657
 
3658
+ {% hint style="warning" %}
3659
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
3660
+ {% endhint %}
3661
+
3662
+
3663
  # Understanding Retrieval-Augmented Generation (RAG)
3664
 
3665
  LLMs are powerful but not without their limitations. They are prone to generating incorrect responses, especially when it's unclear what the input prompt is asking for. They are also limited in the amount of text they can understand and generate. While some LLMs can handle more than 1 million tokens of input, most open-source models can handle far less. Your use case also might not require all the complexity and cost associated with running a large LLM.
 
3713
  description: Evaluate the performance of your reranking model.
3714
  ---
3715
 
3716
+ {% hint style="warning" %}
3717
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
3718
+ {% endhint %}
3719
+
3720
+
3721
  # Evaluating reranking performance
3722
 
3723
  We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML.
 
3945
  description: Learn how to implement reranking in ZenML.
3946
  ---
3947
 
3948
+ {% hint style="warning" %}
3949
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
3950
+ {% endhint %}
3951
+
3952
+
3953
  # Implementing Reranking in ZenML
3954
 
3955
  We already have a working RAG pipeline, so inserting a reranker into the
 
4108
  description: Add reranking to your RAG inference for better retrieval performance.
4109
  ---
4110
 
4111
+ {% hint style="warning" %}
4112
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4113
+ {% endhint %}
4114
+
4115
+
4116
  Rerankers are a crucial component of retrieval systems that use LLMs. They help
4117
  improve the quality of the retrieved documents by reordering them based on
4118
  additional features or scores. In this section, we'll explore how to add a
 
4142
  description: Add reranking to your RAG inference for better retrieval performance.
4143
  ---
4144
 
4145
+ {% hint style="warning" %}
4146
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4147
+ {% endhint %}
4148
+
4149
+
4150
  Rerankers are a crucial component of retrieval systems that use LLMs. They help
4151
  improve the quality of the retrieved documents by reordering them based on
4152
  additional features or scores. In this section, we'll explore how to add a
 
4176
  description: Understand how reranking works.
4177
  ---
4178
 
4179
+ {% hint style="warning" %}
4180
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4181
+ {% endhint %}
4182
+
4183
+
4184
  ## What is reranking?
4185
 
4186
  Reranking is the process of refining the initial ranking of documents retrieved
 
4311
  Delivery
4312
  ---
4313
 
4314
+ {% hint style="warning" %}
4315
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4316
+ {% endhint %}
4317
+
4318
+
4319
  # Set up CI/CD
4320
 
4321
  Until now, we have been executing ZenML pipelines locally. While this is a good mode of operating pipelines, in
 
4467
  description: Orchestrate using cloud resources.
4468
  ---
4469
 
4470
+ {% hint style="warning" %}
4471
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4472
+ {% endhint %}
4473
+
4474
+
4475
  # Orchestrate on the cloud
4476
 
4477
  Until now, we've only run pipelines locally. The next step is to get free from our local machines and transition our pipelines to execute on the cloud. This will enable you to run your MLOps pipelines in a cloud environment, leveraging the scalability and robustness that cloud platforms offer.
 
4660
  description: Add more resources to your pipeline configuration.
4661
  ---
4662
 
4663
+ {% hint style="warning" %}
4664
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4665
+ {% endhint %}
4666
+
4667
+
4668
  # Configure your pipeline to add compute
4669
 
4670
  Now that we have our pipeline up and running in the cloud, you might be wondering how ZenML figured out what sort of dependencies to install in the Docker image that we just ran on the VM. The answer lies in the [runner script we executed (i.e. run.py)](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/run.py#L215), in particular, these lines:
 
4835
  MLOps projects.
4836
  ---
4837
 
4838
+ {% hint style="warning" %}
4839
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4840
+ {% endhint %}
4841
+
4842
+
4843
  # Configure a code repository
4844
 
4845
  Throughout the lifecycle of an MLOps pipeline, it can get quite tiresome to wait for a Docker build every time you run a pipeline (even if the local Docker cache is used). However, there is a way to have just one pipeline build and keep reusing it until a change to the pipeline environment is made: by connecting a code repository.
 
4948
  description: Deploying ZenML is the first step to production.
4949
  ---
4950
 
4951
+ {% hint style="warning" %}
4952
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
4953
+ {% endhint %}
4954
+
4955
+
4956
  # Deploying ZenML
4957
 
4958
  When you first get started with ZenML, it is based on the following architecture on your machine:
 
5027
  description: Put your new knowledge in action with an end-to-end project
5028
  ---
5029
 
5030
+ {% hint style="warning" %}
5031
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
5032
+ {% endhint %}
5033
+
5034
+
5035
  # An end-to-end project
5036
 
5037
  That was awesome! We learned so many advanced MLOps production concepts:
 
5130
  description: Transitioning to remote artifact storage.
5131
  ---
5132
 
5133
+ {% hint style="warning" %}
5134
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
5135
+ {% endhint %}
5136
+
5137
+
5138
  # Connecting remote storage
5139
 
5140
  In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage!
 
5357
  description: Learning how to switch the infrastructure backend of your code.
5358
  ---
5359
 
5360
+ {% hint style="warning" %}
5361
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
5362
+ {% endhint %}
5363
+
5364
+
5365
  # Understanding stacks
5366
 
5367
  Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running [your first pipelines](../starter-guide/create-an-ml-pipeline.md), you might have already noticed the term `stack` in the logs and on the dashboard.
 
5590
  description: Iterating quickly with ZenML through caching.
5591
  ---
5592
 
5593
+ {% hint style="warning" %}
5594
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
5595
+ {% endhint %}
5596
+
5597
+
5598
  # Cache previous executions
5599
 
5600
  Developing machine learning pipelines is iterative in nature. ZenML speeds up development in this work with step caching.
 
5779
  description: Start with the basics of steps and pipelines.
5780
  ---
5781
 
5782
+ {% hint style="warning" %}
5783
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
5784
+ {% endhint %}
5785
+
5786
+
5787
  # Create an ML pipeline
5788
 
5789
  In the quest for production-ready ML models, workflows can quickly become complex. Decoupling and standardizing stages such as data ingestion, preprocessing, and model evaluation allows for more manageable, reusable, and scalable processes. ZenML pipelines facilitate this by enabling each stage—represented as **Steps**—to be modularly developed and then integrated smoothly into an end-to-end **Pipeline**.
 
6124
  description: Understand and adjust how ZenML versions your data.
6125
  ---
6126
 
6127
+ {% hint style="warning" %}
6128
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
6129
+ {% endhint %}
6130
+
6131
+
6132
  # Manage artifacts
6133
 
6134
  Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact—be it data, models, or evaluations—is automatically tracked and versioned upon pipeline execution.
 
6765
  description: Put your new knowledge into action with a simple starter project
6766
  ---
6767
 
6768
+ {% hint style="warning" %}
6769
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
6770
+ {% endhint %}
6771
+
6772
+
6773
  # A starter project
6774
 
6775
  By now, you have understood some of the basic pillars of an MLOps system:
 
6840
  description: Creating a full picture of an ML model using the Model Control Plane
6841
  ---
6842
 
6843
+ {% hint style="warning" %}
6844
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
6845
+ {% endhint %}
6846
+
6847
+
6848
  # Track ML models
6849
 
6850
  ![Walkthrough of ZenML Model Control Plane (Dashboard available only on ZenML Pro)](../../.gitbook/assets/mcp_walkthrough.gif)
 
7110
 
7111
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
7112
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
7113
+ Generated by Repomix on: 2025-02-06T16:56:10.199Z
7114
 
7115
  ================================================================
7116
  File Summary
 
7190
  description: Learning how to develop a custom secret store.
7191
  ---
7192
 
7193
+ {% hint style="warning" %}
7194
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
7195
+ {% endhint %}
7196
+
7197
+
7198
  # Custom secret stores
7199
 
7200
  The secrets store acts as the one-stop shop for all the secrets to which your pipeline or stack components might need access. It is responsible for storing, updating and deleting _only the secrets values_ for ZenML secrets, while the ZenML secret metadata is stored in the SQL database. The secrets store interface implemented by all available secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` core module and looks more or less like this:
 
7301
  description: Deploying ZenML to Huggingface Spaces.
7302
  ---
7303
 
7304
+ {% hint style="warning" %}
7305
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
7306
+ {% endhint %}
7307
+
7308
+
7309
  # Deploy using HuggingFace Spaces
7310
 
7311
  A quick way to deploy ZenML and get started is to use [HuggingFace Spaces](https://huggingface.co/spaces). HuggingFace Spaces is a platform for hosting and sharing ML projects and workflows, and it also works to deploy ZenML. You can be up and running in minutes (for free) with a hosted ZenML server, so it's a good option if you want to try out ZenML without any infrastructure overhead.
 
7385
  description: Deploying ZenML with custom Docker images.
7386
  ---
7387
 
7388
+ {% hint style="warning" %}
7389
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
7390
+ {% endhint %}
7391
+
7392
+
7393
  # Deploy with custom images
7394
 
7395
  In most cases, deploying ZenML with the default `zenmlhub/zenml-server` Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image:
 
7588
  description: Configuring the secrets store.
7589
  ---
7590
 
7591
+ {% hint style="warning" %}
7592
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
7593
+ {% endhint %}
7594
+
7595
+
7596
  # Secret store configuration and management
7597
 
7598
  ## Centralized secrets store
 
7728
  Learn how to use the ZenML Pro API.
7729
  ---
7730
 
7731
+ {% hint style="warning" %}
7732
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
7733
+ {% endhint %}
7734
+
7735
+
7736
  # Using the ZenML Pro API
7737
 
7738
  ZenML Pro offers a powerful API that allows you to interact with your ZenML resources. Whether you're using the [SaaS version](https://cloud.zenml.io) or a self-hosted ZenML Pro instance, you can leverage this API to manage tenants, organizations, users, roles, and more.
 
7854
  Learn about the different roles and permissions you can assign to your team members in ZenML Pro.
7855
  ---
7856
 
7857
+ {% hint style="warning" %}
7858
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
7859
+ {% endhint %}
7860
+
7861
+
7862
  # ZenML Pro: Roles and Permissions
7863
 
7864
  ZenML Pro offers a robust role-based access control (RBAC) system to manage permissions across your organization and tenants. This guide will help you understand the different roles available, how to assign them, and how to create custom roles tailored to your team's needs.
 
8001
  Learn about Teams in ZenML Pro and how they can be used to manage groups of users across your organization and tenants.
8002
  ---
8003
 
8004
+ {% hint style="warning" %}
8005
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
8006
+ {% endhint %}
8007
+
8008
+
8009
  # Organize users in Teams
8010
 
8011
  ZenML Pro introduces the concept of Teams to help you manage groups of users efficiently. A team is a collection of users that acts as a single entity within your organization and tenants. This guide will help you understand how teams work, how to create and manage them, and how to use them effectively in your MLOps workflows.
 
8085
  Learn how to use tenants in ZenML Pro.
8086
  ---
8087
 
8088
+ {% hint style="warning" %}
8089
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
8090
+ {% endhint %}
8091
+
8092
+
8093
  # Tenants
8094
 
8095
  Tenants are individual, isolated deployments of the ZenML server. Each tenant has its own set of users, roles, and resources. Essentially, everything you do in ZenML Pro revolves around a tenant: all of your pipelines, stacks, runs, connectors and so on are scoped to a tenant.