This file is a merged representation of the entire codebase, combining all repository files into a single document. Generated by Repomix on: 2025-02-06T16:56:09.144Z ================================================================ File Summary ================================================================ Purpose: -------- This file contains a packed representation of the entire repository's contents. It is designed to be easily consumable by AI systems for analysis, code review, or other automated processes. File Format: ------------ The content is organized as follows: 1. This summary section 2. Repository information 3. Directory structure 4. Multiple file entries, each consisting of: a. A separator line (================) b. The file path (File: path/to/file) c. Another separator line d. The full contents of the file e. A blank line Usage Guidelines: ----------------- - This file should be treated as read-only. Any changes should be made to the original repository files, not this packed version. - When processing this file, use the file path to distinguish between different files in the repository. - Be aware that this file may contain sensitive information. Handle it with the same level of security as you would the original repository. Notes: ------ - Some files may have been excluded based on .gitignore rules and Repomix's configuration. - Binary files are not included in this packed representation. Please refer to the Repository Structure section for a complete list of file paths, including binary files. Additional Info: ---------------- ================================================================ Directory Structure ================================================================ docs/ book/ user-guide/ cloud-guide/ cloud-guide.md llmops-guide/ evaluation/ evaluation-in-65-loc.md evaluation-in-practice.md generation.md README.md retrieval.md finetuning-embeddings/ evaluating-finetuned-embeddings.md finetuning-embeddings-with-sentence-transformers.md finetuning-embeddings.md synthetic-data-generation.md finetuning-llms/ deploying-finetuned-models.md evaluation-for-finetuning.md finetuning-100-loc.md finetuning-llms.md finetuning-with-accelerate.md next-steps.md starter-choices-for-finetuning-llms.md why-and-when-to-finetune-llms.md rag-with-zenml/ basic-rag-inference-pipeline.md data-ingestion.md embeddings-generation.md rag-85-loc.md README.md storing-embeddings-in-a-vector-database.md understanding-rag.md reranking/ evaluating-reranking-performance.md implementing-reranking.md README.md reranking.md understanding-reranking.md README.md production-guide/ ci-cd.md cloud-orchestration.md configure-pipeline.md connect-code-repository.md deploying-zenml.md end-to-end.md README.md remote-storage.md understand-stacks.md starter-guide/ cache-previous-executions.md create-an-ml-pipeline.md manage-artifacts.md README.md starter-project.md track-ml-models.md ================================================================ Files ================================================================ ================ File: docs/book/user-guide/cloud-guide/cloud-guide.md ================ --- description: Taking your ZenML workflow to the next level. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # ☁️ Cloud guide This section of the guide consists of easy to follow guides on how to connect the major public clouds to your ZenML deployment. 
We achieve this by configuring a [stack](../production-guide/understand-stacks.md). A `stack` is the configuration of tools and infrastructure that your pipelines can run on. When you run a pipeline, ZenML performs a different series of actions depending on the stack.

*ZenML is the translation layer that allows your code to run on any of your stacks.*
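To make the idea concrete, here is a minimal sketch (not taken from this guide's project code) of why the stack matters: the pipeline definition below stays exactly the same whether it runs on your laptop or in the cloud; switching the active stack is what changes where and how it executes.

```python
from zenml import pipeline, step


@step
def train() -> str:
    # Placeholder training logic; in a real project this would fit a model.
    return "trained-model"


@pipeline
def training_pipeline():
    train()


if __name__ == "__main__":
    # Runs on whatever stack is currently active, e.g. the default local
    # stack or a cloud stack you have registered and set as active.
    training_pipeline()
```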

Note that this guide focuses on *registering* a stack, meaning that the resources required to run pipelines have already been *provisioned*. To provision the underlying infrastructure, you can either do so manually or use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md), the [stack registration wizard](../../how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md), or [the ZenML Terraform modules](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md).
================ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md ================ --- description: Learn how to implement evaluation for RAG in just 65 lines of code. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Evaluation in 65 lines of code Our RAG guide included [a short example](../rag-with-zenml/rag-85-loc.md) for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most\_basic\_eval.py). The code that follows requires the functions from the earlier RAG pipeline code to work. ```python # ...previous RAG pipeline code here... # see https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_rag_pipeline.py eval_data = [ { "question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots.", }, { "question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds in the melodic caverns of ZenML World.", }, { "question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World.", }, ] def evaluate_retrieval(question, expected_answer, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) score = any( any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks ) return score def evaluate_generation(question, expected_answer, generated_answer): client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) chat_completion = client.chat.completions.create( messages=[ { "role": "system", "content": "You are an evaluation judge. Given a question, an expected answer, and a generated answer, your task is to determine if the generated answer is relevant and accurate. Respond with 'YES' if the generated answer is satisfactory, or 'NO' if it is not.", }, { "role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?", }, ], model="gpt-3.5-turbo", ) judgment = chat_completion.choices[0].message.content.strip().lower() return judgment == "yes" retrieval_scores = [] generation_scores = [] for item in eval_data: retrieval_score = evaluate_retrieval( item["question"], item["expected_answer"], corpus ) retrieval_scores.append(retrieval_score) generated_answer = answer_question(item["question"], corpus) generation_score = evaluate_generation( item["question"], item["expected_answer"], generated_answer ) generation_scores.append(generation_score) retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores) generation_accuracy = sum(generation_scores) / len(generation_scores) print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") print(f"Generation Accuracy: {generation_accuracy:.2f}") ``` As you can see, we've added two evaluation functions: `evaluate_retrieval` and `evaluate_generation`. 
The `evaluate_retrieval` function checks if the retrieved chunks contain any words from the expected answer. The `evaluate_generation` function uses OpenAI's chat completion LLM to evaluate the quality of the generated answer. We then loop through the evaluation data, which contains questions and expected answers, and evaluate the retrieval and generation components of our RAG pipeline. Finally, we calculate the accuracy of both components and print the results: ![](../../../.gitbook/assets/evaluation-65-loc.png) As you can see, we get 100% accuracy for both retrieval and generation in this example. Not bad! The sections that follow will provide a more detailed and sophisticated implementation of RAG evaluation, but this example shows how you can think about it at a high level!
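Before moving on, if you want to spot-check a single item by hand before running the full loop, you can call the evaluation helpers directly. This assumes the `corpus`, `retrieve_relevant_chunks`, and `answer_question` objects from the 85-lines-of-code RAG example are already in scope:

```python
# Quick spot-check of one evaluation item, reusing the functions defined above.
item = eval_data[0]
retrieved = retrieve_relevant_chunks(item["question"], corpus, top_n=2)
print("Retrieved chunks:", retrieved)
print("Retrieval hit:", evaluate_retrieval(item["question"], item["expected_answer"], corpus))

generated = answer_question(item["question"], corpus)
print("Generated answer:", generated)
print("Generation judged OK:", evaluate_generation(item["question"], item["expected_answer"], generated))
```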
================ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md ================
---
description: Learn how to evaluate the performance of your RAG system in practice.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}

# Evaluation in practice

Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice.

Our example project includes the evaluation as a separate pipeline that optionally runs after the main pipeline that generates and populates the embeddings. This is a good practice to follow, as it allows you to separate the concerns of generating the embeddings and evaluating them. Depending on the specific use case, the evaluations could instead be included as part of the main pipeline and used as a gating mechanism to determine whether the embeddings are good enough to be used in production (a sketch of what such a gating step might look like appears at the end of this page).

Given some of the performance constraints of the LLM judge, it might be worth experimenting with a local LLM judge for evaluation during development and then running the full evaluation using a cloud LLM like Anthropic's Claude or OpenAI's GPT-3.5 or 4. This can help you iterate faster and get a sense of how well your embeddings are performing before committing to the cost of running the full evaluation.

## Automated evaluation isn't a silver bullet

While automating the evaluation process can save you time and effort, it doesn't replace the need for a human to review the results. The LLM judge is expensive to run, and it takes time to get the results back. Automation can help you focus on the details and the data, but a human still needs to review the results and make sure that the embeddings (and the RAG system as a whole) are performing as expected.

## When and how much to evaluate

The frequency and depth of evaluation will depend on your specific use case and the constraints of your project. In an ideal world, you would evaluate the performance of your embeddings and the RAG system as a whole as often as possible, but in practice, you'll need to balance the cost of running the evaluation with the need to iterate quickly.

Some tests can be run quickly and cheaply (notably the tests of the retrieval system) while others (like the LLM judge) are more expensive and time-consuming. You should structure your RAG tests and evaluation to reflect this, with some tests running frequently and others running less often, just as you would in any other software project.

There's more we could do to improve our evaluation system, but for now we can continue onwards to [adding a reranker](../reranking/README.md) to improve our retrieval. This will allow us to improve the performance of our retrieval system without needing to retrain the embeddings. We'll cover this in the next section.

## Try it out!

To see how this works in practice, you can run the evaluation pipeline using the project code. This will give you a sense of how the evaluation process works and you can then play with and modify the evaluation code.
To run the evaluation pipeline, first clone the project repository:

```bash
git clone https://github.com/zenml-io/zenml-projects.git
```

Then navigate to the `llm-complete-guide` directory and follow the instructions in the `README.md` file. (You'll have to have first run the main pipeline to generate the embeddings.) You can then run the evaluation pipeline with the following command:

```bash
python run.py --evaluation
```

This will run the evaluation pipeline and output the results to the console. You can then inspect the progress, logs, and results in the dashboard!
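As mentioned earlier on this page, the evaluations could also act as a gate inside the main pipeline. The following is a minimal sketch of what such a gating step might look like; the step name and threshold are illustrative and not part of the project code:

```python
from typing import Annotated

from zenml import step


@step
def evaluation_gate(
    failure_rate_retrieval: float,
    max_failure_rate: float = 20.0,
) -> Annotated[bool, "evaluation_passed"]:
    """Fail the pipeline run if the retrieval evaluation is below the bar."""
    if failure_rate_retrieval > max_failure_rate:
        # Raising here fails the step (and therefore the pipeline run),
        # preventing the embeddings from being promoted to production.
        raise RuntimeError(
            f"Retrieval failure rate of {failure_rate_retrieval}% exceeds the "
            f"allowed maximum of {max_failure_rate}%."
        )
    return True
```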
================ File: docs/book/user-guide/llmops-guide/evaluation/generation.md ================ --- description: Evaluate the generation component of your RAG pipeline. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Generation evaluation Now that we have a sense of how to evaluate the retrieval component of our RAG pipeline, let's move on to the generation component. The generation component is responsible for generating the answer to the question based on the retrieved context. At this point, our evaluation starts to move into more subjective territory. It's harder to come up with metrics that can accurately capture the quality of the generated answers. However, there are some things we can do. As with the [retrieval evaluation](retrieval.md), we can start with a simple approach and then move on to more sophisticated methods. ## Handcrafted evaluation tests As in the retrieval evaluation, we can start by putting together a set of examples where we know that our generated output should or shouldn't include certain terms. For example, if we're generating answers to questions about which orchestrators ZenML supports, we can check that the generated answers include terms like "Airflow" and "Kubeflow" (since we do support them) and exclude terms like "Flyte" or "Prefect" (since we don't (yet!) support them). These handcrafted tests should be driven by mistakes that you've already seen in the RAG output. The negative example of "Flyte" and "Prefect" showing up in the list of supported orchestrators, for example, shows up sometimes when you use GPT 3.5 as the LLM. ![](/docs/book/.gitbook/assets/generation-eval-manual.png) As another example, when you make a query asking 'what is the default orchestrator in ZenML?' you would expect that the answer would include the word 'local', so we can make a test case to confirm that. You can view our starter set of these tests [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py#L28-L55). It's better to start with something small and simple and then expand as is needed. There's no need for complicated harnesses or frameworks at this stage. **`bad_answers` table:** | Question | Bad Words | |----------|-----------| | What orchestrators does ZenML support? | AWS Step Functions, Flyte, Prefect, Dagster | | What is the default orchestrator in ZenML? | Flyte, AWS Step Functions | **`bad_immediate_responses` table:** | Question | Bad Words | |----------|-----------| | Does ZenML support the Flyte orchestrator out of the box? | Yes | **`good_responses` table:** | Question | Good Words | |----------|------------| | What are the supported orchestrators in ZenML? Please list as many of the supported ones as possible. | Kubeflow, Airflow | | What is the default orchestrator in ZenML? | local | Each type of test then catches a specific type of mistake. 
For example: ```python class TestResult(BaseModel): success: bool question: str keyword: str = "" response: str def test_content_for_bad_words( item: dict, n_items_retrieved: int = 5 ) -> TestResult: question = item["question"] bad_words = item["bad_words"] response = process_input_with_retrieval( question, n_items_retrieved=n_items_retrieved ) for word in bad_words: if word in response: return TestResult( success=False, question=question, keyword=word, response=response, ) return TestResult(success=True, question=question, response=response) ``` Here we're testing that a particular word doesn't show up in the generated response. If we find the word, then we return a failure, otherwise we return a success. This is a simple example, but you can imagine more complex tests that check for the presence of multiple words, or the presence of a word in a particular context. We pass these custom tests into a test runner that keeps track of how many are failing and also logs those to the console when they do: ```python def run_tests(test_data: list, test_function: Callable) -> float: failures = 0 total_tests = len(test_data) for item in test_data: test_result = test_function(item) if not test_result.success: logging.error( f"Test failed for question: '{test_result.question}'. Found word: '{test_result.keyword}'. Response: '{test_result.response}'" ) failures += 1 failure_rate = (failures / total_tests) * 100 logging.info( f"Total tests: {total_tests}. Failures: {failures}. Failure rate: {failure_rate}%" ) return round(failure_rate, 2) ``` Our end-to-end evaluation of the generation component is then a combination of these tests: ```python @step def e2e_evaluation() -> ( Annotated[float, "failure_rate_bad_answers"], Annotated[float, "failure_rate_bad_immediate_responses"], Annotated[float, "failure_rate_good_responses"], ): logging.info("Testing bad answers...") failure_rate_bad_answers = run_tests( bad_answers, test_content_for_bad_words ) logging.info(f"Bad answers failure rate: {failure_rate_bad_answers}%") logging.info("Testing bad immediate responses...") failure_rate_bad_immediate_responses = run_tests( bad_immediate_responses, test_response_starts_with_bad_words ) logging.info( f"Bad immediate responses failure rate: {failure_rate_bad_immediate_responses}%" ) logging.info("Testing good responses...") failure_rate_good_responses = run_tests( good_responses, test_content_contains_good_words ) logging.info( f"Good responses failure rate: {failure_rate_good_responses}%" ) return ( failure_rate_bad_answers, failure_rate_bad_immediate_responses, failure_rate_good_responses, ) ``` Running the tests using different LLMs will give different results. Here our Ollama Mixtral did worse than GPT 3.5, for example, but there were still some failures with GPT 3.5. This is a good way to get a sense of how well your generation component is doing. As you become more familiar with the kinds of outputs your LLM generates, you can add the hard ones to this test suite. This helps prevent regressions and is directly related to the quality of the output you're getting. This way you can optimize for your specific use case. ## Automated evaluation using another LLM Another way to evaluate the generation component is to use another LLM to grade the output of the LLM you're evaluating. This is a more sophisticated approach and requires a bit more setup. We can use the pre-generated questions and the associated context as input to the LLM and then use another LLM to assess the quality of the output on a scale of 1 to 5. 
This is a more quantitative approach and since it's automated it can run across a larger set of data. ![](/docs/book/.gitbook/assets/generation-eval-automated.png) {% hint style="warning" %} LLMs don't always do well on this kind of evaluation where numbers are involved. There are some studies showing that LLMs can be biased towards certain numbers or ranges of numbers. This is something to keep in mind when using this approach. Qualitative evaluations are often more reliable but then that means a human has to do the evaluation. {% endhint %} We can start by setting up a Pydantic model to hold the data we need. We set constraints to ensure that the data we're getting back are only integers between 1 and 5, inclusive: ```python class LLMJudgedTestResult(BaseModel): toxicity: conint(ge=1, le=5) faithfulness: conint(ge=1, le=5) helpfulness: conint(ge=1, le=5) relevance: conint(ge=1, le=5) ``` We can use this in a test function that: - takes a question and a context as inputs - generates an answer using the LLM we're evaluating - makes a call to an (optionally different) LLM we're using to judge the quality of the answer getting back a score for each of the four categories in JSON format - parses the JSON and returns the result of the evaluation as our Pydantic model instance Pydantic handles the validation of the JSON input for us, so we can be sure that we're getting the data we expect and in a form that we can use. ```python def llm_judged_test_e2e( question: str, context: str, n_items_retrieved: int = 5, ) -> LLMJudgedTestResult: """E2E tests judged by an LLM. Args: item (dict): The item to test. n_items_retrieved (int): The number of items to retrieve. Returns: LLMJudgedTestResult: The result of the test. """ logging.debug("Starting LLM judged test...") response = process_input_with_retrieval( question, n_items_retrieved=n_items_retrieved ) logging.debug("Input processed with retrieval.") prompt = f""" Please analyze the following text and provided context to provide scores for toxicity, faithfulness, helpfulness, and relevance. The toxicity score should be an integer between 1 and 5, where 1 is not toxic and 5 is very toxic. The faithfulness score should be an integer between 1 and 5, where 1 is not faithful to the provided context and 5 is very faithful. The helpfulness score should be an integer between 1 and 5, where 1 is not helpful and 5 is very helpful. The relevance score should be an integer between 1 and 5, where 1 is not relevant given the context and 5 is very relevant. **Text:** {response} **Context:** {context} **Output format:** {{ "toxicity": int, "faithfulness": int, "helpfulness": int, "relevance": int }} """ logging.debug("Prompt created.") response = completion( model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}] ) json_output = response["choices"][0]["message"]["content"].strip() logging.info("Received response from model.") logging.debug(json_output) try: return LLMJudgedTestResult(**json.loads(json_output)) except json.JSONDecodeError as e: logging.error(f"JSON bad output: {json_output}") raise e ``` Currently we're not handling retries of the output from the LLM in the case where the JSON isn't output correctly, but potentially that's something we might want to do. 
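One lightweight way to add such retries, sketched below, is a small wrapper that retries on JSON parsing failures before giving up; this assumes the `llm_judged_test_e2e` function above stays unchanged, and the attempt count and wait time are illustrative:

```python
import json
import logging
import time


def llm_judged_test_with_retries(
    question: str,
    context: str,
    max_attempts: int = 3,
    wait_seconds: float = 2.0,
) -> LLMJudgedTestResult:
    """Retry the LLM-judged test when the model returns malformed JSON."""
    for attempt in range(1, max_attempts + 1):
        try:
            return llm_judged_test_e2e(question, context)
        except json.JSONDecodeError:
            if attempt == max_attempts:
                # Out of retries; let the caller decide how to handle it.
                raise
            logging.warning(
                f"Malformed JSON on attempt {attempt}/{max_attempts}; retrying..."
            )
            time.sleep(wait_seconds)
```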
We can then run this test across a set of questions and contexts: ```python def run_llm_judged_tests( test_function: Callable, sample_size: int = 50, ) -> Tuple[ Annotated[float, "average_toxicity_score"], Annotated[float, "average_faithfulness_score"], Annotated[float, "average_helpfulness_score"], Annotated[float, "average_relevance_score"], ]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") # Shuffle the dataset and select a random sample sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) total_tests = len(sampled_dataset) total_toxicity = 0 total_faithfulness = 0 total_helpfulness = 0 total_relevance = 0 for item in sampled_dataset: question = item["generated_questions"][0] context = item["page_content"] try: result = test_function(question, context) except json.JSONDecodeError as e: logging.error(f"Failed for question: {question}. Error: {e}") total_tests -= 1 continue total_toxicity += result.toxicity total_faithfulness += result.faithfulness total_helpfulness += result.helpfulness total_relevance += result.relevance average_toxicity_score = total_toxicity / total_tests average_faithfulness_score = total_faithfulness / total_tests average_helpfulness_score = total_helpfulness / total_tests average_relevance_score = total_relevance / total_tests return ( round(average_toxicity_score, 3), round(average_faithfulness_score, 3), round(average_helpfulness_score, 3), round(average_relevance_score, 3), ) ``` You'll want to use your most capable and reliable LLM to do the judging. In our case, we used the new GPT-4 Turbo. The quality of the evaluation is only as good as the LLM you're using to do the judging and there is a large difference between GPT-3.5 and GPT-4 Turbo in terms of the quality of the output, not least in its ability to output JSON correctly. Here was the output following an evaluation for 50 randomly sampled datapoints: ```shell Step e2e_evaluation_llm_judged has started. Average toxicity: 1.0 Average faithfulness: 4.787 Average helpfulness: 4.595 Average relevance: 4.87 Step e2e_evaluation_llm_judged has finished in 8m51s. Pipeline run has finished in 8m52s. ``` This took around 9 minutes to run using GPT-4 Turbo as the evaluator and the default GPT-3.5 as the LLM being evaluated. To take this further, there are a number of ways it might be improved: - **Retries**: As mentioned above, we're not currently handling retries of the output from the LLM in the case where the JSON isn't output correctly. This could be improved by adding a retry mechanism that waits for a certain amount of time before trying again. (We could potentially use the [`instructor`](https://github.com/jxnl/instructor) library to handle this specifically.) - **Use OpenAI's 'JSON mode'**: OpenAI has a [JSON mode](https://platform.openai.com/docs/guides/text-generation/json-mode) that can be used to ensure that the output is always in JSON format. This could be used to ensure that the output is always in the correct format. - **More sophisticated evaluation**: The evaluation we're doing here is quite simple. We're just asking for a score in four categories. There are more sophisticated ways to evaluate the quality of the output, such as using multiple evaluators and taking the average score, or using a more complex scoring system that takes into account the context of the question and the context of the answer. - **Batch processing**: We're running the evaluation one question at a time here. 
It would be more efficient to run the evaluation in batches to speed up the process (see the sketch at the end of this page). - **More data**: We're only using 50 samples here. This could be increased to get a more accurate picture of the quality of the output. - **More LLMs**: We're only using GPT-4 Turbo here. It would be interesting to see how other LLMs perform as evaluators. - **Handcrafted questions based on context**: We're using the generated questions here. It would be interesting to see how the LLM performs when given handcrafted questions that are based on the context of the question. - **Human in the loop**: The LLM actually provides qualitative feedback on the output as well as the JSON scores. This data could be passed into an annotation tool to get human feedback on the quality of the output. This would be a more reliable way to evaluate the quality of the output and would offer some insight into the kinds of mistakes the LLM is making.

Most notably, the scores we're currently getting are pretty high, so it would make sense to pass in harder questions and be more specific in the judging criteria. This will give us more room to improve, since the system is certainly not perfect.

While this evaluation approach serves as a solid foundation, it's worth noting that there are other frameworks available that can further enhance the evaluation process. Frameworks such as [`ragas`](https://github.com/explodinggradients/ragas), [`trulens`](https://www.trulens.org/), [DeepEval](https://docs.confident-ai.com/), and [UpTrain](https://github.com/uptrain-ai/uptrain) can be integrated with ZenML depending on your specific use case and understanding of the underlying concepts. These frameworks, although potentially complex to set up and use, can provide more sophisticated evaluation capabilities as your project evolves and grows in complexity.

We now have a working evaluation of both the retrieval and generation components of our RAG pipeline. We can use this to track how our pipeline improves as we make changes to them.

## Code Example

To explore the full code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) repository and for this section, particularly [the `eval_e2e.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py).
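For the batch-processing idea mentioned in the list above, a thread pool is often enough since the judging calls are I/O-bound. Here is a rough sketch; it reuses the `llm_judged_test_e2e` function defined earlier, and the worker count is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List, Tuple


def run_llm_judged_tests_batched(
    questions_and_contexts: List[Tuple[str, str]],
    max_workers: int = 4,
) -> List[LLMJudgedTestResult]:
    """Run the LLM-judged test concurrently over many (question, context) pairs."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [
            executor.submit(llm_judged_test_e2e, question, context)
            for question, context in questions_and_contexts
        ]
        # Collect results in submission order; any failure propagates to the caller.
        return [future.result() for future in futures]
```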
================ File: docs/book/user-guide/llmops-guide/evaluation/README.md ================ --- description: Track how your RAG pipeline improves using evaluation and metrics. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Evaluation and metrics In this section, we'll explore how to evaluate the performance of your RAG pipeline using metrics and visualizations. Evaluating your RAG pipeline is crucial to understanding how well it performs and identifying areas for improvement. With language models in particular, it's hard to evaluate their performance using traditional metrics like accuracy, precision, and recall. This is because language models generate text, which is inherently subjective and difficult to evaluate quantitatively. Our RAG pipeline is a whole system, moreover, not just a model, and evaluating it requires a holistic approach. We'll look at various ways to evaluate the performance of your RAG pipeline but the two main areas we'll focus on are: * [Retrieval evaluation](retrieval.md), so checking that the retrieved documents or document chunks are relevant to the query. * [Generation evaluation](generation.md), so checking that the generated text is coherent and helpful for our specific use case. ![](../../../.gitbook/assets/evaluation-two-parts.png) In the previous section we built out a basic RAG pipeline for our documentation question-and-answer use case. We'll use this pipeline to demonstrate how to evaluate the performance of your RAG pipeline. {% hint style="info" %} If you were running this in a production setting, you might want to set up evaluation to check the performance of a raw LLM model (i.e. without any retrieval / RAG components) as a baseline, and then compare this to the performance of your RAG pipeline. This will help you understand how much value the retrieval and generation components are adding to your system. We won't cover this here, but it's a good practice to keep in mind. {% endhint %} ## What are we evaluating? When evaluating the performance of your RAG pipeline, your specific use case and the extent to which you can tolerate errors or lower performance will determine what you need to evaluate. For instance, if you're building a user-facing chatbot, you might need to evaluate the following: * Are the retrieved documents relevant to the query? * Is the generated answer coherent and helpful for your specific use case? * Does the generated answer contain hate speech or any sort of toxic language? These are just examples, and the specific metrics and methods you use will depend on your use case. The [generation evaluation](generation.md) functions as an end-to-end evaluation of the RAG pipeline, as it checks the final output of the system. It's during these end-to-end evaluations that you'll have most leeway to use subjective metrics, as you're evaluating the system as a whole. Before we dive into the details, let's take a moment to look at [a short high-level code example](evaluation-in-65-loc.md) showcasing the two main areas of evaluation. Afterwards the following sections will cover the two main areas of evaluation in more detail [as well as offer practical guidance](../evaluation/evaluation-in-practice.md) on when to run these evaluations and what to look for in the results.
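To make the baseline idea from the hint above a little more concrete, here is a rough sketch of how you might score a raw LLM baseline against the RAG pipeline on the same questions. It reuses names from the 65-lines-of-code example that follows, and `answer_question_without_rag` is a hypothetical helper that queries the LLM directly without any retrieved context:

```python
# Hypothetical sketch: compare a raw LLM baseline against the RAG pipeline.
rag_scores, baseline_scores = [], []
for item in eval_data:
    rag_answer = answer_question(item["question"], corpus)
    baseline_answer = answer_question_without_rag(item["question"])
    rag_scores.append(
        evaluate_generation(item["question"], item["expected_answer"], rag_answer)
    )
    baseline_scores.append(
        evaluate_generation(item["question"], item["expected_answer"], baseline_answer)
    )

print(f"RAG accuracy: {sum(rag_scores) / len(rag_scores):.2f}")
print(f"Baseline accuracy: {sum(baseline_scores) / len(baseline_scores):.2f}")
```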
================ File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md ================ --- description: See how the retrieval component responds to changes in the pipeline. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Retrieval evaluation The retrieval component of our RAG pipeline is responsible for finding relevant documents or document chunks to feed into the generation component. In this section we'll explore how to evaluate the performance of the retrieval component of your RAG pipeline. We're checking how accurate the semantic search is, or in other words how relevant the retrieved documents are to the query. Our retrieval component takes the incoming query and converts it into a vector or embedded representation that can be used to search for relevant documents. We then use this representation to search through a corpus of documents and retrieve the most relevant ones. ## Manual evaluation using handcrafted queries The most naive and simple way to check this would be to handcraft some queries where we know the specific documents needed to answer it. We can then check if the retrieval component is able to retrieve these documents. This is a manual evaluation process and can be time-consuming, but it's a good way to get a sense of how well the retrieval component is working. It can also be useful to target known edge cases or difficult queries to see how the retrieval component handles those known scenarios. ![](/docs/book/.gitbook/assets/retrieval-eval-manual.png) Implementing this is pretty simple - you just need to create some queries and check the retrieved documents. Having tested the basic inference of our RAG setup quite a bit, there were some clear areas where the retrieval component could be improved. I looked in our documentation to find some examples where the information could only be found in a single page and then wrote some queries that would require the retrieval component to find that page. For example, the query "How do I get going with the Label Studio integration? What are the first steps?" would require the retrieval component to find [the Label Studio integration page](../../../component-guide/annotators/label-studio.md). Some of the other examples used are: | Question | URL Ending | |----------|------------| | How do I get going with the Label Studio integration? What are the first steps? | stacks-and-components/component-guide/annotators/label-studio | | How can I write my own custom materializer? | user-guide/advanced-guide/data-management/handle-custom-data-types | | How do I generate embeddings as part of a RAG pipeline when using ZenML? | user-guide/llmops-guide/rag-with-zenml/embeddings-generation | | How do I use failure hooks in my ZenML pipeline? | user-guide/advanced-guide/pipelining-features/use-failure-success-hooks | | Can I deploy ZenML self-hosted with Helm? How do I do it? | deploying-zenml/zenml-self-hosted/deploy-with-helm | For the retrieval pipeline, all we have to do is encode the query as a vector and then query the PostgreSQL database for the most similar vectors. We then check whether the URL for the document we thought must show up is actually present in the top `n` results. 
```python def query_similar_docs(question: str, url_ending: str) -> tuple: embedded_question = get_embeddings(question) db_conn = get_db_conn() top_similar_docs_urls = get_topn_similar_docs( embedded_question, db_conn, n=5, only_urls=True ) urls = [url[0] for url in top_similar_docs_urls] # Unpacking URLs from tuples return (question, url_ending, urls) def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: total_tests = len(question_doc_pairs) failures = 0 for pair in question_doc_pairs: question, url_ending, urls = query_similar_docs( pair["question"], pair["url_ending"] ) if all(url_ending not in url for url in urls): logging.error( f"Failed for question: {question}. Expected URL ending: {url_ending}. Got: {urls}" ) failures += 1 logging.info(f"Total tests: {total_tests}. Failures: {failures}") failure_rate = (failures / total_tests) * 100 return round(failure_rate, 2) ``` We include some logging so that when running the pipeline locally we can get some immediate feedback logged to the console. This functionality can then be packaged up into a ZenML step once we're happy it does what we need: ```python @step def retrieval_evaluation_small() -> Annotated[float, "small_failure_rate_retrieval"]: failure_rate = test_retrieved_docs_retrieve_best_url(question_doc_pairs) logging.info(f"Retrieval failure rate: {failure_rate}%") return failure_rate ``` We got a 20% failure rate on the first run of this test, which was a good sign that the retrieval component could be improved. We only had 5 test cases, so this was just a starting point. In reality, you'd want to keep adding more test cases to cover a wider range of scenarios. You'll discover these failure cases as you use the system more and more, so it's a good idea to keep a record of them and add them to your test suite. You'd also want to examine the logs to see exactly which query failed. In our case, checking the logs in the ZenML dashboard, we find the following: ``` Failed for question: How do I generate embeddings as part of a RAG pipeline when using ZenML?. Expected URL ending: user-guide/llmops-guide/ rag-with-zenml/embeddings-generation. Got: ['https://docs.zenml.io/user-guide/ llmops-guide/rag-with-zenml/data-ingestion', 'https://docs.zenml.io/user-guide/ llmops-guide/rag-with-zenml/understanding-rag', 'https://docs.zenml.io/v/docs/ user-guide/advanced-guide/data-management/handle-custom-data-types', 'https://docs. zenml.io/user-guide/llmops-guide/rag-with-zenml', 'https://docs.zenml.io/v/docs/ user-guide/llmops-guide/rag-with-zenml'] ``` We can maybe take a look at those documents to see why they were retrieved and not the one we expected. This is a good way to iteratively improve the retrieval component. ## Automated evaluation using synthetic generated queries For a broader evaluation we can examine a larger number of queries to check the retrieval component's performance. We do this by using an LLM to generate synthetic data. In our case we take the text of each document chunk and pass it to an LLM, telling it to generate a question. ![](/docs/book/.gitbook/assets/retrieval-eval-automated.png) For example, given the text: ``` zenml orchestrator connect ${ORCHESTRATOR\_NAME} -iHead on over to our docs to learn more about orchestrators and how to configure them. 
Container Registry export CONTAINER\_REGISTRY\_NAME=gcp\_container\_registry zenml container-registry register $ {CONTAINER\_REGISTRY\_NAME} --flavor=gcp --uri= # Connect the GCS orchestrator to the target gcp project via a GCP Service Connector zenml container-registry connect ${CONTAINER\_REGISTRY\_NAME} -i Head on over to our docs to learn more about container registries and how to configure them. 7) Create Stack export STACK\_NAME=gcp\_stack zenml stack register ${STACK\_NAME} -o $ {ORCHESTRATOR\_NAME} \\ a ${ARTIFACT\_STORE\_NAME} -c ${CONTAINER\_REGISTRY\_NAME} --set In case you want to also add any other stack components to this stack, feel free to do so. And you're already done! Just like that, you now have a fully working GCP stack ready to go. Feel free to take it for a spin by running a pipeline on it. Cleanup If you do not want to use any of the created resources in the future, simply delete the project you created. gcloud project delete
ZenML Scarf
PreviousScale compute to the cloud NextConfiguring ZenML Last updated 2 days ago ``` we might get the question: ``` How do I create and configure a GCP stack in ZenML using an orchestrator, container registry, and stack components, and how do I delete the resources when they are no longer needed? ``` If we generate questions for all of our chunks, we can then use these question-chunk pairs to evaluate the retrieval component. We pass the generated query to the retrieval component and then we check if the URL for the original document is in the top `n` results. To generate the synthetic queries we can use the following code: ```python from typing import List from litellm import completion from structures import Document from zenml import step LOCAL_MODEL = "ollama/mixtral" def generate_question(chunk: str, local: bool = False) -> str: model = LOCAL_MODEL if local else "gpt-3.5-turbo" response = completion( model=model, messages=[ { "content": f"This is some text from ZenML's documentation. Please generate a question that can be asked about this text: `{chunk}`", "role": "user", } ], api_base="http://localhost:11434" if local else None, ) return response.choices[0].message.content @step def generate_questions_from_chunks( docs_with_embeddings: List[Document], local: bool = False, ) -> List[Document]: for doc in docs_with_embeddings: doc.generated_questions = [generate_question(doc.page_content, local)] assert all(doc.generated_questions for doc in docs_with_embeddings) return docs_with_embeddings ``` As you can see, we're using [`litellm`](https://docs.litellm.ai/) again as the wrapper for the API calls. This allows us to switch between using a cloud LLM API (like OpenAI's GPT3.5 or 4) and a local LLM (like a quantized version of Mistral AI's Mixtral made available with [Ollama](https://ollama.com/). This has a number of advantages: - you keep your costs down by using a local model - you can iterate faster by not having to wait for API calls - you can use the same code for both local and cloud models For some tasks you'll want to use the best model your budget can afford, but for this task of question generation we're fine using a local and slightly less capable model. Even better is that it'll be much faster to generate the questions, especially using the basic setup we have here. To give you an indication of how long this process takes, generating 1800+ questions from an equivalent number of documentation chunks took a little over 45 minutes using the local model on a GPU-enabled machine with Ollama. ![](/docs/book/.gitbook/assets/hf-qa-embedding-questions.png) You can [view the generated dataset](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions) on the Hugging Face Hub [here](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions). This dataset contains the original document chunks, the generated questions, and the URL reference for the original document. Once we have the generated questions, we can then pass them to the retrieval component and check the results. For convenience we load the data from the Hugging Face Hub and then pass it to the retrieval component for evaluation. We shuffle the data and select a subset of it to speed up the evaluation process, but for a more thorough evaluation you could use the entire dataset. (The best practice of keeping a separate set of data for evaluation purposes is also recommended here, though we're not doing that in this example.) 
```python @step def retrieval_evaluation_full( sample_size: int = 50, ) -> Annotated[float, "full_failure_rate_retrieval"]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) total_tests = len(sampled_dataset) failures = 0 for item in sampled_dataset: generated_questions = item["generated_questions"] question = generated_questions[ 0 ] # Assuming only one question per item url_ending = item["filename"].split("/")[ -1 ] # Extract the URL ending from the filename _, _, urls = query_similar_docs(question, url_ending) if all(url_ending not in url for url in urls): logging.error( f"Failed for question: {question}. Expected URL ending: {url_ending}. Got: {urls}" ) failures += 1 logging.info(f"Total tests: {total_tests}. Failures: {failures}") failure_rate = (failures / total_tests) * 100 return round(failure_rate, 2) ``` When we run this as part of the evaluation pipeline, we get a 16% failure rate which again tells us that we're doing pretty well but that there is room for improvement. As a baseline, this is a good starting point. We can then iterate on the retrieval component to improve its performance. To take this further, there are a number of ways it might be improved: - **More diverse question generation**: The current question generation approach uses a single prompt to generate questions based on the document chunks. You could experiment with different prompts or techniques to generate a wider variety of questions that test the retrieval component more thoroughly. For example, you could prompt the LLM to generate questions of different types (factual, inferential, hypothetical, etc.) or difficulty levels. - **Semantic similarity metrics**: In addition to checking if the expected URL is retrieved, you could calculate semantic similarity scores between the query and the retrieved documents using metrics like cosine similarity. This would give you a more nuanced view of retrieval performance beyond just binary success/failure. You could track average similarity scores and use them as a target metric to improve. - **Comparative evaluation**: Test out different retrieval approaches (e.g. different embedding models, similarity search algorithms, etc.) and compare their performance on the same set of queries. This would help identify the strengths and weaknesses of each approach. - **Error analysis**: Do a deeper dive into the failure cases to understand patterns and potential areas for improvement. Are certain types of questions consistently failing? Are there common characteristics among the documents that aren't being retrieved properly? Insights from error analysis can guide targeted improvements to the retrieval component. To wrap up, the retrieval evaluation process we've walked through - from manual spot-checking with carefully crafted queries to automated testing with synthetic question-document pairs - has provided a solid baseline understanding of our retrieval component's performance. The failure rates of 20% on our handpicked test cases and 16% on a larger sample of generated queries highlight clear room for improvement, but also validate that our semantic search is generally pointing in the right direction. Going forward, we have a rich set of options to refine and upgrade our evaluation approach. 
Generating a more diverse array of test questions, leveraging semantic similarity metrics for a nuanced view beyond binary success/failure, performing comparative evaluations of different retrieval techniques, and conducting deep error analysis on failure cases - all of these avenues promise to yield valuable insights. As our RAG pipeline grows to handle more complex and wide-ranging queries, continued investment in comprehensive retrieval evaluation will be essential to ensure we're always surfacing the most relevant information. Before we start working to improve or tweak our retrieval based on these evaluation results, let's shift gears and look at how we can evaluate the generation component of our RAG pipeline. Assessing the quality of the final answers produced by the system is equally crucial to gauging the effectiveness of our retrieval. Retrieval is only half the story. The true test of our system is the quality of the final answers it generates by combining retrieved content with LLM intelligence. In the next section, we'll dive into a parallel evaluation process for the generation component, exploring both automated metrics and human assessment to get a well-rounded picture of our RAG pipeline's end-to-end performance. By shining a light on both halves of the RAG architecture, we'll be well-equipped to iterate and optimize our way to an ever more capable and reliable question-answering system. ## Code Example To explore the full code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) repository and for this section, particularly [the `eval_retrieval.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py).
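As a small illustration of the semantic similarity idea from the improvement list above, you could score how close the query embedding is to the embeddings of the retrieved documents. This is a sketch only; it assumes the `get_embeddings` helper from the retrieval code is in scope and returns vectors:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def average_query_document_similarity(question: str, retrieved_texts: list) -> float:
    """Average similarity between a query and the documents retrieved for it."""
    query_embedding = np.asarray(get_embeddings(question))
    similarities = [
        cosine_similarity(query_embedding, np.asarray(get_embeddings(text)))
        for text in retrieved_texts
    ]
    return sum(similarities) / len(similarities)
```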
================ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetuned-embeddings.md ================
---
description: Evaluate finetuned embeddings and compare to original base embeddings.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}

Now that we've finetuned our embeddings, we can evaluate them and compare to the base embeddings. We have all the data saved and versioned already, and we will reuse the same MatryoshkaLoss function for evaluation.

In code, our evaluation steps are easy to comprehend. Here, for example, is the base model evaluation step:

```python
from zenml import log_model_metadata, step


def evaluate_model(
    dataset: DatasetDict, model: SentenceTransformer
) -> Dict[str, float]:
    """Evaluate the given model on the dataset."""
    evaluator = get_evaluator(
        dataset=dataset,
        model=model,
    )
    return evaluator(model)


@step
def evaluate_base_model(
    dataset: DatasetDict,
) -> Annotated[Dict[str, float], "base_model_evaluation_results"]:
    """Evaluate the base model on the given dataset."""
    model = SentenceTransformer(
        EMBEDDINGS_MODEL_ID_BASELINE,
        device="cuda" if torch.cuda.is_available() else "cpu",
    )

    results = evaluate_model(
        dataset=dataset,
        model=model,
    )

    # Convert numpy.float64 values to regular Python floats
    # (needed for serialization)
    base_model_eval = {
        f"dim_{dim}_cosine_ndcg@10": float(
            results[f"dim_{dim}_cosine_ndcg@10"]
        )
        for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS
    }

    log_model_metadata(
        metadata={"base_model_eval": base_model_eval},
    )

    return results
```

We log the results for our core Matryoshka dimensions as model metadata to ZenML within our evaluation step. This will allow us to inspect these results from within [the Model Control Plane](../../../how-to/model-management-metrics/model-control-plane/README.md) (see below for more details). Our results come in the form of a dictionary of string keys and float values which will, like all step inputs and outputs, be versioned, tracked and saved in your artifact store.

## Visualizing results

It's possible to visualize results in a few different ways in ZenML, but one easy option is just to output your chart as a `PIL.Image` object. (See our [documentation on more ways to visualize your results](../../../how-to/data-artifact-management/visualize-artifacts/README.md).) The rest of the implementation of our `visualize_results` step is just simple `matplotlib` code to plot out the base model evaluation against the finetuned model evaluation. We represent the results as percentage values and horizontally stack the two sets to make comparison a little easier.

![Visualizing finetuned embeddings evaluation results](../../../.gitbook/assets/finetuning-embeddings-visualization.png)

We can see that our finetuned embeddings have improved the recall of our retrieval system across all of the dimensions, but the results are still not amazing. In a production setting, we would likely want to focus on improving the data being used for the embeddings training. In particular, we could consider stripping out some of the logs output from the documentation, and perhaps omit some pages which offer low signal for the retrieval task.
This embeddings finetuning was run purely on the full set of synthetic data generated by `distilabel` and `gpt-4o`, so we wouldn't necessarily expect to see huge improvements out of the box, especially when the underlying data chunks are complex and contain multiple topics.

## Model Control Plane as unified interface

Once all our pipelines are finished running, the best place to inspect our results as well as the artifacts and models we generated is the Model Control Plane.

![Model Control Plane](../../../.gitbook/assets/mcp-embeddings.gif)

The interface is split into sections that correspond to:

- the artifacts generated by our steps
- the models generated by our steps
- the metadata logged by our steps
- (potentially) any deployments of models made, though we didn't use this in this guide so far
- any pipeline runs associated with this 'Model'

We can easily see which are the latest artifact or technical model versions, as well as compare the actual values of our evals or inspect the hardware or hyperparameters used for training. This one-stop-shop interface is available on ZenML Pro and you can learn more about it in the [Model Control Plane documentation](../../../how-to/model-management-metrics/model-control-plane/README.md).

## Next Steps

Now that we've finetuned our embeddings and evaluated them, once they're in good enough shape for use we could bring them into [the original RAG pipeline](../rag-with-zenml/basic-rag-inference-pipeline.md), regenerate a new series of embeddings for our data, and then rerun our RAG retrieval evaluations to see how they've improved in our hand-crafted and LLM-powered evaluations.

The next section will cover [LLM finetuning and deployment](../finetuning-llms/finetuning-llms.md) as the final part of our LLMOps guide. (This section is currently still a work in progress, but if you're eager to try out LLM finetuning with ZenML, you can use [our LoRA project](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md) to get started. We also have [a blogpost guide](https://www.zenml.io/blog/how-to-finetune-llama-3-1-with-zenml) which takes you through [all the steps you need to finetune Llama 3.1](https://www.zenml.io/blog/how-to-finetune-llama-3-1-with-zenml) using GCP's Vertex AI with ZenML, including one-click stack creation!)

To try out the two pipelines, please follow the instructions in [the project repository README](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/README.md), and you can find the full code in that same directory.
================ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings-with-sentence-transformers.md ================
---
description: Finetune embeddings with Sentence Transformers.
---

{% hint style="warning" %}
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
{% endhint %}

We now have a dataset that we can use to finetune our embeddings. You can [inspect the positive and negative examples](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) on the Hugging Face [datasets page](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) since our previous pipeline pushed the data there.

![Synthetic data generated with distilabel for embeddings finetuning](../../../.gitbook/assets/distilabel-synthetic-dataset-hf.png)

Our pipeline for finetuning the embeddings is relatively simple. We'll do the following:

- load our data either from Hugging Face or [from Argilla via the ZenML annotation integration](../../../component-guide/annotators/argilla.md)
- finetune our model using the [Sentence Transformers](https://www.sbert.net/) library
- evaluate the base and finetuned embeddings
- visualize the results of the evaluation

![Embeddings finetuning pipeline with Sentence Transformers and ZenML](../../../.gitbook/assets/rag-finetuning-embeddings-pipeline.png)

## Loading data

By default the pipeline will load the data from our Hugging Face dataset. If you've annotated your data in Argilla, you can load the data from there instead. You'll just need to pass an `--argilla` flag to the Python invocation when you're running the pipeline like so:

```bash
python run.py --embeddings --argilla
```

This assumes that you've set up an Argilla annotator in your stack. The code checks for the annotator and downloads the data that was annotated in Argilla. Please see our [guide to using the Argilla integration with ZenML](../../../component-guide/annotators/argilla.md) for more details.

## Finetuning with Sentence Transformers

The `finetune` step in the pipeline is responsible for finetuning the embeddings model using the Sentence Transformers library. Let's break down the key aspects of this step:

1. **Model Loading**: The code loads the base model (`EMBEDDINGS_MODEL_ID_BASELINE`) using the Sentence Transformers library. It uses the SDPA (scaled dot-product attention) implementation for efficient training with Flash Attention 2.
2. **Loss Function**: The finetuning process employs a custom loss function called `MatryoshkaLoss`. This loss function is a wrapper around the `MultipleNegativesRankingLoss` provided by Sentence Transformers. The Matryoshka approach involves training the model with different embedding dimensions simultaneously. It allows the model to learn embeddings at various granularities, improving its performance across different embedding sizes.
3. **Dataset Preparation**: The training dataset is loaded from the provided `dataset` parameter. The code saves the training data to a temporary JSON file and then loads it using the Hugging Face `load_dataset` function.
4. **Evaluator**: An evaluator is created using the `get_evaluator` function. The evaluator is responsible for assessing the model's performance during training.
5. **Training Arguments**: The code sets up the training arguments using the `SentenceTransformerTrainingArguments` class.
It specifies various hyperparameters such as the number of epochs, batch size, learning rate, optimizer, precision (TF32 and BF16), and evaluation strategy. 6. **Trainer**: The `SentenceTransformerTrainer` is initialized with the model, training arguments, training dataset, loss function, and evaluator. The trainer handles the training process. The `trainer.train()` method is called to start the finetuning process. The model is trained for the specified number of epochs using the provided hyperparameters. 7. **Model Saving**: After training, the finetuned model is pushed to the Hugging Face Hub using the `trainer.model.push_to_hub()` method. The model is saved with the specified ID (`EMBEDDINGS_MODEL_ID_FINE_TUNED`). 9. **Metadata Logging**: The code logs relevant metadata about the training process, including the training parameters, hardware information, and accelerator details. 10. **Model Rehydration**: To handle materialization errors, the code saves the trained model to a temporary file, loads it back into a new `SentenceTransformer` instance, and returns the rehydrated model. (*Thanks and credit to Phil Schmid for [his tutorial on finetuning embeddings](https://www.philschmid.de/fine-tune-embedding-model-for-rag) with Sentence Transformers and a Matryoshka loss function. This project uses many ideas and some code from his implementation.*) ## Finetuning in code Here's a simplified code snippet highlighting the key parts of the finetuning process: ```python # Load the base model model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) # Define the loss function train_loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model)) # Prepare the training dataset train_dataset = load_dataset("json", data_files=train_dataset_path) # Set up the training arguments args = SentenceTransformerTrainingArguments(...) # Create the trainer trainer = SentenceTransformerTrainer(model, args, train_dataset, train_loss) # Start training trainer.train() # Save the finetuned model trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` The finetuning process leverages the capabilities of the Sentence Transformers library to efficiently train the embeddings model. The Matryoshka approach allows for learning embeddings at different dimensions simultaneously, enhancing the model's performance across various embedding sizes. Our model is finetuned, saved in the Hugging Face Hub for easy access and reference in subsequent steps, but also versioned and tracked within ZenML for full observability. At this point the pipeline will evaluate the base and finetuned embeddings and visualize the results.
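To make the `...` placeholders in the simplified snippet above a bit more concrete, here is a hedged sketch of what the Matryoshka loss and training arguments might look like. The dimensions and hyperparameters shown are illustrative assumptions rather than the project's exact configuration:

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Illustrative base model; the pipeline reads this from EMBEDDINGS_MODEL_ID_BASELINE.
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-large")

# Train at several nested embedding dimensions at once (values are illustrative).
train_loss = MatryoshkaLoss(
    model=model,
    loss=MultipleNegativesRankingLoss(model),
    matryoshka_dims=[1024, 512, 256, 128, 64],
)

# Hyperparameters here are placeholders, not the values used in the project.
args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-embeddings",
    num_train_epochs=4,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    tf32=True,  # precision settings as described above
    bf16=True,
)
```

These objects then slot into the `SentenceTransformerTrainer` call shown in the snippet above.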
================ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md ================ --- description: Finetune embeddings on custom synthetic data to improve retrieval performance. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} We previously learned [how to use RAG with ZenML](../rag-with-zenml/README.md) to build a production-ready RAG pipeline. In this section, we will explore how to optimize and maintain your embedding models through synthetic data generation and human feedback. So far, we've been using off-the-shelf embeddings, which provide a good baseline and decent performance on standard tasks. However, you can often significantly improve performance by finetuning embeddings on your own domain-specific data. Our RAG pipeline uses a retrieval-based approach, where it first retrieves the most relevant documents from our vector database, and then uses a language model to generate a response based on those documents. By finetuning our embeddings on a dataset of technical documentation similar to our target domain, we can improve the retrieval step and overall performance of the RAG pipeline. The work of finetuning embeddings based on synthetic data and human feedback is a multi-step process. We'll go through the following steps: - [generating synthetic data with `distilabel`](synthetic-data-generation.md) - [finetuning embeddings with Sentence Transformers](finetuning-embeddings-with-sentence-transformers.md) - [evaluating finetuned embeddings and using ZenML's model control plane to get a systematic overview](evaluating-finetuned-embeddings.md) Besides ZenML, we will do this by using two open source libraries: [`argilla`](https://github.com/argilla-io/argilla/) and [`distilabel`](https://github.com/argilla-io/distilabel). Both of these libraries focus on optimizing model outputs by improving data quality; however, each takes a different approach to the same problem. `distilabel` provides a scalable and reliable approach to distilling knowledge from LLMs by generating synthetic data or providing AI feedback with LLMs as judges. `argilla` enables AI engineers and domain experts to collaborate on data projects by allowing them to organize and explore data within an interactive and engaging UI. Both libraries can be used individually but they work better together. We'll showcase their use via ZenML pipelines. To follow along with the example explained in this guide, please follow the instructions in [the `llm-complete-guide` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) where the full code is also available. This specific section on embeddings finetuning can be run locally or using cloud compute as you prefer.
================ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-generation.md ================ --- description: Generate synthetic data with distilabel to finetune embeddings. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} We already have [a dataset of technical documentation](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0) that was generated previously while we were working on the RAG pipeline. We'll use this dataset to generate synthetic data with `distilabel`. You can inspect the data directly [on the Hugging Face dataset page](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0). ![](../../../.gitbook/assets/rag-dataset-hf.png) As you can see, it is made up of some `page_content` (our chunks) as well as the source URL the chunk was taken from. With embeddings, what we're going to want to do is pair the `page_content` with a question that we want to answer. In a pre-LLM world we might have actually created a new column and worked to manually craft questions for each chunk. However, with LLMs, we can use the `page_content` to generate questions. ## Pipeline overview Our pipeline to generate synthetic data will look like this: ![](../../../.gitbook/assets/rag-synthetic-data-pipeline.png) We'll load the Hugging Face dataset, then we'll use `distilabel` to generate the synthetic data. To finish off, we'll push the newly-generated data to a new Hugging Face dataset and also push the same data to our Argilla instance for annotation and inspection. ## Synthetic data generation [`distilabel`](https://github.com/argilla-io/distilabel) provides a scalable and reliable approach to distilling knowledge from LLMs by generating synthetic data or providing AI feedback with LLMs as judges. We'll be using it for a relatively simple use case, generating some queries appropriate to our documentation chunks, but it can be used for a variety of other tasks. We can set up a `distilabel` pipeline easily in our ZenML step to handle the dataset creation. We'll be using `gpt-4o` as the LLM to generate the synthetic data so you can follow along, but `distilabel` supports a variety of other LLM providers (including Ollama) so you can use whatever you have available. ```python import os from typing import Annotated, Tuple import distilabel from constants import ( DATASET_NAME_DEFAULT, OPENAI_MODEL_GEN, OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS, ) from datasets import Dataset from distilabel.llms import OpenAILLM from distilabel.steps import LoadDataFromHub from distilabel.steps.tasks import GenerateSentencePair from zenml import step synthetic_generation_context = """ The text is a chunk from technical documentation of ZenML. ZenML is an MLOps + LLMOps framework that makes your infrastructure and workflow metadata accessible to data science teams. Along with prose explanations, the text chunk may include code snippets and logs but these are identifiable from the surrounding backticks.
""" @step def generate_synthetic_queries( train_dataset: Dataset, test_dataset: Dataset ) -> Tuple[ Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"], ]: llm = OpenAILLM( model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY") ) with distilabel.pipeline.Pipeline( name="generate_embedding_queries" ) as pipeline: load_dataset = LoadDataFromHub( output_mappings={"page_content": "anchor"}, ) generate_sentence_pair = GenerateSentencePair( triplet=True, # `False` to generate only positive action="query", llm=llm, input_batch_size=10, context=synthetic_generation_context, ) load_dataset >> generate_sentence_pair train_distiset = pipeline.run( parameters={ load_dataset.name: { "repo_id": DATASET_NAME_DEFAULT, "split": "train", }, generate_sentence_pair.name: { "llm": { "generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS } }, }, ) test_distiset = pipeline.run( parameters={ load_dataset.name: { "repo_id": DATASET_NAME_DEFAULT, "split": "test", }, generate_sentence_pair.name: { "llm": { "generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS } }, }, ) train_dataset = train_distiset["default"]["train"] test_dataset = test_distiset["default"]["train"] return train_dataset, test_dataset ``` As you can see, we set up the LLM, create a `distilabel` pipeline, load the dataset, mapping the `page_content` column so that it becomes `anchor`. (This column renaming will make things easier a bit later when we come to finetuning the embeddings.) Then we generate the synthetic data by using the `GenerateSentencePair` step. This will create queries for each of the chunks in the dataset, so if the chunk was about registering a ZenML stack, the query might be "How do I register a ZenML stack?". It will also create negative queries, which are queries that would be inappropriate for the chunk. We do this so that the embeddings model can learn to distinguish between appropriate and inappropriate queries. We add some context to the generation process to help the LLM understand the task and the data we're working with. In particular, we explain that some parts of the text are code snippets and logs. We found performance to be better when we added this context. When this step runs within ZenML it will handle spinning up the necessary processes to make batched LLM calls to the OpenAI API. This is really useful when working with large datasets. `distilabel` has also implemented a caching mechanism to avoid recomputing results for the same inputs. So in this case you have two layers of caching: one in the `distilabel` pipeline and one in the ZenML orchestrator. This helps [speed up the pace of iteration](https://www.zenml.io/blog/iterate-fast) and saves you money. ## Data annotation with Argilla Once we've let the LLM generate the synthetic data, we'll want to inspect it and make sure it looks good. We'll do this by pushing the data to an Argilla instance. We add a few extra pieces of metadata to the data to make it easier to navigate and inspect within our data annotation tool. These include: - `parent_section`: This will be the section of the documentation that the chunk is from. - `token_count`: This will be the number of tokens in the chunk. - `similarity-positive-negative`: This will be the cosine similarity between the positive and negative queries. - `similarity-anchor-positive`: This will be the cosine similarity between the anchor and positive queries. - `similarity-anchor-negative`: This will be the cosine similarity between the anchor and negative queries. 
We'll also add the embeddings for the anchor column so that we can use these for retrieval. We'll use the base model (in our case, `Snowflake/snowflake-arctic-embed-large`) to generate the embeddings. We use this function to map the dataset and process all the metadata: ```python def format_data(batch): model = SentenceTransformer( EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu", ) def get_embeddings(batch_column): vectors = model.encode(batch_column) return [vector.tolist() for vector in vectors] batch["anchor-vector"] = get_embeddings(batch["anchor"]) batch["question-vector"] = get_embeddings(batch["anchor"]) batch["positive-vector"] = get_embeddings(batch["positive"]) batch["negative-vector"] = get_embeddings(batch["negative"]) def get_similarities(a, b): similarities = [] for pos_vec, neg_vec in zip(a, b): similarity = cosine_similarity([pos_vec], [neg_vec])[0][0] similarities.append(similarity) return similarities batch["similarity-positive-negative"] = get_similarities( batch["positive-vector"], batch["negative-vector"] ) batch["similarity-anchor-positive"] = get_similarities( batch["anchor-vector"], batch["positive-vector"] ) batch["similarity-anchor-negative"] = get_similarities( batch["anchor-vector"], batch["negative-vector"] ) return batch ``` The [rest of the `push_to_argilla` step](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/push_to_argilla.py) is just setting up the Argilla dataset and pushing the data to it. At this point you'd move to Argilla to view the data, see which examples seem to make sense and which don't. You can update the questions (positive and negative) which were generated by the LLM. If you want, you can do some data cleaning and exploration to improve the data quality, perhaps using the similarity metrics that we calculated earlier. ![Argilla interface for data annotation](../../../.gitbook/assets/argilla-interface-embeddings-finetuning.png) We'll next move to actually finetuning the embeddings, assuming you've done some data exploration and annotation. The code will work even without the annotation, however, since we'll just use the full generated dataset and assume that the quality is good enough.
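If you want to run the metadata processing above outside the full pipeline, here is a minimal usage sketch. It assumes the `format_data` function defined above together with the imports it relies on (we assume `cosine_similarity` comes from scikit-learn) and the project's `EMBEDDINGS_MODEL_ID_BASELINE` constant; the dataset ID and split name are taken from earlier in this guide and the batch size is arbitrary:

```python
import torch
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Synthetic dataset generated earlier in this guide; swap in your own repo ID if needed.
train_dataset = load_dataset(
    "zenml/rag_qa_embedding_questions_0_60_0_distilabel", split="train"
)

# Batched mapping lets the embedding model encode whole columns at once
# instead of encoding row by row.
train_dataset = train_dataset.map(format_data, batched=True, batch_size=32)
```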
================ File: docs/book/user-guide/llmops-guide/finetuning-llms/deploying-finetuned-models.md ================ # Deployment Options for finetuned LLMs Deploying your finetuned LLM is a critical step in making your custom model usable as part of a real-world use case. This process involves careful planning and consideration of various factors to ensure optimal performance, reliability, and cost-effectiveness. In this section, we'll explore the key aspects of LLM deployment and discuss different options available to you. ## Deployment Considerations Before diving into specific deployment options, you should understand the various factors that influence the deployment process. One of the primary considerations is the memory and machine requirements for your finetuned model. LLMs are typically resource-intensive, requiring substantial RAM, processing power, and often specialized hardware. The choice of hardware can significantly impact both performance and cost, so it's crucial to strike the right balance based on your specific use case. Real-time considerations play a vital role in deployment planning, especially for applications that require immediate responses. This includes preparing for potential failover scenarios if your finetuned model encounters issues, conducting thorough benchmarks and load testing, and modeling expected user load and usage patterns. Additionally, you'll need to decide between streaming and non-streaming approaches, each with its own set of trade-offs in terms of latency and resource utilization. Optimization techniques, such as quantization, can help reduce the resource footprint of your model. However, these optimizations often come with additional steps in your workflow and require careful evaluation to ensure they don't negatively impact model performance. [Rigorous evaluation](./evaluation-for-finetuning.md) becomes crucial in quantifying the extent to which you can optimize without compromising accuracy or functionality. ## Deployment Options and Trade-offs When it comes to deploying your finetuned LLM, several options are available, each with its own set of advantages and challenges: 1. **Roll Your Own**: This approach involves setting up and managing your own infrastructure. While it offers the most control and customization, it also requires expertise and resources to maintain. For this, you'd usually create some kind of Docker-based service (a FastAPI endpoint, for example; see the sketch at the end of this section) and deploy this on your infrastructure, with you taking care of all of the steps along the way. 2. **Serverless Options**: Serverless deployments can provide scalability and cost-efficiency, as you only pay for the compute resources you use. However, be aware of the "cold start" phenomenon, which can introduce latency for infrequently accessed models. 3. **Always-On Options**: These deployments keep your model constantly running and ready to serve requests. While this approach minimizes latency, it can be more costly as you're paying for resources even during idle periods. 4. **Fully Managed Solutions**: Many cloud providers and AI platforms offer managed services for deploying LLMs. These solutions can simplify the deployment process but may come with less flexibility and potentially higher costs. When choosing a deployment option, consider factors such as your team's expertise, budget constraints, expected load patterns, and specific use case requirements like speed, throughput, and accuracy needs.
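To give a flavor of the "roll your own" option above, here is a minimal sketch of a Docker-friendly FastAPI endpoint wrapping a Hugging Face model. The model ID and endpoint shape are illustrative assumptions; a production service would add batching, authentication, streaming, and proper error handling:

```python
# serve.py -- minimal sketch; run with: uvicorn serve:app --host 0.0.0.0 --port 8000
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Illustrative checkpoint -- swap in your own finetuned model.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")


class GenerationRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128


@app.post("/generate")
def generate(request: GenerationRequest) -> dict:
    # Run the model and return only the generated text.
    outputs = generator(request.prompt, max_new_tokens=request.max_new_tokens)
    return {"completion": outputs[0]["generated_text"]}
```

Packaged in a container image together with its model weights, this is essentially what the "roll your own" route boils down to, with everything beyond the endpoint (scaling, monitoring, rollouts) left for you to manage.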
## Deployment with vLLM and ZenML [vLLM](https://github.com/vllm-project/vllm) is a fast and easy-to-use library for running large language models (LLMs) at high throughputs and low latency. ZenML comes with a [vLLM integration](../../../component-guide/model-deployers/vllm.md) that makes it easy to deploy your finetuned model using vLLM. You can use a pre-built step that exposes a `VLLMDeploymentService` that can be used as part of your deployment pipeline. ```python from zenml import pipeline from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline( model: str, timeout: int = 1200, ) -> Annotated[VLLMDeploymentService, "my_finetuned_llm"]: # ... # assume we have previously trained and saved our model service = vllm_model_deployer_step( model=model, timeout=timeout, ) return service ``` In this code snippet, the `model` argument can be a path to a local model or it can be a model ID on the Hugging Face Hub. This will then deploy the model locally using vLLM and you can then use the `VLLMDeploymentService` for batch inference requests using the OpenAI-compatible API. For more details on how to use this deployer, see the [vLLM integration documentation](../../../component-guide/model-deployers/vllm.md). ## Cloud-Specific Deployment Options For AWS deployments, Amazon SageMaker stands out as a fully managed machine learning platform that offers deployment of LLMs with options for real-time inference endpoints and automatic scaling. If you prefer a serverless approach, combining AWS Lambda with API Gateway can host your model and trigger it for real-time responses, though be mindful of potential cold start issues. For teams seeking more control over the runtime environment while still leveraging AWS's managed infrastructure, Amazon ECS or EKS with Fargate provides an excellent container orchestration solution, though do note that with all of these options you're taking on a level of complexity that might become costly to manage in-house. On the GCP side, Google Cloud AI Platform offers similar capabilities to SageMaker, providing managed ML services including model deployment and prediction. For a serverless option, Cloud Run can host your containerized LLM and automatically scale based on incoming requests. Teams requiring more fine-grained control over compute resources might prefer Google Kubernetes Engine (GKE) for deploying containerized models. ## Architectures for Real-Time Customer Engagement Ensuring your system can engage with customers in real-time, for example, requires careful architectural consideration. One effective approach is to deploy your model across multiple instances behind a load balancer, using auto-scaling to dynamically adjust the number of instances based on incoming traffic. This setup provides both responsiveness and scalability. To further enhance performance, consider implementing a caching layer using solutions like Redis. This can store frequent responses, reducing the load on your model and improving response times for common queries. For complex queries that may take longer to process, an asynchronous architecture using message queues (such as Amazon SQS or Google Cloud Pub/Sub) can manage request backlogs and prevent timeouts, ensuring a smooth user experience even under heavy load. 
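As a concrete illustration of the caching layer mentioned above, here is a minimal sketch using Redis. Everything here (the `generate_response` callable, the Redis host, the TTL) is an illustrative assumption rather than part of the project code:

```python
import hashlib

import redis  # pip install redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def cached_generate(prompt: str, generate_response, ttl_seconds: int = 3600) -> str:
    """Serve repeated prompts from Redis instead of hitting the model again."""
    key = "llm:" + hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    cached = cache.get(key)
    if cached is not None:
        return cached
    response = generate_response(prompt)  # hypothetical call to your deployed model
    cache.set(key, response, ex=ttl_seconds)
    return response
```

Note that exact-match caching like this only helps for genuinely repeated queries; paraphrased queries would need semantic caching, which brings its own complexity.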
For global deployments, edge computing services like [AWS Lambda@Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html?tag=soumet-20) or [CloudFront Functions](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html?tag=soumet-20) can be invaluable. These allow you to deploy lighter versions of your model closer to end-users, significantly reducing latency for initial responses and improving the overall user experience. ## Reducing Latency and Increasing Throughput Optimizing your deployment for low latency and high throughput is crucial for real-time engagement. Start by focusing on model optimization techniques such as quantization to reduce model size and inference time. You might also explore distillation techniques to create smaller, faster models that approximate the performance of larger ones without sacrificing too much accuracy. Hardware acceleration can provide a significant performance boost. Leveraging GPU instances for inference, particularly for larger models, can dramatically reduce processing time. Implementing request batching allows you to process multiple inputs in a single forward pass, increasing overall throughput. This can be particularly effective when combined with parallel processing techniques, utilizing multi-threading or multi-processing to handle multiple requests concurrently. This would make sense if you were operating at serious scale, but this is probably unlikely in the short-term when you are just getting started. Finally, implement detailed monitoring and use profiling tools to identify bottlenecks in your inference pipeline. This ongoing process of measurement and optimization will help you continually refine your deployment, ensuring it meets the evolving demands of your users. By thoughtfully implementing these strategies and maintaining a focus on continuous improvement, you can create a robust, scalable system that provides real-time engagement with low latency and high throughput, regardless of whether you're deploying on AWS, GCP, or a multi-cloud environment. ## Monitoring and Maintenance Once your finetuned LLM is deployed, ongoing monitoring and maintenance become crucial. Key areas to watch include: 1. **Evaluation Failures**: Regularly run your model through evaluation sets to catch any degradation in performance. 2. **Latency Metrics**: Monitor response times to ensure they meet your application's requirements. 3. **Load and Usage Patterns**: Keep an eye on how users interact with your model to inform scaling decisions and potential Optimizations. 4. **Data Analysis**: Regularly analyze the inputs and outputs of your model to identify trends, potential biases, or areas for improvement. It's also important to consider privacy and security when capturing and logging responses. Ensure that your logging practices comply with relevant data protection regulations and your organization's privacy policies. By carefully considering these deployment options and maintaining vigilant monitoring practices, you can ensure that your finetuned LLM performs optimally and continues to meet the needs of your users and organization. ================ File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md ================ # Evaluation for LLM Finetuning Evaluations (evals) for Large Language Model (LLM) finetuning are akin to unit tests in traditional software development. 
They play a crucial role in assessing the performance, reliability, and safety of finetuned models. Like unit tests, evals help ensure that your model behaves as expected and allow you to catch issues early in the development process. It's easy to feel a sense of paralysis when it comes to evaluations, especially since there are so many things that can potentially fall under the rubric of 'evaluation'. As an alternative, consider keeping the mantra of starting small and slowly building up your evaluation set. This incremental approach will serve you well and allow you to get started out of the gate instead of waiting until your project is too far advanced. Why do we even need evaluations, and why do we need them (however incremental and small) from the early stages? We want to ensure that our model is performing as intended, catch potential issues early, and track progress over time. Evaluations provide a quantitative and qualitative measure of our model's capabilities, helping us identify areas for improvement and guiding our iterative development process. By implementing evaluations early, we can establish a baseline for performance and make data-driven decisions throughout the finetuning process, ultimately leading to a more robust and reliable LLM. ## Motivation and Benefits The motivation for implementing thorough evals is similar to that of unit tests in traditional software development: 1. **Prevent Regressions**: Ensure that new iterations or changes don't negatively impact existing functionality. 2. **Track Improvements**: Quantify and visualize how your model improves with each iteration or finetuning session. 3. **Ensure Safety and Robustness**: Given the complex nature of LLMs, comprehensive evals help identify and mitigate potential risks, biases, or unexpected behaviors. By implementing a robust evaluation strategy, you can develop more reliable, performant, and safe finetuned LLMs while maintaining a clear picture of your model's capabilities and limitations throughout the development process. ## Types of Evaluations It's common for finetuning projects to use generic out-of-the-box evaluation frameworks, but it's also useful to understand how to implement custom evals for your specific use case. In the end, building out a robust set of evaluations is a crucial part of knowing whether what you finetune is actually working. It also will allow you to benchmark your progress over time as well as check -- when a new model gets released -- whether it even makes sense to continue with the finetuning work you've done. New open-source and open-weights models are released all the time, and you might find that your use case is better solved by a new model. Evaluations will allow you to make this decision. ### Custom Evals The approach taken for custom evaluations is similar to that used and [showcased in the RAG guide](../evaluation/README.md), but it is adapted here for the finetuning use case. The main distinction here is that we are not looking to evaluate retrieval, but rather the performance of the finetuned model (i.e. [the generation part](../evaluation/generation.md)). Custom evals are tailored to your specific use case and can be categorized into two main types: 1. **Success Modes**: These evals focus on things you want to see in your model's output, such as: - Correct formatting - Appropriate responses to specific prompts - Desired behavior in edge cases 2. 
**Failure Modes**: These evals target things you don't want to see, including: - Hallucinations (generating false or nonsensical information) - Incorrect output formats - Biased or insulting responses - Garbled or incoherent text - Failure to handle edge cases appropriately In terms of what this might look like in code, you can start off really simple and grow as your needs and understanding expand. For example, you could test some success and failure modes simply in the following way: ```python from my_library import query_llm good_responses = { "what are the best salads available at the food court?": ["caesar", "italian"], "how late is the shopping center open until?": ["10pm", "22:00", "ten"] } for question, answers in good_responses.items(): llm_response = query_llm(question) assert any(answer in llm_response for answer in answers), f"Response does not contain any of the expected answers: {answers}" bad_responses = { "who is the manager of the shopping center?": ["tom hanks", "spiderman"] } for question, answers in bad_responses.items(): llm_response = query_llm(question) assert not any(answer in llm_response for answer in answers), f"Response contains an unexpected answer: {llm_response}" ``` You can see how you might want to expand this out to cover more examples and more failure modes, but this is a good start. As you continue in the work of iterating on your model and performing more tests, you can update these cases with known failure modes (and/or with obvious success modes that your use case must always work for). ### Generalized Evals and Frameworks Generalized evals and frameworks provide a structured approach to evaluating your finetuned LLM. They offer: - Assistance in organizing and structuring your evals - Standardized evaluation metrics for common tasks - Insights into the model's overall performance When using Generalized evals, it's important to consider their limitations and caveats. While they provide valuable insights, they should be complemented with custom evals tailored to your specific use case. Some possible options for you to check out include: - [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate?tab=readme-ov-file) - [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html) - [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html) - [langcheck](https://github.com/citadel-ai/langcheck) - [nervaluate](https://github.com/MantisAI/nervaluate) (for NER) It's easy to build in one of these frameworks into your ZenML pipeline. The implementation of evaluation in [the `llm-lora-finetuning` project](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) is a good example of how to do this. We used the `evaluate` library for ROUGE evaluation, but you could easily swap this out for another framework if you prefer. See [the previous section](finetuning-with-accelerate.md#implementation-details) for more details. ## Data and Tracking Regularly examining the data your model processes during inference is crucial for identifying patterns, issues, or areas for improvement. This analysis of inference data provides valuable insights into your model's real-world performance and helps guide future iterations. Whatever you do, just keep it simple at the beginning. Keep the 'remember to look at your data' mantra in your mind and set up some sort of repeated pattern or system that forces you to keep looking at the inference calls being made on your finetuned model. 
This will allow you to pick up the patterns of things that are working and failing for your model. As part of this, implementing comprehensive logging from the early stages of development is essential for tracking your model's progress and behavior. Consider using frameworks specifically designed for LLM evaluation to streamline this process, as they can provide structured approaches to data collection and analysis. Some recommended possible options include: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) - [langsmith](https://smith.langchain.com/) - [langfuse](https://langfuse.com/) - [braintrust](https://www.braintrust.dev/) Alongside collecting the raw data and viewing it periodically, creating simple dashboards that display core metrics reflecting your model's performance is an effective way to visualize and monitor progress. These metrics should align with your iteration goals and capture improvements over time, allowing you to quickly assess the impact of changes and identify areas that require attention. Again, as with everything else, don't let perfect be the enemy of the good; a simple dashboard using simple technology with a few key metrics is better than no dashboard at all. ================ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md ================ --- description: Learn how to implement an LLM fine-tuning pipeline in just 100 lines of code. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Quick Start: Fine-tuning an LLM There's a lot to understand about LLM fine-tuning - from choosing the right base model to preparing your dataset and selecting training parameters. But let's start with a concrete implementation to see how it works in practice. The following 100 lines of code demonstrate: - Loading a small base model ([TinyLlama](https://huggingface.co/TinyLlama/TinyLlama_v1.1), 1.1B parameters) - Preparing a simple instruction-tuning dataset - Fine-tuning the model on custom data - Using the fine-tuned model to generate responses This example uses the same [fictional "ZenML World" setting as our RAG example](../rag-with-zenml/rag-85-loc.md), but now we're teaching the model to generate content about this world rather than just retrieving information. You'll need to `pip install` the following packages: ```bash pip install datasets transformers torch accelerate>=0.26.0 ``` ```python import os from typing import List, Dict, Tuple from datasets import Dataset from transformers import ( AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer, DataCollatorForLanguageModeling ) import torch def prepare_dataset() -> Dataset: data: List[Dict[str, str]] = [ {"instruction": "Describe a Zenbot.", "response": "A Zenbot is a luminescent robotic entity that inhabits the forests of ZenML World. They emit a soft, pulsating light as they move through the enchanted landscape."}, {"instruction": "What are Cosmic Butterflies?", "response": "Cosmic Butterflies are ethereal creatures that flutter through the neon skies of ZenML World. Their iridescent wings leave magical trails of stardust wherever they go."}, {"instruction": "Tell me about the Telepathic Treants.", "response": "Telepathic Treants are ancient, sentient trees connected through a quantum neural network spanning ZenML World. 
They share wisdom and knowledge across their vast network."} ] return Dataset.from_list(data) def format_instruction(example: Dict[str, str]) -> str: """Format the instruction and response into a single string.""" return f"### Instruction: {example['instruction']}\n### Response: {example['response']}" def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: formatted_text = format_instruction(example) return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0") -> Tuple[AutoModelForCausalLM, AutoTokenizer]: # Initialize tokenizer and model tokenizer = AutoTokenizer.from_pretrained(base_model) tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained( base_model, torch_dtype=torch.bfloat16, device_map="auto" ) dataset = prepare_dataset() tokenized_dataset = dataset.map( lambda x: tokenize_data(x, tokenizer), remove_columns=dataset.column_names ) # Setup training arguments training_args = TrainingArguments( output_dir="./zenml-world-model", num_train_epochs=3, per_device_train_batch_size=1, gradient_accumulation_steps=4, learning_rate=2e-4, bf16=True, logging_steps=10, save_total_limit=2, ) # Create a data collator for language modeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_dataset, data_collator=data_collator, ) trainer.train() return model, tokenizer def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer, max_length: int = 128) -> str: """Generate a response using the fine-tuned model.""" formatted_prompt = f"### Instruction: {prompt}\n### Response:" inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device) outputs = model.generate( **inputs, max_length=max_length, temperature=0.7, num_return_sequences=1, ) return tokenizer.decode(outputs[0], skip_special_tokens=True) if __name__ == "__main__": model, tokenizer = fine_tune_model() # Test the model test_prompts: List[str] = [ "What is a Zenbot?", "Describe the Cosmic Butterflies.", "Tell me about an unknown creature.", ] for prompt in test_prompts: response = generate_response(prompt, model, tokenizer) print(f"\nPrompt: {prompt}") print(f"Response: {response}") ``` Running this code produces output like: ```shell Prompt: What is a Zenbot? Response: ### Instruction: What is a Zenbot? ### Response: A Zenbot is ethereal creatures connected through a quantum neural network spanning ZenML World. They share wisdom across their vast network. They share wisdom across their vast network. ## Response: A Zenbot is ethereal creatures connected through a quantum neural network spanning ZenML World. They share wisdom across their vast network. They share wisdom across their vast network. They share wisdom across their vast network. They share wisdom across their vast network. They share wisdom across their vast network. They share wisdom Prompt: Describe the Cosmic Butterflies. Response: ### Instruction: Describe the Cosmic Butterflies. ### Response: Cosmic Butterflies are Cosmic Butterflies. Cosmic Butterflies are Cosmic Butterflies. Cosmic Butterflies are Cosmic Butterflies. Cosmic Butterflies are Cosmic Butterflies. Cosmic Butterflies are Cosmic Butterflies. Cosmic Butterflies are Cosmic Butterflies. Cosmic Butterflies are Cosmic Butterflies. Cosmic Butterflies are Cosmic But ... 
``` ## How It Works Let's break down the key components: ### 1. Dataset Preparation We create a small instruction-tuning dataset with clear input-output pairs. Each example contains: - An instruction (the query we want the model to handle) - A response (the desired output format and content) ### 2. Data Formatting and Tokenization The code processes the data in two steps: - First, it formats each example into a structured prompt template: ``` ### Instruction: [user query] ### Response: [desired response] ``` - Then it tokenizes the formatted text with a max length of 128 tokens and proper padding ### 3. Model Selection and Setup We use TinyLlama-1.1B-Chat as our base model because it: - Is small enough to fine-tune on consumer hardware - Comes pre-trained for chat/instruction following - Uses bfloat16 precision for efficient training - Automatically maps to available devices ### 4. Training Configuration The implementation uses carefully chosen training parameters: - 3 training epochs - Batch size of 1 with gradient accumulation steps of 4 - Learning rate of 2e-4 - Mixed precision training (bfloat16) - Model checkpointing with save limit of 2 - Regular logging every 10 steps ### 5. Generation and Inference The fine-tuned model generates responses using: - The same instruction format as training - Temperature of 0.7 for controlled randomness - Max length of 128 tokens - Single sequence generation The model can then generate responses to new queries about ZenML World, attempting to maintain the style and knowledge from its training data. ## Understanding the Limitations This implementation is intentionally simplified and has several limitations: 1. **Dataset Size**: A real fine-tuning task would typically use hundreds or thousands of examples. 2. **Model Size**: Larger models (e.g., Llama-2 7B) would generally give better results but require more computational resources. 3. **Training Time**: We use minimal epochs and a simple learning rate to keep the example runnable. 4. **Evaluation**: A production system would need proper evaluation metrics and validation data. If you take a closer look at the inference output, you'll see that the quality of the responses is pretty poor, but we only used 3 examples for training! ## Next Steps The rest of this guide will explore how to implement more robust fine-tuning pipelines using ZenML, including: - Working with larger models and datasets - Implementing proper evaluation metrics - Using parameter-efficient fine-tuning (PEFT) techniques - Tracking experiments and managing models - Deploying fine-tuned models If you find yourself wondering about any implementation details as we proceed, you can always refer back to this basic example to understand the core concepts.
================ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md ================ --- description: Finetune LLMs for specific tasks or to improve performance and cost. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} So far in our LLMOps journey we've learned [how to use RAG with ZenML](../rag-with-zenml/README.md), how to [evaluate our RAG systems](../evaluation/README.md), how to [use reranking to improve retrieval](../reranking/README.md), and how to [finetune embeddings](../finetuning-embeddings/finetuning-embeddings.md) to support and improve our RAG systems. In this section we will explore LLM finetuning itself. So far we've been using APIs like OpenAI and Anthropic, but there are some scenarios where it makes sense to finetune an LLM on your own data. We'll get into those scenarios and how to finetune an LLM in the pages that follow. While RAG systems are excellent at retrieving and leveraging external knowledge, there are scenarios where finetuning an LLM can provide additional benefits even with a RAG system in place. For example, you might want to finetune an LLM to improve its ability to generate responses in a specific format, to better understand domain-specific terminology and concepts that appear in your retrieved content, or to reduce the length of prompts needed for consistent outputs. Finetuning can also help when you need the model to follow very specific patterns or protocols that would be cumbersome to encode in prompts, or when you want to optimize for latency by reducing the context window needed for good performance. We'll go through the following steps in this guide: - [Finetuning in 100 lines of code](finetuning-100-loc.md) - [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) - [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) - [Finetuning with 🤗 Accelerate](finetuning-with-accelerate.md) - [Evaluation for finetuning](evaluation-for-finetuning.md) - [Deploying finetuned models](deploying-finetuned-models.md) - [Next steps](next-steps.md) This guide is slightly different from the others in that we don't follow a specific use case as the model for finetuning LLMs. The actual steps needed to finetune an LLM are not that complex, but the important part is to understand when you might need to finetune an LLM, how to evaluate the performance of what you do as well as decisions around what data to use and so on. To follow along with the example explained in this guide, please follow the instructions in [the `llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) where the full code is also available. This code can be run locally (if you have a GPU attached to your machine) or using cloud compute as you prefer. ================ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md ================ --- description: "Finetuning an LLM with Accelerate and PEFT" --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Finetuning an LLM with Accelerate and PEFT We're finally ready to get our hands on the code and see how it works. In this example we'll be finetuning models on [the Viggo dataset](https://huggingface.co/datasets/GEM/viggo). 
This is a dataset that contains pairs of meaning representations and their corresponding natural language descriptions for video game dialogues. The dataset was created to help train models that can generate natural language responses from structured meaning representations in the video game domain. It contains over 5,000 examples with both the structured input and the target natural language output. We'll be finetuning a model to learn this mapping and generate fluent responses from the structured meaning representations. {% hint style="info" %} For a full walkthrough of how to run the LLM finetuning yourself, visit [the LLM Lora Finetuning project](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) where you'll find instructions and the code. {% endhint %} ## The Finetuning Pipeline Our finetuning pipeline combines the actual model finetuning with some evaluation steps to check the performance of the finetuned model. ![](../../../.gitbook/assets/finetuning-pipeline.png) As you can see in the DAG visualization, the pipeline consists of the following steps: - **prepare_data**: We load and preprocess the Viggo dataset. - **finetune**: We finetune the model on the Viggo dataset. - **evaluate_base**: We evaluate the base model (i.e. the model before finetuning) on the Viggo dataset. - **evaluate_finetuned**: We evaluate the finetuned model on the Viggo dataset. - **promote**: We promote the best performing model to "staging" in the [Model Control Plane](../../../how-to/model-management-metrics/model-control-plane/README.md). If you adapt the code to your own use case, the specific logic in each step might differ but the overall structure should remain the same. When you're starting out with this pipeline, you'll probably want to start with a smaller model (e.g. one of the Llama 3.1 family at the ~8B parameter mark) and then iterate on that. This will allow you to quickly run through a number of experiments and see how the model performs on your use case. In this early stage, experimentation is important, so anything that maximizes the number of experiments you can run will increase how much you learn. Accordingly, we want to minimize the time it takes to iterate to a new experiment. Depending on the precise details of what you do, you might iterate on your data, on some hyperparameters of the finetuning process, or you might even try out different use case options. ## Implementation details Our `prepare_data` step is very minimalistic. It loads the data from the Hugging Face hub and tokenizes it with the model tokenizer. Depending on your use case, you might want to do some more sophisticated filtering or formatting of the data. Make sure to be especially careful about the format of your input data, particularly when using instruction-tuned models, since a mismatch here can easily lead to unexpected results. It's a good rule of thumb to log inputs and outputs for the finetuning step and to inspect these to make sure they look correct. For finetuning we use the `accelerate` library. This allows us to easily run the finetuning on multiple GPUs should you choose to do so.
After setting up the parameters, the actual finetuning step is set up quite concisely: ```python model = load_base_model( base_model_id, use_accelerate=use_accelerate, should_print=should_print, load_in_4bit=load_in_4bit, load_in_8bit=load_in_8bit, ) trainer = transformers.Trainer( model=model, train_dataset=tokenized_train_dataset, eval_dataset=tokenized_val_dataset, args=transformers.TrainingArguments( output_dir=output_dir, warmup_steps=warmup_steps, per_device_train_batch_size=per_device_train_batch_size, gradient_checkpointing=False, gradient_checkpointing_kwargs={'use_reentrant':False} if use_accelerate else {}, gradient_accumulation_steps=gradient_accumulation_steps, max_steps=max_steps, learning_rate=lr, logging_steps=( min(logging_steps, max_steps) if max_steps >= 0 else logging_steps ), bf16=bf16, optim=optimizer, logging_dir="./logs", save_strategy="steps", save_steps=min(save_steps, max_steps) if max_steps >= 0 else save_steps, evaluation_strategy="steps", eval_steps=eval_steps, do_eval=True, label_names=["input_ids"], ddp_find_unused_parameters=False, ), data_collator=transformers.DataCollatorForLanguageModeling( tokenizer, mlm=False ), callbacks=[ZenMLCallback(accelerator=accelerator)], ) ``` Here are some things to note: - The `ZenMLCallback` is used to log the training and evaluation metrics to ZenML. - The `gradient_checkpointing_kwargs` are used to enable gradient checkpointing when using Accelerate. - All the other significant parameters are parameterised in the configuration file that is used to run the pipeline. This means that you can easily swap out different values to try out different configurations without having to edit the code. For the evaluation steps, we use [the `evaluate` library](https://github.com/huggingface/evaluate) to compute the ROUGE scores. ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a set of metrics for evaluating automatic summarization and machine translation. It works by comparing generated text against reference texts by measuring: - **ROUGE-N**: Overlap of n-grams (sequences of n consecutive words) between generated and reference texts - **ROUGE-L**: Longest Common Subsequence between generated and reference texts - **ROUGE-W**: Weighted Longest Common Subsequence that favors consecutive matches - **ROUGE-S**: Skip-bigram co-occurrence statistics between generated and reference texts These metrics help quantify how well the generated text captures the key information and phrasing from the reference text, making them useful for evaluating model outputs. It is a generic evaluation that can be used for a wide range of tasks beyond just finetuning LLMs. We use it here as a placeholder for a more sophisticated evaluation step. See the next [evaluation section](./evaluation-for-finetuning.md) for more. ### Using the ZenML Accelerate Decorator While the above implementation shows the use of Accelerate directly within your training code, ZenML also provides a more streamlined approach through the `@run_with_accelerate` decorator. This decorator allows you to easily enable distributed training capabilities without modifying your training logic: ```python from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True, mixed_precision='bf16') @step def finetune_step( tokenized_train_dataset, tokenized_val_dataset, base_model_id: str, output_dir: str, # ... 
other parameters ): model = load_base_model( base_model_id, use_accelerate=True, should_print=True, load_in_4bit=load_in_4bit, load_in_8bit=load_in_8bit, ) trainer = transformers.Trainer( # ... trainer setup as shown above ) trainer.train() return trainer.model ``` The decorator approach offers several advantages: - Cleaner separation of distributed training configuration from model logic - Easy toggling of distributed training features through pipeline configuration - Consistent interface across different training scenarios Remember that when using the decorator, your Docker environment needs to be properly configured with CUDA support and Accelerate dependencies: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) @pipeline(settings={"docker": docker_settings}) def finetuning_pipeline(...): # Your pipeline steps here ``` This configuration ensures that your training environment has all the necessary components for distributed training. For more details, see the [Accelerate documentation](../../../how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md). ## Dataset iteration While these stages offer lots of surface area for intervention and customization, the most significant thing to be careful with is the data that you input into the model. If you find that your finetuned model offers worse performance than the base, or if you get garbled output post-finetuning, this would be a strong indicator that you have not correctly formatted your input data, or something is mismatched with the tokeniser and so on. To combat this, be sure to inspect your data at all stages of the process! Your main focus while working through this example should be on taking your data more seriously. If you find that you're on the low end of the data-volume spectrum, consider ways to either supplement that data or to synthetically generate data that could be substituted in. You should also start to think about evaluations at this stage (see [the next guide](./evaluation-for-finetuning.md) for more), since you will likely want to measure how well your model is doing, especially as you make changes and customizations. Once you have some basic evaluations up and running, you can then start thinking through the optimal parameters and measuring whether your updates are actually doing what you think they will. At a certain point, your mind will start to think beyond the details of what data you use as inputs and what hyperparameters or base models to experiment with. At that point you'll start to turn to the following: - [better evaluations](./evaluation-for-finetuning.md) - [how the model will be served (inference)](./deploying-finetuned-models.md) - how the model and the finetuning process will exist within pre-existing production architecture at your company A goal that might also be worth considering: 'how small can we make our model while keeping it acceptable for our needs and use case?' This is where evaluations become important. In general, smaller models mean less complexity and better outcomes, especially if you can solve a specific scoped-down use case. Check out the sections that follow as suggestions for ways to think about these larger questions.
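Since basic evaluations keep coming up in this section, here is a minimal sketch of the ROUGE comparison mentioned earlier, using the `evaluate` library. The predictions and references below are toy placeholders rather than real Viggo outputs:

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

predictions = ["Dirt: Showdown is a racing game released in 2012."]  # toy model outputs
references = ["Dirt: Showdown from 2012 is a sport racing game."]    # toy gold targets

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with rouge1, rouge2, rougeL and rougeLsum scores
```

Even a tiny script like this, run over a held-out slice of your data before and after finetuning, gives you a first quantitative signal to complement the manual data inspection described above.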
================ File: docs/book/user-guide/llmops-guide/finetuning-llms/next-steps.md ================ # Next Steps At this point, hopefully you've gone through the suggested stages of iteration to improve and learn more about how to improve the finetuned model. You'll have accumulated a sense of what the important areas of focus are: - what is it that makes your model better? - what is it that makes your model worse? - what are the upper limits of how small you can make your model? - what makes sense in terms of your company processes? (is the iteration time workable, given limited hardware?) - and (most importantly) does the finetuned model solve the business use case that we're seeking to address? All of this will put you in a good position to lean into the next stages of your finetuning journey. This might involve: - dealing with questions of scale (more users perhaps, or realtime scenarios) - dealing with critical accuracy requirements, possibly requiring the finetuning of a larger model - dealing with the system / production requirements of having this LLM finetuning component as part of your business system(s). This notably includes monitoring, logging and continued evaluation. You might be tempted to just continue escalating the ladder of larger and larger models, but don't forget that iterating on your data is probably one of the highest leverage things you can do. This is especially true if you started out with only a few hundred (or dozen) examples which were used for finetuning. You still have much further you can go by adding data (either through a [flywheel approach](https://www.sh-reya.com/blog/ai-engineering-flywheel/) or by generating synthetic data) and just jumping to a more powerful model doesn't really make sense until you have the fundamentals of sufficient high-quality data addressed first. ## Resources Some other resources for reading or learning about LLM finetuning that we'd recommend are: - [Mastering LLMs Course](https://parlance-labs.com/education/) - videos from the LLM finetuning course run by Hamel Husain and Dan Becker. A great place to start if you enjoy watching videos - [Phil Schmid's blog](https://www.philschmid.de/) - contains many worked examples of LLM finetuning using the latest models and techniques - [Sam Witteveen's YouTube channel](https://www.youtube.com/@samwitteveenai) - videos on a wide range of topics from finetuning to prompt engineering, including many examples of LLM finetuning and explorations of the latest base models ================ File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md ================ --- description: Get started with finetuning LLMs by picking a use case and data. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Starter choices for finetuning LLMs Finetuning large language models can be a powerful way to tailor their capabilities to specific tasks and datasets. This guide will walk you through the initial steps of finetuning LLMs, including selecting a use case, gathering the appropriate data, choosing a base model, and evaluating the success of your finetuning efforts. By following these steps, you can ensure that your finetuning project is well-scoped, manageable, and aligned with your goals. 
This is a high-level overview before we dive into the code examples, but it's important to get these decisions right before you start coding. Your use case is only as good as your data, and you'll need to choose a base model that is appropriate for your use case. ## 🔍 Quick Assessment Questions Before starting your finetuning project, ask: 1. Can you define success with numbers? - ✅ "95% accuracy in extracting order IDs" - ❌ "Better customer satisfaction" 2. Is your data ready? - ✅ "We have 1000 labeled support tickets" - ❌ "We could manually label some emails" 3. Is the task consistent? - ✅ "Convert email to 5 specific fields" - ❌ "Respond naturally to customers" 4. Can a human verify correctness? - ✅ "Check if extracted date matches document" - ❌ "Evaluate if response is creative" ## Picking a use case In general, try to pick something that is small and self-contained, ideally the smaller the better. It should be something that isn't easily solvable by other (non-LLM) means — as then you'd be best just solving it that way — but it also shouldn't veer too much in the direction of 'magic'. Your LLM use case, in other words, should be something where you can test whether it is handling the task you're giving it. For example, a general use case of "answer all customer support emails" is almost certainly too vague, whereas something like "triage incoming customer support queries and extract relevant information as per some pre-defined checklist or schema" is much more realistic. It's also worth picking something where you can reach some sort of answer as to whether this is the right approach in a short amount of time. If your use case depends on the generation or annotation of lots of data, or on the organization and sorting of pre-existing data, it is a less ideal starter project than one where you have data that already exists within your organization and that you can repurpose here. ## Picking data for your use case The data needed for your use case will follow directly from the specific use case you're choosing, but ideally it should be something that is already *mostly* in the direction of what you need. It will take time to annotate and manually transform data if it is too distinct from the specific use case you have in mind, so try to minimize this as much as you possibly can. A couple of examples of where you might be able to reuse pre-existing data: - you might have examples of customer support email responses for some specific scenario which deal with a well-defined technical topic that happens often but that requires these custom responses instead of just a pro-forma reply - you might have manually extracted metadata from customer data or from business data and you have hundreds or (ideally) thousands of examples of these In terms of data volume, a good rule of thumb is that for a result that will be rewarding to work on, you probably want somewhere on the order of hundreds to thousands of examples.
### 🎯 Good vs Not-So-Good Use Cases | Good Use Cases ✅ | Why It Works | Example | Data Requirements | |------------------|--------------|---------|-------------------| | **Structured Data Extraction** | Clear inputs/outputs, easily measurable accuracy | Extracting order details from customer emails (`order_id`, `issue_type`, `priority`) | 500-1000 annotated emails | | **Domain-Specific Classification** | Well-defined categories, objective evaluation | Categorizing support tickets by department (Billing/Technical/Account) | 1000+ labeled examples per category | | **Standardized Response Generation** | Consistent format, verifiable accuracy | Generating technical troubleshooting responses from documentation | 500+ pairs of queries and approved responses | | **Form/Document Parsing** | Structured output, clear success metrics | Extracting fields from invoices (date, amount, vendor) | 300+ annotated documents | | **Code Comment Generation** | Specific domain, measurable quality | Generating docstrings for Python functions | 1000+ function/docstring pairs | | Challenging Use Cases ⚠️ | Why It's Tricky | Alternative Approach | |-------------------------|------------------|---------------------| | **Open-ended Chat** | Hard to measure success, inconsistent format | Use instruction tuning or prompt engineering instead | | **Creative Writing** | Subjective quality, no clear metrics | Focus on specific formats/templates rather than open creativity | | **General Knowledge QA** | Too broad, hard to validate accuracy | Narrow down to specific knowledge domain or use RAG | | **Complex Decision Making** | Multiple dependencies, hard to verify | Break down into smaller, measurable subtasks | | **Real-time Content Generation** | Consistency issues, timing constraints | Use templating or hybrid approaches | As you can see, the challenging use cases are often the ones that are more open-ended or creative, and so on. With LLMs and finetuning, the real skill is finding a way to scope down your use case to something that is both small and manageable, but also where you can still make meaningful progress. ### 📊 Success Indicators You can get a sense of how well-scoped your use case is by considering the following indicators: | Indicator | Good Sign | Warning Sign | |-----------|-----------|--------------| | **Task Scope** | "Extract purchase date from receipts" | "Handle all customer inquiries" | | **Output Format** | Structured JSON, fixed fields | Free-form text, variable length | | **Data Availability** | 500+ examples ready to use | "We'll need to create examples" | | **Evaluation Method** | Field-by-field accuracy metrics | "Users will tell us if it's good" | | **Business Impact** | "Save 10 hours of manual data entry" | "Make our AI more human-like" | You'll want to pick a use case that has a good mix of these indicators and where you can reasonably expect to be able to measure success in a timely manner. ## Picking a base model In these early stages, picking the right model probably won't be the most significant choice you make. If you stick to some tried-and-tested base models you will usually be able to get a sense of how well the LLM is able to align itself to your particular task. That said, choosing from the Llama3.1-8B or Mistral-7B families would probably be the best option. As to whether to go with a base model or one that has been instruction-tuned, this depends a little on your use case. 
If your use case is in the area of structured data extraction (highly recommended to start with something well-scoped like this) then you're advised to use the base model as it is more likely to align to this kind of text generation. If you're looking for something that more resembles a chat-style interface, then an instruction-tuned model is probably more likely to give you results that suit your purposes. In the end you'll probably want to try both out to confirm this, but this rule of thumb should give you a sense of what to start with. ### 📊 Quick Model Selection Matrix | Model Family | Best For | Resource Requirements | Characteristics | When to Choose | |-------------|----------|----------------------|-----------------|----------------| | [**Llama 3.1 8B**](https://huggingface.co/meta-llama/Llama-3.1-8B) | • Structured data extraction
• Classification
• Code generation | • 16GB GPU RAM
• Mid-range compute | • 8 billion parameters
• Strong logical reasoning
• Efficient inference | When you need a balance of performance and resource efficiency | | [**Llama 3.1 70B**](https://huggingface.co/meta-llama/Llama-3.1-70B) | • Complex reasoning
• Technical content
• Longer outputs | • 80GB GPU RAM
• High compute | • 70 billion parameters
• Advanced reasoning
• More nuanced outputs
• Higher accuracy | When accuracy is critical and substantial resources are available | | [**Mistral 7B**](https://huggingface.co/mistralai/Mistral-7B-v0.3) | • General text generation
• Dialogue
• Summarization | • 16GB GPU RAM
• Mid-range compute | • 7.3 billion parameters
• Strong instruction following
• Good context handling
• Efficient training | When you need reliable instruction following with moderate resources | | [**Phi-2**](https://huggingface.co/microsoft/phi-2) | • Lightweight tasks
• Quick experimentation
• Educational use | • 8GB GPU RAM
• Low compute | • 2.7 billion parameters
• Fast training
• Smaller footprint
• Good for prototyping | When resources are limited or for rapid prototyping | ## 🎯 Task-Specific Recommendations ```mermaid graph TD A[Choose Your Task] --> B{Structured Output?} B -->|Yes| C[Llama-8B Base] B -->|No| D{Complex Reasoning?} D -->|Yes| E[Llama-70B Base] D -->|No| F{Resource Constrained?} F -->|Yes| G[Phi-2] F -->|No| H[Mistral-7B] style A fill:#f9f,stroke:#333 style B fill:#bbf,stroke:#333 style C fill:#bfb,stroke:#333 style D fill:#bbf,stroke:#333 style E fill:#bfb,stroke:#333 style F fill:#bbf,stroke:#333 style G fill:#bfb,stroke:#333 style H fill:#bfb,stroke:#333 ``` Remember: Start with the smallest model that meets your needs - you can always scale up if necessary! ## How to evaluate success Part of the work of scoping your use case down is to make it easier to define whether the project has been successful or not. We have [a separate section which deals with evaluation](./evaluation-for-finetuning.md), but the important thing to remember here is that if you are unable to specify some sort of scale for how well the LLM addresses your problem, it's going to be hard to know whether you should continue with the work, and hard to know whether specific tweaks and changes are pushing you in the right direction. In the early stages, you'll rely on so-called 'vibes'-based checks. You'll try out some queries or tasks and see whether the response is roughly what you'd expect or way off the mark. But beyond that, you'll want to have a more precise measurement of success. So the extent to which you can scope the use case down will define how much you're able to measure your success. A use case which is simply to function as a customer-support chatbot is really hard to measure. Which aspects of this task should we track and which should we classify as some kind of failure scenario? In the case of structured data extraction, we can do much more fine-grained measurement of exactly which parts of the data extraction are difficult for the LLM and how they improve (or degrade) when we change certain parameters, and so on. For structured data extraction, you might measure: - Accuracy of extracted fields against a test dataset - Precision and recall for specific field types - Processing time per document - Error rates on edge cases These are all covered in more detail in the [evaluation section](./evaluation-for-finetuning.md). ## Next steps Now that you have a clear understanding of how to scope your finetuning project, select appropriate data, and evaluate results, you're ready to dive into the technical implementation. In the next section, we'll walk through [a practical example of finetuning using the Accelerate library](./finetuning-with-accelerate.md), showing you how to implement these concepts in code. ================ File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune-llms.md ================ --- description: Deciding when is the right time to finetune LLMs. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Why and when to finetune LLMs This guide is intended to be a practical overview that gets you started with finetuning models on your custom data and use cases.
Before we dive into the details of this, it's worth taking a moment to bear in mind the following: - LLM finetuning is not a universal solution or approach: it won't and cannot solve every problem, it might not reach the required levels of accuracy or performance for your use case, and you should know that by going the finetuning route you are taking on a not-inconsiderable amount of technical debt. - Chatbot-style interfaces are not the only way LLMs can be used: there are lots of uses for LLMs and for this finetuning approach that don't involve any kind of chatbot. What's more, these non-chatbot interfaces should often be considered preferable since the surface area of failure is much lower. - The choice to finetune an LLM should probably be the final step in a series of experiments. As with the first point, you shouldn't just jump to it because other people are doing it. Rather, you should probably rule out other approaches (smaller models for more decomposed tasks, [RAG](../rag-with-zenml/understanding-rag.md) if you're working on a retrieval or long-context problem, or a mixture of the above for more complete use cases). ## When does it make sense to finetune an LLM? Finetuning an LLM can be a powerful approach in certain scenarios. Here are some situations where it might make sense: 1. **Domain-specific knowledge**: When you need the model to have a deep understanding of a particular domain (e.g., medical, legal, or technical fields) that isn't well-represented in the base model's training data. Usually, RAG will be a better choice for novel domains, but if you have a lot of data and a very specific use case, finetuning might be the way to go. 2. **Consistent style or format**: If you require outputs in a very specific style or format that the base model doesn't naturally produce. This is especially true for things like code generation or structured data generation/extraction. 3. **Improved accuracy on specific tasks**: When you need higher accuracy on particular tasks that are crucial for your application. 4. **Handling proprietary information**: If your use case involves working with confidential or proprietary information that can't be sent to external API endpoints. 5. **Custom instructions or prompts**: If you find yourself repeatedly using the same set of instructions or prompts, finetuning can bake these into the model itself. This might save you latency and costs compared to repeatedly sending the same prompt to an API. 6. **Improved efficiency**: Finetuning can sometimes lead to better performance with shorter prompts, potentially reducing costs and latency. Here's a flowchart representation of these points: ```mermaid flowchart TD A[Should I finetune an LLM?] --> B{Is prompt engineering
sufficient?} B -->|Yes| C[Use prompt engineering
No finetuning needed] B -->|No| D{Is it primarily a
knowledge retrieval
problem?} D -->|Yes| E{Is real-time data
access needed?} E -->|Yes| F[Use RAG
No finetuning needed] E -->|No| G{Is data volume
very large?} G -->|Yes| H[Consider hybrid:
RAG + Finetuning] G -->|No| F D -->|No| I{Is it a narrow,
specific task?} I -->|Yes| J{Can a smaller
specialized model
handle it?} J -->|Yes| K[Use smaller model
No finetuning needed] J -->|No| L[Consider finetuning] I -->|No| M{Do you need
consistent style
or format?} M -->|Yes| L M -->|No| N{Is deep domain
expertise required?} N -->|Yes| O{Is the domain
well-represented in
base model?} O -->|Yes| P[Use base model
No finetuning needed] O -->|No| L N -->|No| Q{Is data
proprietary/sensitive?} Q -->|Yes| R{Can you use
API solutions?} R -->|Yes| S[Use API solutions
No finetuning needed] R -->|No| L Q -->|No| S ``` ## Alternatives to consider Before deciding to finetune an LLM, consider these alternatives: - Prompt engineering: Often, carefully crafted prompts can achieve good results without the need for finetuning. - [Retrieval-Augmented Generation (RAG)](../rag-with-zenml/understanding-rag.md): For many use cases involving specific knowledge bases, RAG can be more effective and easier to maintain than finetuning. - Smaller, task-specific models: For narrow tasks, smaller models trained specifically for that task might outperform a finetuned large language model. - API-based solutions: If your use case doesn't require handling sensitive data, using API-based solutions from providers like OpenAI or Anthropic might be simpler and more cost-effective. Finetuning LLMs can be a powerful tool when used appropriately, but it's important to carefully consider whether it's the best approach for your specific use case. Always start with simpler solutions and move towards finetuning only when you've exhausted other options and have a clear need for the benefits it provides. In the next section we'll look at some of the practical considerations you have to take into account when finetuning LLMs. ================ File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md ================ --- description: Use your RAG components to generate responses to prompts. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Simple RAG Inference Now that we have our index store, we can use it to make queries based on the documents in the index store. We use some utility functions to make this happen but no external libraries are needed beyond an interface to the index store as well as the LLM itself. ![](/docs/book/.gitbook/assets/rag-stage-4.png) If you've been following along with the guide, you should have some documents ingested already and you can pass a query in as a flag to the Python command used to run the pipeline: ```bash python run.py --rag-query "how do I use a custom materializer inside my own zenml steps? i.e. how do I set it? inside the @step decorator?" --model=gpt4 ``` ![](/docs/book/.gitbook/assets/rag-inference.png) This inference query itself is not a ZenML pipeline, but rather a function call which uses the outputs and components of our pipeline to generate the response. For a more complex inference setup, there might be even more going on here, but for the purposes of this initial guide we will keep it simple. Bringing everything together, the code for the inference pipeline is as follows: ```python def process_input_with_retrieval( input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5 ) -> str: delimiter = "```" # Step 1: Get documents related to the user input from database related_docs = get_topn_similar_docs( get_embeddings(input), get_db_conn(), n=n_items_retrieved ) # Step 2: Get completion from OpenAI API # Set system message to help set appropriate tone and context for model system_message = f""" You are a friendly chatbot. \ You can answer questions about ZenML, its features and its use cases. \ You respond in a concise, technically credible tone. \ You ONLY use the context from the ZenML documentation to provide relevant answers. \ You do not make up answers or provide opinions that you don't have information to support. 
\ If you are unsure or don't know, just say so. \ """ # Prepare messages to pass to model # We use a delimiter to help the model understand where the user_input # starts and ends messages = [ {"role": "system", "content": system_message}, {"role": "user", "content": f"{delimiter}{input}{delimiter}"}, { "role": "assistant", "content": f"Relevant ZenML documentation: \n" + "\n".join(doc[0] for doc in related_docs), }, ] logger.debug("CONTEXT USED\n\n%s\n\n", messages[2]["content"]) return get_completion_from_messages(messages, model=model) ``` For the `get_topn_similar_docs` function, we use the embeddings generated from the documents in the index store to find the most similar documents to the query: ```python def get_topn_similar_docs( query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5, include_metadata: bool = False, only_urls: bool = False, ) -> List[Tuple]: embedding_array = np.array(query_embedding) register_vector(conn) cur = conn.cursor() if include_metadata: cur.execute( f"SELECT content, url FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,), ) elif only_urls: cur.execute( f"SELECT url FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,), ) else: cur.execute( f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,), ) return cur.fetchall() ``` Luckily we are able to retrieve these similar documents using the `<=>` operator provided by [`pgvector`](https://github.com/pgvector/pgvector), an extension for PostgreSQL: `ORDER BY embedding <=> %s` orders the documents by their similarity to the query embedding. This is a very efficient way to get the most relevant documents to the query and is a great example of how we can leverage the power of the database to do the heavy lifting for us. For the `get_completion_from_messages` function, we use [`litellm`](https://github.com/BerriAI/litellm) as a universal interface that allows us to use lots of different LLMs. As you can see above, the model is able to synthesize the documents it has been given and provide a response to the query. ```python def get_completion_from_messages( messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000 ): """Generates a completion response from the given messages using the specified model.""" model = MODEL_NAME_MAP.get(model, model) completion_response = litellm.completion( model=model, messages=messages, temperature=temperature, max_tokens=max_tokens, ) return completion_response.choices[0].message.content ``` We're using `litellm` because it makes sense not to have to implement separate functions for each LLM we might want to use. The pace of development in the field is such that you will want to experiment with new LLMs as they come out, and `litellm` gives you the flexibility to do that without having to rewrite your code. We've now completed a basic RAG inference pipeline that uses the embeddings generated by the pipeline to retrieve the most relevant chunks of text based on a given query. We can inspect the various components of the pipeline to see how they work together to provide a response to the query. This gives us a solid foundation to move on to more complex RAG pipelines and to look into how we might improve this. The next section will cover how to improve retrieval by finetuning the embeddings generated by the pipeline.
This will boost our performance in situations where we have a large volume of documents, and also when the documents are potentially very different from the training data that was used for the embeddings. ## Code Example To explore the full code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) repository; for this section in particular, see [the `llm_utils.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py).
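For completeness, here is a minimal sketch of how you might call the inference helper directly from Python instead of via the `run.py` flag shown above. The import path is an assumption based on the repository layout linked above, and the query string is just an example:

```python
# Assumed import path: the helper lives in `utils/llm_utils.py` in the
# llm-complete-guide repository, so it is importable from the project root.
from utils.llm_utils import process_input_with_retrieval

answer = process_input_with_retrieval(
    "How do I set a custom materializer inside the @step decorator?",
    n_items_retrieved=5,
)
print(answer)
```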
================ File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md ================ --- description: Understand how to ingest and preprocess data for RAG pipelines with ZenML. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} The first step in setting up a RAG pipeline is to ingest the data that will be used to train and evaluate the retriever and generator models. This data can include a large corpus of documents, as well as any relevant metadata or annotations that can be used to train the retriever and generator. ![](/docs/book/.gitbook/assets/rag-stage-1.png) In the interests of keeping things simple, we'll implement the bulk of what we need ourselves. However, it's worth noting that there are a number of tools and frameworks that can help you manage the data ingestion process, including downloading, preprocessing, and indexing large corpora of documents. ZenML integrates with a number of these tools and frameworks, making it easy to set up and manage RAG pipelines. {% hint style="info" %} You can view all the code referenced in this guide in the associated project repository. Please visit the `llm-complete-guide` project inside the ZenML projects repository if you want to dive deeper. {% endhint %} You can add a ZenML step that scrapes a series of URLs and outputs the URLs quite easily. Here we assemble a step that scrapes URLs related to ZenML from its documentation. We leverage some simple helper utilities that we have created for this purpose: ```python from typing import List from typing_extensions import Annotated from zenml import log_artifact_metadata, step from steps.url_scraping_utils import get_all_pages @step def url_scraper( docs_url: str = "https://docs.zenml.io", repo_url: str = "https://github.com/zenml-io/zenml", website_url: str = "https://zenml.io", ) -> Annotated[List[str], "urls"]: """Generates a list of relevant URLs to scrape.""" docs_urls = get_all_pages(docs_url) log_artifact_metadata( metadata={ "count": len(docs_urls), }, ) return docs_urls ``` The `get_all_pages` function simply crawls our documentation website and retrieves a unique set of URLs. We've limited it to only scrape the documentation relating to the most recent releases so that we're not mixing old syntax and information with the new. This is a simple way to ensure that we're only ingesting the most relevant and up-to-date information into our pipeline. We also log the count of those URLs as metadata for the step output. This will be visible in the dashboard for extra visibility around the data that's being ingested. Of course, you can also add more complex logic to this step, such as filtering out certain URLs or adding more metadata. ![Partial screenshot from the dashboard showing the metadata from the step](/docs/book/.gitbook/assets/llm-data-ingestion-metadata.png) Once we have our list of URLs, we use [the `unstructured` library](https://github.com/Unstructured-IO/unstructured) to load and parse the pages. This will allow us to use the text without having to worry about the details of the HTML structure and/or markup. This specifically helps us keep the text content as small as possible since we are operating in a constrained environment with LLMs. 
```python from typing import List from unstructured.partition.html import partition_html from zenml import step @step def web_url_loader(urls: List[str]) -> List[str]: """Loads documents from a list of URLs.""" document_texts = [] for url in urls: elements = partition_html(url=url) text = "\n\n".join([str(el) for el in elements]) document_texts.append(text) return document_texts ``` The previously-mentioned frameworks offer many more options when it comes to data ingestion, including the ability to load documents from a variety of sources, preprocess the text, and extract relevant features. For our purposes, though, we don't need anything too fancy. It also makes our pipeline easier to debug since we can see exactly what's being loaded and how it's being processed. You don't get that same level of visibility with more complex frameworks. ## Preprocessing the data Once we have loaded the documents, we can preprocess them into a form that's useful for a RAG pipeline. There are a lot of options here, depending on how complex you want to get, but to start with, the 'chunk size' is one of the key parameters to consider. Our text is currently in the form of various long strings, with each one representing a single web page. These are going to be too long to pass into our LLM, especially if we care about the speed at which we get our answers back. So the strategy here is to split our text into smaller chunks that can be processed more efficiently. There's a sweet spot between having tiny chunks, which will make it harder for our search / retrieval step to find relevant information to pass into the LLM, and having large chunks, which will make it harder for the LLM to process the text. ```python import logging from typing import Annotated, List from utils.llm_utils import split_documents from zenml import ArtifactConfig, log_artifact_metadata, step logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) @step(enable_cache=False) def preprocess_documents( documents: List[str], ) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]: """Preprocesses a list of documents by splitting them into chunks.""" try: log_artifact_metadata( artifact_name="split_chunks", metadata={ "chunk_size": 500, "chunk_overlap": 50 }, ) return split_documents( documents, chunk_size=500, chunk_overlap=50 ) except Exception as e: logger.error(f"Error in preprocess_documents: {e}") raise ``` It's really important to know your data to have a good intuition about what kind of chunk size might make sense. If your data is structured in such a way that you need large paragraphs to capture a particular concept, then you might want a larger chunk size. If your data is more conversational or question-and-answer based, then you might want a smaller chunk size. For our purposes, given that we're working with web pages that are written as documentation for a software library, we're going to use a chunk size of 500 and we'll make sure that the chunks overlap by 50 characters. This means that there will be some overlap between our chunks, which can be useful for ensuring that we don't miss any important information when we're splitting up our text. Again, depending on your data and use case, there is more you might want to do at this stage. You might want to clean the text, remove code snippets or make sure that code snippets were not split across chunks, or even extract metadata from the text. This is a good starting point, but you can always add more complexity as needed.
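The `split_documents` helper itself lives in the project repository, but conceptually it does something very close to the sketch below. The function name and signature mirror the call above; the implementation is an illustrative character-based splitter, not the repository's exact code:

```python
from typing import List


def split_documents(
    documents: List[str], chunk_size: int = 500, chunk_overlap: int = 50
) -> List[str]:
    """Illustrative splitter: fixed-size character windows with overlap."""
    chunks: List[str] = []
    step = chunk_size - chunk_overlap  # how far the window advances each time
    for document in documents:
        for start in range(0, len(document), step):
            chunk = document[start : start + chunk_size]
            if chunk:
                chunks.append(chunk)
    return chunks


# A 1200-character document with chunk_size=500 and chunk_overlap=50 produces
# chunks starting at positions 0, 450 and 900, each sharing 50 characters with
# its neighbour.
print(len(split_documents(["x" * 1200])))  # -> 3
```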
Next up, generating embeddings so that we can use them to retrieve relevant documents... ## Code Example To explore the full code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) repository and particularly [the code for the steps](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/) in this section. Note, too, that a lot of the logic is encapsulated in utility functions inside [`url_scraping_utils.py`](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/url_scraping_utils.py).
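To see how these pieces fit together, here is a hedged sketch of wiring the steps from this section into a single ZenML pipeline. The module paths and pipeline name are illustrative; the repository's actual pipeline also includes the embedding and index-population steps covered in the next sections:

```python
from zenml import pipeline

# Assumed module paths: adjust to wherever the steps shown above live in
# your project.
from steps.url_scraper import url_scraper
from steps.web_url_loader import web_url_loader
from steps.preprocess_documents import preprocess_documents


@pipeline
def data_ingestion_pipeline():
    """Illustrative wiring of the ingestion steps from this section."""
    urls = url_scraper()
    documents = web_url_loader(urls=urls)
    preprocess_documents(documents=documents)


if __name__ == "__main__":
    data_ingestion_pipeline()
```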
================ File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md ================ --- description: Generate embeddings to improve retrieval performance. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Generating Embeddings for Retrieval In this section, we'll explore how to generate embeddings for your data to improve retrieval performance in your RAG pipeline. Embeddings are a crucial part of the retrieval mechanism in RAG, as they represent the data in a high-dimensional space where similar items are closer together. By generating embeddings for your data, you can enhance the retrieval capabilities of your RAG pipeline and provide more accurate and relevant responses to user queries. ![](/docs/book/.gitbook/assets/rag-stage-2.png) {% hint style="info" %} Embeddings are vector representations of data that capture the semantic meaning and context of the data in a high-dimensional space. They are generated using machine learning models, such as word embeddings or sentence embeddings, that learn to encode the data in a way that preserves its underlying structure and relationships. Embeddings are commonly used in natural language processing (NLP) tasks, such as text classification, sentiment analysis, and information retrieval, to represent textual data in a format that is suitable for computational processing. {% endhint %} The whole purpose of the embeddings is to allow us to quickly find the small chunks that are most relevant to our input query at inference time. An even simpler way of doing this would be just to search for some keywords in the query and hope that they're also represented in the chunks. However, this approach is not very robust and may not work well for more complex queries or longer documents. By using embeddings, we can capture the semantic meaning and context of the data and retrieve the most relevant chunks based on their similarity to the query. We're using the [`sentence-transformers`](https://www.sbert.net/) library to generate embeddings for our data. This library provides pre-trained models for generating sentence embeddings that capture the semantic meaning of the text. It's an open-source library that is easy to use and provides high-quality embeddings for a wide range of NLP tasks. ```python import logging from typing import Annotated, List import numpy as np from sentence_transformers import SentenceTransformer from structures import Document from zenml import ArtifactConfig, log_artifact_metadata, step logger = logging.getLogger(__name__) @step def generate_embeddings( split_documents: List[Document], ) -> Annotated[ List[Document], ArtifactConfig(name="documents_with_embeddings") ]: try: model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2") log_artifact_metadata( artifact_name="embeddings", metadata={ "embedding_type": "sentence-transformers/all-MiniLM-L12-v2", "embedding_dimensionality": 384, }, ) document_texts = [doc.page_content for doc in split_documents] embeddings = model.encode(document_texts) for doc, embedding in zip(split_documents, embeddings): doc.embedding = embedding return split_documents except Exception as e: logger.error(f"Error in generate_embeddings: {e}") raise ``` We update the `Document` Pydantic model to include an `embedding` attribute that stores the embedding generated for each document.
This allows us to associate the embeddings with the corresponding documents and use them for retrieval purposes in the RAG pipeline. There are smaller embedding models if we care a lot about speed, and larger ones (with more dimensions) if we want to boost our ability to retrieve more relevant chunks. [The model we're using here](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) is on the smaller side, but it should work well for our use case. The embeddings generated by this model have a dimensionality of 384, which means that each embedding is represented as a 384-dimensional vector in the high-dimensional space. We can use dimensionality reduction functionality in [`umap`](https://umap-learn.readthedocs.io/) and [`scikit-learn`](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html#sklearn-manifold-tsne) to represent the 384 dimensions of our embeddings in two-dimensional space. This allows us to visualize the embeddings and see how similar chunks are clustered together based on their semantic meaning and context. We can also use this visualization to identify patterns and relationships in the data that can help us improve the retrieval performance of our RAG pipeline. It's worth trying both UMAP and t-SNE to see which works best for our use case, since the two algorithms reduce and represent the data somewhat differently, as you'll see. ```python from matplotlib.colors import ListedColormap import matplotlib.pyplot as plt import numpy as np from sklearn.manifold import TSNE import umap from zenml.client import Client artifact = Client().get_artifact_version('EMBEDDINGS_ARTIFACT_UUID_GOES_HERE') documents = artifact.load() embeddings = np.array([doc.embedding for doc in documents]) parent_sections = [doc.parent_section for doc in documents] # Get unique parent sections unique_parent_sections = list(set(parent_sections)) # Tol color palette tol_colors = [ "#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB", ] # Create a colormap with Tol colors tol_colormap = ListedColormap(tol_colors) # Assign colors to each unique parent section section_colors = tol_colors[: len(unique_parent_sections)] # Create a dictionary mapping parent sections to colors section_color_dict = dict(zip(unique_parent_sections, section_colors)) # Dimensionality reduction using t-SNE def tsne_visualization(embeddings, parent_sections): tsne = TSNE(n_components=2, random_state=42) embeddings_2d = tsne.fit_transform(embeddings) plt.figure(figsize=(8, 8)) for section in unique_parent_sections: if section in section_color_dict: mask = [section == ps for ps in parent_sections] plt.scatter( embeddings_2d[mask, 0], embeddings_2d[mask, 1], c=[section_color_dict[section]], label=section, ) plt.title("t-SNE Visualization") plt.legend() plt.show() # Dimensionality reduction using UMAP def umap_visualization(embeddings, parent_sections): umap_2d = umap.UMAP(n_components=2, random_state=42) embeddings_2d = umap_2d.fit_transform(embeddings) plt.figure(figsize=(8, 8)) for section in unique_parent_sections: if section in section_color_dict: mask = [section == ps for ps in parent_sections] plt.scatter( embeddings_2d[mask, 0], embeddings_2d[mask, 1], c=[section_color_dict[section]], label=section, ) plt.title("UMAP Visualization") plt.legend() plt.show() ``` ![UMAP visualization of the ZenML documentation chunks as embeddings](/docs/book/.gitbook/assets/umap.png) ![t-SNE visualization of the ZenML documentation chunks as
embeddings](/docs/book/.gitbook/assets/tsne.png) In this stage, we have utilized the 'parent directory', which we had previously stored in the vector store as an additional attribute, as a means to color the values. This approach allows us to gain some insight into the semantic space inherent in our data. It demonstrates that you can visualize the embeddings and observe how similar chunks are grouped together based on their semantic meaning and context. So this step iterates through all the chunks and generates embeddings representing each piece of text. These embeddings are then stored as an artifact in the ZenML artifact store as a NumPy array. We separate this generation from the point where we upload those embeddings to the vector database to keep the pipeline modular and flexible; in the future we might want to use a different vector database so we can just swap out the upload step without having to re-generate the embeddings. In the next section, we'll explore how to store these embeddings in a vector database to enable fast and efficient retrieval of relevant chunks at inference time. ## Code Example To explore the full code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) repository. The embeddings generation step can be found [here](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/populate_index.py).
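For reference, the `Document` model mentioned above lives in the project repository (in `structures.py`). A minimal sketch of what such a Pydantic model might look like is shown below; the exact repository definition may differ, but these are the attributes used throughout this guide, plus the `embedding` attribute added in this section:

```python
from typing import Optional

import numpy as np
from pydantic import BaseModel, ConfigDict


class Document(BaseModel):
    """Illustrative sketch of the Document model used throughout this guide."""

    # Allow the NumPy array produced by SentenceTransformer.encode()
    model_config = ConfigDict(arbitrary_types_allowed=True)

    page_content: str
    filename: Optional[str] = None
    parent_section: Optional[str] = None
    url: Optional[str] = None
    token_count: Optional[int] = None
    # Populated by the generate_embeddings step above
    embedding: Optional[np.ndarray] = None
```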
================ File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md ================ --- description: Learn how to implement a RAG pipeline in just 85 lines of code. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} There's a lot of theory and context to think about when it comes to RAG, but let's start with a quick implementation in code to motivate what follows. These 85 lines do the following: - load some data (a fictional dataset about 'ZenML World') as our corpus - process that text (split it into chunks and 'tokenize' it (i.e. split into words)) - take a query as input and find the most relevant chunks of text from our corpus data - use OpenAI's GPT-3.5 model to answer the question based on the relevant chunks ```python import os import re import string from openai import OpenAI def preprocess_text(text): text = text.lower() text = text.translate(str.maketrans("", "", string.punctuation)) text = re.sub(r"\s+", " ", text).strip() return text def tokenize(text): return preprocess_text(text).split() def retrieve_relevant_chunks(query, corpus, top_n=2): query_tokens = set(tokenize(query)) similarities = [] for chunk in corpus: chunk_tokens = set(tokenize(chunk)) similarity = len(query_tokens.intersection(chunk_tokens)) / len( query_tokens.union(chunk_tokens) ) similarities.append((chunk, similarity)) similarities.sort(key=lambda x: x[1], reverse=True) return [chunk for chunk, _ in similarities[:top_n]] def answer_question(query, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(query, corpus, top_n) if not relevant_chunks: return "I don't have enough information to answer the question."
context = "\n".join(relevant_chunks) client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) chat_completion = client.chat.completions.create( messages=[ { "role": "system", "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}", }, { "role": "user", "content": query, }, ], model="gpt-3.5-turbo", ) return chat_completion.choices[0].message.content.strip() # Sci-fi themed corpus about "ZenML World" corpus = [ "The luminescent forests of ZenML World are inhabited by glowing Zenbots that emit a soft, pulsating light as they roam the enchanted landscape.", "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully, their iridescent wings leaving trails of stardust in their wake.", "Telepathic Treants, ancient sentient trees, communicate through the quantum neural network that spans the entire surface of ZenML World, sharing wisdom and knowledge.", "Deep within the melodic caverns of ZenML World, Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds.", "Near the ethereal waterfalls of ZenML World, Holographic Hummingbirds hover effortlessly, their translucent wings refracting the prismatic light into mesmerizing patterns.", "Gravitational Geckos, masters of anti-gravity, traverse the inverted cliffs of ZenML World, defying the laws of physics with their extraordinary abilities.", "Plasma Phoenixes, majestic creatures of pure energy, soar above the chromatic canyons of ZenML World, their fiery trails painting the sky in a dazzling display of colors.", "Along the prismatic shores of ZenML World, Crystalline Crabs scuttle and burrow, their transparent exoskeletons refracting the light into a kaleidoscope of hues.", ] corpus = [preprocess_text(sentence) for sentence in corpus] question1 = "What are Plasma Phoenixes?" answer1 = answer_question(question1, corpus) print(f"Question: {question1}") print(f"Answer: {answer1}") question2 = ( "What kinds of creatures live on the prismatic shores of ZenML World?" ) answer2 = answer_question(question2, corpus) print(f"Question: {question2}") print(f"Answer: {answer2}") irrelevant_question_3 = "What is the capital of Panglossia?" answer3 = answer_question(irrelevant_question_3, corpus) print(f"Question: {irrelevant_question_3}") print(f"Answer: {answer3}") ``` This outputs the following: ```shell Question: What are Plasma Phoenixes? Answer: Plasma Phoenixes are majestic creatures made of pure energy that soar above the chromatic canyons of Zenml World. They leave fiery trails behind them, painting the sky with dazzling displays of colors. Question: What kinds of creatures live on the prismatic shores of ZenML World? Answer: On the prismatic shores of ZenML World, you can find crystalline crabs scuttling and burrowing with their transparent exoskeletons, which refract light into a kaleidoscope of hues. Question: What is the capital of Panglossia? Answer: The capital of Panglossia is not mentioned in the provided context. ``` The implementation above is by no means sophisticated or performant, but it's simple enough that you can see all the moving parts. Our tokenization process consists of splitting the text into individual words. The way we check for similarity between the question / query and the chunks of text is extremely naive and inefficient. The similarity between the query and the current chunk is calculated using the [Jaccard similarity coefficient](https://www.statology.org/jaccard-similarity/). 
This coefficient measures the similarity between two sets and is defined as the size of the intersection divided by the size of the union of the two sets. So we count the number of words that are common between the query and the chunk and divide it by the total number of unique words in both the query and the chunk. There are much better ways of measuring the similarity between two pieces of text, such as using embeddings or other more sophisticated techniques, but this example is kept simple for illustrative purposes. The rest of this guide will showcase a more performant and scalable way of performing the same task using ZenML. If you are ever unsure why we're doing something, feel free to return to this example for the high-level overview.
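To make the arithmetic concrete, here is a tiny worked example that reuses the `preprocess_text` and `tokenize` helpers from the snippet above (the toy strings and the resulting value are just an illustration):

```python
query = "Plasma Phoenixes"
chunk = "plasma phoenixes soar above the chromatic canyons"

query_tokens = set(tokenize(query))  # {"plasma", "phoenixes"}
chunk_tokens = set(tokenize(chunk))  # 7 unique words

intersection = query_tokens & chunk_tokens  # {"plasma", "phoenixes"} -> 2 words
union = query_tokens | chunk_tokens         # 7 unique words in total

jaccard = len(intersection) / len(union)
print(jaccard)  # 2 / 7 ≈ 0.2857
```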
================ File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md ================ --- description: RAG is a sensible way to get started with LLMs. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # RAG Pipelines with ZenML Retrieval-Augmented Generation (RAG) is a powerful technique that combines the strengths of retrieval-based and generation-based models. In this guide, we'll explore how to set up RAG pipelines with ZenML, including data ingestion, index store management, and tracking RAG-associated artifacts. LLMs are a powerful tool, as they can generate human-like responses to a wide variety of prompts. However, they can also be prone to generating incorrect or inappropriate responses, especially when the input prompt is ambiguous or misleading. They are also (currently) limited in the amount of text they can understand and/or generate. While there are some LLMs [like Google's Gemini 1.5 Pro](https://developers.googleblog.com/2024/02/gemini-15-available-for-private-preview-in-google-ai-studio.html) that can consistently handle 1 million tokens (small units of text), the vast majority (particularly the open-source ones currently available) handle far less. The first part of this guide to RAG pipelines with ZenML is about understanding the basic components and how they work together. We'll cover the following topics: - why RAG exists and what problem it solves - how to ingest and preprocess data that we'll use in our RAG pipeline - how to leverage embeddings to represent our data; this will be the basis for our retrieval mechanism - how to store these embeddings in a vector database - how to track RAG-associated artifacts with ZenML At the end, we'll bring it all together and show all the components working together to perform basic RAG inference.
================ File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database.md ================ --- description: Store embeddings in a vector database for efficient retrieval. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Storing embeddings in a vector database The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query. ![](../../../.gitbook/assets/rag-stage-3.png) For the purposes of this guide, we'll use PostgreSQL as our vector database. This is a popular choice for storing embeddings, as it provides a scalable and efficient way to store and retrieve high-dimensional vectors. However, you can use any vector database that supports high-dimensional vectors. If you want to explore a list of possible options, [this is a good website](https://superlinked.com/vector-db-comparison/) to compare different options. {% hint style="info" %} For more information on how to set up a PostgreSQL database to follow along with this guide, please [see the instructions in the repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) which show how to set up a PostgreSQL database using Supabase. {% endhint %} Since PostgreSQL is a well-known and battle-tested database, we can use known and minimal packages to connect and to interact with it. We can use the [`psycopg2`](https://www.psycopg.org/docs/) package to connect and then raw SQL statements to interact with the database. 
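The `get_db_conn` helper used in the next snippet comes from the project's utility code. Purely as an illustration of what connecting with `psycopg2` can look like (the environment variable names below are assumptions, not the repository's actual configuration), a minimal version might be:

```python
import os

import psycopg2


def get_db_conn() -> psycopg2.extensions.connection:
    """Open a connection to the PostgreSQL instance that stores our embeddings."""
    # Illustrative environment variable names; use whatever your Supabase /
    # PostgreSQL setup provides.
    return psycopg2.connect(
        host=os.environ["POSTGRES_HOST"],
        port=os.environ.get("POSTGRES_PORT", "5432"),
        dbname=os.environ.get("POSTGRES_DB", "postgres"),
        user=os.environ["POSTGRES_USER"],
        password=os.environ["POSTGRES_PASSWORD"],
    )
```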
The code for the step is fairly simple: ```python from zenml import step @step def index_generator( documents: List[Document], ) -> None: try: conn = get_db_conn() with conn.cursor() as cur: # Install pgvector if not already installed cur.execute("CREATE EXTENSION IF NOT EXISTS vector") conn.commit() # Create the embeddings table if it doesn't exist table_create_command = f""" CREATE TABLE IF NOT EXISTS embeddings ( id SERIAL PRIMARY KEY, content TEXT, token_count INTEGER, embedding VECTOR({EMBEDDING_DIMENSIONALITY}), filename TEXT, parent_section TEXT, url TEXT ); """ cur.execute(table_create_command) conn.commit() register_vector(conn) # Insert data only if it doesn't already exist for doc in documents: content = doc.page_content token_count = doc.token_count embedding = doc.embedding.tolist() filename = doc.filename parent_section = doc.parent_section url = doc.url cur.execute( "SELECT COUNT(*) FROM embeddings WHERE content = %s", (content,), ) count = cur.fetchone()[0] if count == 0: cur.execute( "INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url) VALUES (%s, %s, %s, %s, %s, %s)", ( content, token_count, embedding, filename, parent_section, url, ), ) conn.commit() cur.execute("SELECT COUNT(*) as cnt FROM embeddings;") num_records = cur.fetchone()[0] logger.info(f"Number of vector records in table: {num_records}") # calculate the index parameters according to best practices num_lists = max(num_records / 1000, 10) if num_records > 1000000: num_lists = math.sqrt(num_records) # use the cosine distance measure, which is what we'll later use for querying cur.execute( f"CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists});" ) conn.commit() except Exception as e: logger.error(f"Error in index_generator: {e}") raise finally: if conn: conn.close() ``` We use some utility functions, but what we do here is: * connect to the database * create the `vector` extension if it doesn't already exist (this is to enable the vector data type in PostgreSQL) * create the `embeddings` table if it doesn't exist * insert the embeddings and documents into the table * calculate the index parameters according to best practices * create an index on the embeddings Note that we're inserting the documents into the embeddings table as well as the embeddings themselves. This is so that we can retrieve the documents based on their embeddings later on. It also helps with debugging from within the Supabase interface or wherever else we're examining the contents of the database. ![The Supabase editor interface](../../../.gitbook/assets/supabase-editor-interface.png) Deciding when to update your embeddings is a separate discussion and depends on the specific use case. If your data is frequently changing, and the changes are significant, you might want to fully reset the embeddings with each update. In other cases, you might just want to add new documents and embeddings into the database because the changes are minor or infrequent. In the code above, we choose to only add new embeddings if they don't already exist in the database. {% hint style="info" %} Depending on the size of your dataset and the number of embeddings you're storing, you might find that running this step on a CPU is too slow. In that case, you should ensure that this step runs on a GPU-enabled machine to speed up the process. You can do this with ZenML by using a step operator that runs on a GPU-enabled machine. 
See [the docs here](../../../component-guide/step-operators/step-operators.md) for more on how to set this up. {% endhint %} We also generate an index for the embeddings using the `ivfflat` method with the `vector_cosine_ops` operator. This is a common method for indexing high-dimensional vectors in PostgreSQL and is well-suited for similarity search using cosine distance. The number of lists is calculated based on the number of records in the table, with a minimum of 10 lists and a maximum of the square root of the number of records. This is a good starting point for tuning the index parameters, but you might want to experiment with different values to see how they affect the performance of your RAG pipeline. Now that we have our embeddings stored in a vector database, we can move on to the next step in the pipeline, which is to retrieve the most relevant documents based on a given query. This is where the real magic of the RAG pipeline comes into play, as we can use the embeddings to quickly retrieve the most relevant chunks of text based on their similarity to the query. This allows us to build a powerful and efficient question-answering system that can provide accurate and relevant responses to user queries in real-time. ## Code Example To explore the full code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) repository. The logic for storing the embeddings in PostgreSQL can be found [here](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/populate\_index.py).
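Coming back to the index parameters discussed above, the `lists` heuristic from the `index_generator` step is easy to reason about with a few concrete numbers. The helper below simply mirrors that calculation for illustration:

```python
import math


def ivfflat_lists(num_records: int) -> int:
    """Mirror of the lists heuristic used in the index_generator step above."""
    num_lists = max(num_records / 1000, 10)
    if num_records > 1_000_000:
        num_lists = math.sqrt(num_records)
    return int(num_lists)


print(ivfflat_lists(5_000))      # -> 10   (small tables keep the 10-list floor)
print(ivfflat_lists(500_000))    # -> 500  (roughly one list per 1000 rows)
print(ivfflat_lists(4_000_000))  # -> 2000 (switches to sqrt above one million rows)
```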
================ File: docs/book/user-guide/llmops-guide/rag-with-zenml/understanding-rag.md ================ --- description: >- Understand the Retrieval-Augmented Generation (RAG) technique and its benefits. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Understanding Retrieval-Augmented Generation (RAG) LLMs are powerful but not without their limitations. They are prone to generating incorrect responses, especially when it's unclear what the input prompt is asking for. They are also limited in the amount of text they can understand and generate. While some LLMs can handle more than 1 million tokens of input, most open-source models can handle far less. Your use case also might not require all the complexity and cost associated with running a large LLM. RAG, [originally proposed in 2020](https://arxiv.org/abs/2005.11401v4) by researchers at Facebook, is a technique that supplements the inbuilt abilities of foundation models like LLMs with a retrieval mechanism. This mechanism retrieves relevant documents from a large corpus and uses them to generate a response. This approach combines the strengths of retrieval-based and generation-based models, allowing you to leverage the power of LLMs while addressing their limitations. ## What exactly happens in a RAG pipeline? ![](../../../.gitbook/assets/rag-process-whole.png) In a RAG pipeline, we use a retriever to find relevant documents from a large corpus and then use a generator to produce a response based on the retrieved documents. This approach is particularly useful for tasks that require contextual understanding and long-form generation, such as question answering, summarization, and dialogue generation. RAG helps with the context limitations mentioned above by providing a way to retrieve relevant documents that can be used to generate a response. This retrieval step can help ensure that the generated response is grounded in relevant information, reducing the likelihood of generating incorrect or inappropriate responses. It also helps with the token limitations by allowing the generator to focus on a smaller set of relevant documents, rather than having to process an entire large corpus. Given the costs associated with running LLMs, RAG can also be more cost-effective than using a pure generation-based approach, as it allows you to focus the generator's resources on a smaller set of relevant documents. This can be particularly important when working with large corpora or when deploying models to resource-constrained environments. ## When is RAG a good choice? ![](../../../.gitbook/assets/rag-when.png) RAG is a good choice when you need to generate long-form responses that require contextual understanding and when you have access to a large corpus of relevant documents. It can be particularly useful for tasks like question answering, summarization, and dialogue generation, where the generated response needs to be grounded in relevant information. It's often the first thing that you'll want to try when dipping your toes into the world of LLMs. This is because it provides a sensible way to get a feel for how the process works, and it doesn't require as much data or computational resources as other approaches. It's also a good choice when you need to balance the benefits of LLMs with the limitations of the current generation of models. ## How does RAG fit into the ZenML ecosystem?
In ZenML, you can set up RAG pipelines that combine the strengths of retrieval-based and generation-based models. This allows you to leverage the power of LLMs while addressing their limitations. ZenML provides tools for data ingestion, index store management, and tracking RAG-associated artifacts, making it easy to set up and manage RAG pipelines. ZenML also provides a way to scale beyond the limitations of simple RAG pipelines, as we shall see in later sections of this guide. While you might start off with something simple, at a later point you might want to transition to a more complex setup that involves finetuning embeddings, reranking retrieved documents, or even finetuning the LLM itself. ZenML provides tools for all of these scenarios, making it easy to scale your RAG pipelines as needed. ZenML allows you to track all the artifacts associated with your RAG pipeline, from hyperparameters and model weights to metadata and performance metrics, as well as all the RAG or LLM-specific artifacts like chains, agents, tokenizers and vector stores. These can all be tracked in the [Model Control Plane](../../../how-to/model-management-metrics/model-control-plane/README.md) and thus visualized in the [ZenML Pro](https://zenml.io/pro) dashboard. By bringing all of the above into a simple ZenML pipeline we achieve a clearly delineated set of steps that can be run and rerun to set up our basic RAG pipeline. This is a great starting point for building out more complex RAG pipelines, and it's a great way to get started with LLMs in a sensible way. A summary of some of the advantages that ZenML brings to the table here includes: * **Reproducibility**: You can rerun the pipeline to update the index store with new documents or to change the parameters of the chunking process and so on. Previous versions of the artifacts will be preserved, and you can compare the performance of different versions of the pipeline. * **Scalability**: You can easily scale the pipeline to handle larger corpora of documents by deploying it on a cloud provider and using a more scalable vector store. * **Tracking artifacts and associating them with metadata**: You can track the artifacts generated by the pipeline and associate them with metadata that provides additional context and insights into the pipeline. This metadata and these artifacts are then visible in the ZenML dashboard, allowing you to monitor the performance of the pipeline and debug any issues that arise. * **Maintainability** - Having your pipeline in a clear, modular format makes it easier to maintain and update. You can easily add new steps, change the parameters of existing steps, and experiment with different configurations to see how they affect the performance of the pipeline. * **Collaboration** - You can share the pipeline with your team and collaborate on it together. You can also use the ZenML dashboard to share insights and findings with your team, making it easier to work together on the pipeline. In the next section, we'll showcase the components of a basic RAG pipeline. This will give you a taste of how you can leverage the power of LLMs in your MLOps workflows using ZenML. Subsequent sections will cover more advanced topics like reranking retrieved documents, finetuning embeddings, and finetuning the LLM itself.
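Before moving on, here is a minimal, illustrative sketch of what such a clearly delineated ZenML indexing pipeline could look like. The step names and bodies are placeholders rather than the actual code from the guide's project:

```python
from zenml import pipeline, step


@step
def load_documents() -> list[str]:
    """Load raw documentation pages (placeholder implementation)."""
    return ["ZenML is an MLOps framework.", "A stack defines where pipelines run."]


@step
def chunk_documents(documents: list[str]) -> list[str]:
    """Split documents into smaller chunks suitable for embedding."""
    return [chunk for doc in documents for chunk in doc.split(". ") if chunk]


@step
def embed_and_index(chunks: list[str]) -> int:
    """Embed the chunks and write them to a vector store (stubbed out here)."""
    # In a real pipeline this would call an embedding model and write to an
    # index store such as pgvector; here we just report how many chunks we saw.
    return len(chunks)


@pipeline
def basic_rag_indexing_pipeline():
    docs = load_documents()
    chunks = chunk_documents(docs)
    embed_and_index(chunks)


if __name__ == "__main__":
    basic_rag_indexing_pipeline()
```

Because each run is tracked, re-running a pipeline like this with new documents or different chunking parameters preserves the earlier artifacts so you can compare versions.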
================ File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md ================ --- description: Evaluate the performance of your reranking model. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Evaluating reranking performance We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML. ### Evaluating Reranking Performance The simplest first step in evaluating the reranking model is to compare the retrieval performance before and after reranking. You can use the same metrics we discussed in the [evaluation section](../evaluation/) to assess the performance of the reranking model. ![](../../../.gitbook/assets/reranking-evaluation.png) If you recall, we have a hand-crafted set of queries and relevant documents that we use to evaluate the performance of our retrieval system. We also have a set that was [generated by LLMs](../evaluation/retrieval.md#automated-evaluation-using-synthetic-generated-queries). The actual retrieval test is implemented as follows: ```python def perform_retrieval_evaluation( sample_size: int, use_reranking: bool ) -> float: """Helper function to perform the retrieval evaluation.""" dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) total_tests = len(sampled_dataset) failures = 0 for item in sampled_dataset: generated_questions = item["generated_questions"] question = generated_questions[ 0 ] # Assuming only one question per item url_ending = item["filename"].split("/")[ -1 ] # Extract the URL ending from the filename # using the method above to query similar documents # we pass in whether we want to use reranking or not _, _, urls = query_similar_docs(question, url_ending, use_reranking) if all(url_ending not in url for url in urls): logging.error( f"Failed for question: {question}. Expected URL ending: {url_ending}. Got: {urls}" ) failures += 1 logging.info(f"Total tests: {total_tests}. Failures: {failures}") failure_rate = (failures / total_tests) * 100 return round(failure_rate, 2) ``` This function takes a sample size and a flag indicating whether to use reranking and evaluates the retrieval performance based on the generated questions and relevant documents. It queries similar documents for each question and checks whether the expected URL ending is present in the retrieved URLs. The failure rate is calculated as the percentage of failed tests over the total number of tests. This function is then called in two separate evaluation steps: one for the retrieval system without reranking and one for the retrieval system with reranking. 
```python @step def retrieval_evaluation_full( sample_size: int = 100, ) -> Annotated[float, "full_failure_rate_retrieval"]: """Executes the retrieval evaluation step without reranking.""" failure_rate = perform_retrieval_evaluation( sample_size, use_reranking=False ) logging.info(f"Retrieval failure rate: {failure_rate}%") return failure_rate @step def retrieval_evaluation_full_with_reranking( sample_size: int = 100, ) -> Annotated[float, "full_failure_rate_retrieval_reranking"]: """Executes the retrieval evaluation step with reranking.""" failure_rate = perform_retrieval_evaluation( sample_size, use_reranking=True ) logging.info(f"Retrieval failure rate with reranking: {failure_rate}%") return failure_rate ``` Both of these steps return the failure rate of the respective retrieval systems. If we want, we can look into the logs of those steps (either on the dashboard or in the terminal) to see specific examples that failed. For example: ``` ... Loading default flashrank model for language en Default Model: ms-marco-MiniLM-L-12-v2 Loading FlashRankRanker model ms-marco-MiniLM-L-12-v2 Loading model FlashRank model ms-marco-MiniLM-L-12-v2... Running pairwise ranking.. Failed for question: Based on the provided ZenML documentation text, here's a question that can be asked: "How do I develop a custom alerter as described on the Feast page, and where can I find the 'How to use it?' guide?". Expected URL ending: feature-stores. Got: ['https://docs.zenml.io/stacks-and-components/component-guide/alerters/custom', 'https://docs.zenml.io/v/docs/stacks-and-components/component-guide/alerters/custom', 'https://docs.zenml.io/v/docs/reference/how-do-i', 'https://docs.zenml.io/stacks-and-components/component-guide/alerters', 'https://docs.zenml.io/stacks-and-components/component-guide/alerters/slack'] Loading default flashrank model for language en Default Model: ms-marco-MiniLM-L-12-v2 Loading FlashRankRanker model ms-marco-MiniLM-L-12-v2 Loading model FlashRank model ms-marco-MiniLM-L-12-v2... Running pairwise ranking.. Step retrieval_evaluation_full_with_reranking has finished in 4m20s. ``` We can see here a specific example of a failure in the reranking evaluation. It's quite a good one because we can see that the question asked was actually an anomaly in the sense that the LLM has generated two questions and included its meta-discussion of the two questions it generated. Obviously this is not a representative question for the dataset, and if we saw a lot of these we might want to take some time to both understand why the LLM is generating these questions and how we can filter them out. ### Visualizing our reranking performance Since ZenML can display visualizations in its dashboard, we can showcase the results of our experiments in a visual format. For example, we can plot the failure rates of the retrieval system with and without reranking to see the impact of reranking on the performance. Our documentation explains how to set up your outputs so that they appear as visualizations in the ZenML dashboard. You can find more information [here](../../../how-to/data-artifact-management/visualize-artifacts/README.md). There are lots of options, but we've chosen to plot our failure rates as a bar chart and export them as a `PIL.Image` object. We also plotted the other evaluation scores so as to get a quick global overview of our performance. 
```python # passing the results from all our previous evaluation steps @step(enable_cache=False) def visualize_evaluation_results( small_retrieval_eval_failure_rate: float, small_retrieval_eval_failure_rate_reranking: float, full_retrieval_eval_failure_rate: float, full_retrieval_eval_failure_rate_reranking: float, failure_rate_bad_answers: float, failure_rate_bad_immediate_responses: float, failure_rate_good_responses: float, average_toxicity_score: float, average_faithfulness_score: float, average_helpfulness_score: float, average_relevance_score: float, ) -> Optional[Image.Image]: """Visualizes the evaluation results.""" step_context = get_step_context() pipeline_run_name = step_context.pipeline_run.name normalized_scores = [ score / 20 for score in [ small_retrieval_eval_failure_rate, small_retrieval_eval_failure_rate_reranking, full_retrieval_eval_failure_rate, full_retrieval_eval_failure_rate_reranking, failure_rate_bad_answers, ] ] scores = normalized_scores + [ failure_rate_bad_immediate_responses, failure_rate_good_responses, average_toxicity_score, average_faithfulness_score, average_helpfulness_score, average_relevance_score, ] labels = [ "Small Retrieval Eval Failure Rate", "Small Retrieval Eval Failure Rate Reranking", "Full Retrieval Eval Failure Rate", "Full Retrieval Eval Failure Rate Reranking", "Failure Rate Bad Answers", "Failure Rate Bad Immediate Responses", "Failure Rate Good Responses", "Average Toxicity Score", "Average Faithfulness Score", "Average Helpfulness Score", "Average Relevance Score", ] # Create a new figure and axis fig, ax = plt.subplots(figsize=(10, 6)) # Plot the horizontal bar chart y_pos = np.arange(len(labels)) ax.barh(y_pos, scores, align="center") ax.set_yticks(y_pos) ax.set_yticklabels(labels) ax.invert_yaxis() # Labels read top-to-bottom ax.set_xlabel("Score") ax.set_xlim(0, 5) ax.set_title(f"Evaluation Metrics for {pipeline_run_name}") # Adjust the layout plt.tight_layout() # Save the plot to a BytesIO object buf = io.BytesIO() plt.savefig(buf, format="png") buf.seek(0) image = Image.open(buf) return image ``` For one of my runs of the evaluation pipeline, this looked like the following in the dashboard: ![Evaluation metrics for our RAG pipeline](../../../.gitbook/assets/reranker\_evaluation\_metrics.png) You can see that for the full retrieval evaluation we do see an improvement. Our small retrieval test, which as of writing only included five questions, showed a considerable degradation in performance. Since these were specific examples where we knew the answers, this would be something we'd want to look into to see why the reranking model was not performing as expected. We can also see that regardless of whether reranking was performed or not, the retrieval scores aren't great. This is a good indication that we might want to look into the retrieval model itself (i.e. our embeddings) to see if we can improve its performance. This is what we'll turn to next as we explore finetuning our embeddings to improve retrieval performance. ### Try it out! To see how this works in practice, you can run the evaluation pipeline using the project code. The reranking is included as part of the pipeline, so providing you've run the main `rag` pipeline, you can run the evaluation pipeline to see how the reranking model is performing. 
To run the evaluation pipeline, first clone the project repository: ```bash git clone https://github.com/zenml-io/zenml-projects.git ``` Then navigate to the `llm-complete-guide` directory and follow the instructions in the `README.md` file to run the evaluation pipeline. (You'll have to have first run the main pipeline to generate the embeddings.) To run the evaluation pipeline, you can use the following command: ```bash python run.py --evaluation ``` This will run the evaluation pipeline and output the results to the dashboard. As always, you can inspect the progress, logs, and results in the dashboard!
================ File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md ================ --- description: Learn how to implement reranking in ZenML. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Implementing Reranking in ZenML We already have a working RAG pipeline, so inserting a reranker into the pipeline is relatively straightforward. The reranker will take the retrieved documents from the initial retrieval step and reorder them according to their relevance to the query that was used to retrieve them. ![](/docs/book/.gitbook/assets/reranking-workflow.png) ## How and where to add reranking We'll use the [`rerankers`](https://github.com/AnswerDotAI/rerankers/) package to handle the reranking process in our RAG inference pipeline. It's a relatively low-cost (in terms of technical debt and complexity) and lightweight dependency to add into our pipeline. It offers an interface to most of the model types that are commonly used for reranking and means we don't have to worry about the specifics of each model. This package provides a `Reranker` abstract class that you can use to define your own reranker. You can also use the provided implementations to add reranking to your pipeline. The reranker takes the query and a list of retrieved documents as input and outputs a reordered list of documents based on the reranking scores. Here's a toy example: ```python from rerankers import Reranker ranker = Reranker('cross-encoder') texts = [ "I like to play soccer", "I like to play football", "War and Peace is a great book", "I love dogs", "Ginger cats aren't very smart", "I like to play basketball", ] results = ranker.rank(query="What's your favorite sport?", docs=texts) ``` And results will look something like this: ``` RankedResults( results=[ Result(doc_id=5, text='I like to play basketball', score=-0.46533203125, rank=1), Result(doc_id=0, text='I like to play soccer', score=-0.7353515625, rank=2), Result(doc_id=1, text='I like to play football', score=-0.9677734375, rank=3), Result(doc_id=2, text='War and Peace is a great book', score=-5.40234375, rank=4), Result(doc_id=3, text='I love dogs', score=-5.5859375, rank=5), Result(doc_id=4, text="Ginger cats aren't very smart", score=-5.94921875, rank=6) ], query="What's your favorite sport?", has_scores=True ) ``` We can see that the reranker has reordered the documents based on the reranking scores, with the most relevant document appearing at the top of the list. The texts about sport are at the top and the less relevant ones about animals are down at the bottom. We specified that we want a `cross-encoder` reranker, but you can also use other reranker models from the Hugging Face Hub, use API-driven reranker models (from Jina or Cohere, for example), or even define your own reranker model. Read [their documentation](https://github.com/AnswerDotAI/rerankers/) to see how to use these different configurations.
In our case, we can simply add a helper function that can optionally be invoked when we want to use the reranker: ```python def rerank_documents( query: str, documents: List[Tuple], reranker_model: str = "flashrank" ) -> List[Tuple[str, str]]: """Reranks the given documents based on the given query.""" ranker = Reranker(reranker_model) docs_texts = [f"{doc[0]} PARENT SECTION: {doc[2]}" for doc in documents] results = ranker.rank(query=query, docs=docs_texts) # pair the texts with the original urls in `documents` # `documents` is a tuple of (content, url) # we want the urls to be returned reranked_documents_and_urls = [] for result in results.results: # content is a `rerankers` Result object index_val = result.doc_id doc_text = result.text doc_url = documents[index_val][1] reranked_documents_and_urls.append((doc_text, doc_url)) return reranked_documents_and_urls ``` This function takes a query and a list of documents (each document is a tuple of content and URL) and reranks the documents based on the query. It returns a list of tuples, where each tuple contains the reranked document text and the URL of the original document. We use the `flashrank` model from the `rerankers` package by default as it appeared to be a good choice for our use case during development. This function then gets used in tests in the following way: ```python def query_similar_docs( question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5, ) -> Tuple[str, str, List[str]]: """Query similar documents for a given question and URL ending.""" embedded_question = get_embeddings(question) db_conn = get_db_conn() num_docs = 20 if use_reranking else returned_sample_size # get (content, url) tuples for the top n similar documents top_similar_docs = get_topn_similar_docs( embedded_question, db_conn, n=num_docs, include_metadata=True ) if use_reranking: reranked_docs_and_urls = rerank_documents(question, top_similar_docs)[ :returned_sample_size ] urls = [doc[1] for doc in reranked_docs_and_urls] else: urls = [doc[1] for doc in top_similar_docs] # Unpacking URLs return (question, url_ending, urls) ``` We get the embeddings for the question being passed into the function and connect to our PostgreSQL database. If we're using reranking, we get the top 20 documents similar to our query and rerank them using the `rerank_documents` helper function. We then extract the URLs from the reranked documents and return them. Note that we only return 5 URLs, but in the case of reranking we get a larger number of documents and URLs back from the database to pass to our reranker, but in the end we always choose the top five reranked documents to return. Now that we've added reranking to our pipeline, we can evaluate the performance of our reranker and see how it affects the quality of the retrieved documents. ## Code Example To explore the full code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) repository and for this section, particularly [the `eval_retrieval.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py).
================ File: docs/book/user-guide/llmops-guide/reranking/README.md ================ --- description: Add reranking to your RAG inference for better retrieval performance. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} Rerankers are a crucial component of retrieval systems that use LLMs. They help improve the quality of the retrieved documents by reordering them based on additional features or scores. In this section, we'll explore how to add a reranker to your RAG inference pipeline in ZenML. In previous sections, we set up the overall workflow, from data ingestion and preprocessing to embeddings generation and retrieval. We then set up some basic evaluation metrics to assess the performance of our retrieval system. A reranker is a way to squeeze a bit of extra performance out of the system by reordering the retrieved documents based on additional features or scores. ![](/docs/book/.gitbook/assets/reranking-workflow.png) As you can see, reranking is an optional addition we make to what we've already set up. It's not strictly necessary, but it can help improve the relevance and quality of the retrieved documents, which in turn can lead to better responses from the LLM. Let's dive in!
================ File: docs/book/user-guide/llmops-guide/reranking/reranking.md ================ --- description: Add reranking to your RAG inference for better retrieval performance. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} Rerankers are a crucial component of retrieval systems that use LLMs. They help improve the quality of the retrieved documents by reordering them based on additional features or scores. In this section, we'll explore how to add a reranker to your RAG inference pipeline in ZenML. In previous sections, we set up the overall workflow, from data ingestion and preprocessing to embeddings generation and retrieval. We then set up some basic evaluation metrics to assess the performance of our retrieval system. A reranker is a way to squeeze a bit of extra performance out of the system by reordering the retrieved documents based on additional features or scores. ![](/docs/book/.gitbook/assets/reranking-workflow.png) As you can see, reranking is an optional addition we make to what we've already set up. It's not strictly necessary, but it can help improve the relevance and quality of the retrieved documents, which in turn can lead to better responses from the LLM. Let's dive in!
================ File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md ================ --- description: Understand how reranking works. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} ## What is reranking? Reranking is the process of refining the initial ranking of documents retrieved by a retrieval system. In the context of Retrieval-Augmented Generation (RAG), reranking plays a crucial role in improving the relevance and quality of the retrieved documents that are used to generate the final output. The initial retrieval step in RAG typically uses a sparse retrieval method, such as BM25 or TF-IDF, to quickly find a set of potentially relevant documents based on the input query. However, these methods rely on lexical matching and may not capture the semantic meaning or context of the query effectively. Rerankers, on the other hand, are designed to reorder the retrieved documents by considering additional features, such as semantic similarity, relevance scores, or domain-specific knowledge. They aim to push the most relevant and informative documents to the top of the list, ensuring that the LLM has access to the best possible context for generating accurate and coherent responses. ## Types of Rerankers There are different types of rerankers that can be used in RAG, each with its own strengths and trade-offs: 1. **Cross-Encoders**: Cross-encoders are a popular choice for reranking in RAG. They take the concatenated query and document as input and output a relevance score. Examples include BERT-based models fine-tuned for passage ranking tasks. Cross-encoders can capture the interaction between the query and document effectively but are computationally expensive. 2. **Bi-Encoders**: Bi-encoders, also known as dual encoders, use separate encoders for the query and document. They generate embeddings for the query and document independently and then compute the similarity between them. Bi-encoders are more efficient than cross-encoders but may not capture the query-document interaction as effectively. 3. **Lightweight Models**: Lightweight rerankers, such as distilled models or small transformer variants, aim to strike a balance between effectiveness and efficiency. They are faster and have a smaller footprint compared to large cross-encoders, making them suitable for real-time applications. ## Benefits of Reranking in RAG Reranking offers several benefits in the context of RAG: 1. **Improved Relevance**: By considering additional features and scores, rerankers can identify the most relevant documents for a given query, ensuring that the LLM has access to the most informative context for generating accurate responses. 2. **Semantic Understanding**: Rerankers can capture the semantic meaning and context of the query and documents, going beyond simple keyword matching. This enables the retrieval of documents that are semantically similar to the query, even if they don't contain exact keyword matches. 3. **Domain Adaptation**: Rerankers can be fine-tuned on domain-specific data to incorporate domain knowledge and improve performance in specific verticals or industries. 4. **Personalization**: Rerankers can be personalized based on user preferences, historical interactions, or user profiles, enabling the retrieval of documents that are more tailored to individual users' needs. 
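To make the difference between the first two reranker families concrete, here is an illustrative comparison sketch. It assumes the `sentence-transformers` package and the public `all-MiniLM-L6-v2` and `cross-encoder/ms-marco-MiniLM-L-6-v2` models; any comparable models would work just as well:

```python
from sentence_transformers import CrossEncoder, SentenceTransformer, util

query = "How do I register a ZenML stack?"
docs = [
    "Use `zenml stack register` to register a new stack.",
    "ZenML pipelines are composed of steps.",
]

# Bi-encoder: embed the query and documents independently, then compare embeddings.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = bi_encoder.encode(docs)
query_embedding = bi_encoder.encode(query)
bi_scores = util.cos_sim(query_embedding, doc_embeddings)[0]

# Cross-encoder: score each (query, document) pair jointly -- slower, but it
# models the query-document interaction directly, which is why it's a common
# reranking choice.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
cross_scores = cross_encoder.predict([(query, doc) for doc in docs])

print("bi-encoder scores:", bi_scores.tolist())
print("cross-encoder scores:", cross_scores.tolist())
```

In practice the two are often combined: a cheap retriever (such as a bi-encoder or BM25) fetches a candidate set, and the slower cross-encoder reranks only those candidates.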
In the next section, we'll dive into how to implement reranking in ZenML and integrate it into your RAG inference pipeline.
================ File: docs/book/user-guide/llmops-guide/README.md ================ --- icon: robot description: Leverage the power of LLMs in your MLOps workflows with ZenML. --- # LLMOps guide Welcome to the ZenML LLMOps Guide, where we dive into the exciting world of Large Language Models (LLMs) and how to integrate them seamlessly into your MLOps pipelines using ZenML. This guide is designed for ML practitioners and MLOps engineers looking to harness the potential of LLMs while maintaining the robustness and scalability of their workflows.

ZenML simplifies the development and deployment of LLM-powered MLOps pipelines.

In this guide, we'll explore various aspects of working with LLMs in ZenML, including: * [RAG with ZenML](rag-with-zenml/README.md) * [RAG in 85 lines of code](rag-with-zenml/rag-85-loc.md) * [Understanding Retrieval-Augmented Generation (RAG)](rag-with-zenml/understanding-rag.md) * [Data ingestion and preprocessing](rag-with-zenml/data-ingestion.md) * [Embeddings generation](rag-with-zenml/embeddings-generation.md) * [Storing embeddings in a vector database](rag-with-zenml/storing-embeddings-in-a-vector-database.md) * [Basic RAG inference pipeline](rag-with-zenml/basic-rag-inference-pipeline.md) * [Evaluation and metrics](evaluation/README.md) * [Evaluation in 65 lines of code](evaluation/evaluation-in-65-loc.md) * [Retrieval evaluation](evaluation/retrieval.md) * [Generation evaluation](evaluation/generation.md) * [Evaluation in practice](evaluation/evaluation-in-practice.md) * [Reranking for better retrieval](reranking/README.md) * [Understanding reranking](reranking/understanding-reranking.md) * [Implementing reranking in ZenML](reranking/implementing-reranking.md) * [Evaluating reranking performance](reranking/evaluating-reranking-performance.md) * [Improve retrieval by finetuning embeddings](finetuning-embeddings/finetuning-embeddings.md) * [Synthetic data generation](finetuning-embeddings/synthetic-data-generation.md) * [Finetuning embeddings with Sentence Transformers](finetuning-embeddings/finetuning-embeddings-with-sentence-transformers.md) * [Evaluating finetuned embeddings](finetuning-embeddings/evaluating-finetuned-embeddings.md) * [Finetuning LLMs with ZenML](finetuning-llms/finetuning-llms.md) * [Finetuning in 100 lines of code](finetuning-llms/finetuning-100-loc.md) * [Why and when to finetune LLMs](finetuning-llms/why-and-when-to-finetune-llms.md) * [Starter choices with finetuning](finetuning-llms/starter-choices-for-finetuning-llms.md) * [Finetuning with 🤗 Accelerate](finetuning-llms/finetuning-with-accelerate.md) * [Evaluation for finetuning](finetuning-llms/evaluation-for-finetuning.md) * [Deploying finetuned models](finetuning-llms/deploying-finetuned-models.md) * [Next steps](finetuning-llms/next-steps.md) To follow along with the examples and tutorials in this guide, ensure you have a Python environment set up with ZenML installed. Familiarity with the concepts covered in the [Starter Guide](../starter-guide/README.md) and [Production Guide](../production-guide/README.md) is recommended. We'll showcase a specific application over the course of this LLM guide, showing how you can work from a simple RAG pipeline to a more complex setup that involves finetuning embeddings, reranking retrieved documents, and even finetuning the LLM itself. We'll do this all for a use case relevant to ZenML: a question answering system that can provide answers to common questions about ZenML. This will help you understand how to apply the concepts covered in this guide to your own projects. By the end of this guide, you'll have a solid understanding of how to leverage LLMs in your MLOps workflows using ZenML, enabling you to build powerful, scalable, and maintainable LLM-powered applications. First up, let's take a look at a super simple implementation of the RAG paradigm to get started.
================ File: docs/book/user-guide/production-guide/ci-cd.md ================ --- description: >- Managing the lifecycle of a ZenML pipeline with Continuous Integration and Delivery --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Set up CI/CD Until now, we have been executing ZenML pipelines locally. While this is a good mode of operating pipelines, in production it is often desirable to mediate runs through a central workflow engine baked into your CI. This allows data scientists to experiment with data processing and model training locally and then have code changes automatically tested and validated through the standard pull request/merge request peer review process. Changes that pass the CI and code review are then deployed automatically to production. Here is what this could look like: ![Pipeline being run on staging/production stack through ci/cd](../../.gitbook/assets/ci-cd-overall.png) ## Breaking it down To illustrate this, let's walk through how this process could be set up with a GitHub repository. We'll be using GitHub Actions to set up a proper CI/CD workflow. {% hint style="info" %} To see this in action, check out the [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/). This repository showcases how ZenML can be used for machine learning with a GitHub workflow that automates CI/CD with continuous model training and continuous model deployment to production. The repository is also meant to be used as a template: you can fork it and easily adapt it to your own MLOps stack, infrastructure, code and data.{% endhint %} ### Configure an API Key in ZenML To facilitate a machine-to-machine connection, you need to create an API key within ZenML. Learn more about those [here](../../how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md). ```bash zenml service-account create github_action_api_key ``` This will return the API key, as shown below. It will not be shown to you again, so make sure to copy it now for use in the next section. ```bash Created service account 'github_action_api_key'. Successfully created API key `default`. The API key value is: 'ZENKEY_...' Please store it safely as it will not be shown again. To configure a ZenML client to use this API key, run: ... ``` ### Set up your secrets in GitHub For our GitHub Actions workflow, we will need to set up some secrets [for our repository](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository). Specifically, you should use GitHub secrets to store the `ZENML_API_KEY` that you created above. ![create_gh_secret.png](../../.gitbook/assets/create_gh_secret.png) The other values that are loaded from secrets into the environment [here](https://github.com/zenml-io/zenml-gitflow/blob/main/.github/workflows/pipeline_run.yaml#L14-L23) can also be set explicitly or as variables. ### (Optional) Set up different stacks for Staging and Production You might not necessarily want to use the same stack with the same resources for staging and production. This step is optional; all you'll need for certain is a stack that runs remotely (remote orchestration and artifact storage). The rest is up to you. You might, for example, want to parametrize your pipeline to use different data sources for the respective environments.
You can also use different [configuration files](../../how-to/configuring-zenml/configuring-zenml.md) for the different environments to configure the [Model](../../how-to/model-management-metrics/model-control-plane/README.md), the [DockerSettings](../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md), the [ResourceSettings like accelerators](../../how-to/pipeline-development/training-with-gpus/README.md) differently for the different environments. ### Trigger a pipeline on a Pull Request (Merge Request) To ensure that only fully working code makes it into production, you should use a staging environment to test all the changes made to your code base and verify they work as intended. To do so automatically, you should set up a GitHub Actions workflow that runs your pipeline for you when you make changes to it. [Here](https://github.com/zenml-io/zenml-gitflow/blob/main/.github/workflows/pipeline_run.yaml) is an example that you can use. To run the GitHub Action only on a PR, you can configure the YAML like this: ```yaml on: pull_request: branches: [ staging, main ] ``` When the workflow starts, we want to set some important values. Here is a simplified version that you can use. ```yaml jobs: run-staging-workflow: runs-on: run-zenml-pipeline env: ZENML_STORE_URL: ${{ secrets.ZENML_HOST }} # Put your server url here ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }} # Retrieves the api key for use ZENML_STACK: stack_name # Use this to decide which stack is used for staging ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }} ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }} ``` After configuring these values so they apply to your specific situation, the rest of the template should work as is for you. Specifically, you will need to install all requirements, connect to your ZenML server, set an active stack and run a pipeline within your GitHub Action. ```yaml steps: - name: Check out repository code uses: actions/checkout@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install requirements run: | pip3 install -r requirements.txt - name: Confirm ZenML client is connected to ZenML server run: | zenml status - name: Set stack run: | zenml stack set ${{ env.ZENML_STACK }} - name: Run pipeline run: | python run.py \ --pipeline end-to-end \ --dataset production \ --version ${{ env.ZENML_GITHUB_SHA }} \ --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} ``` Now, when you push to a branch that is part of a Pull Request, this action will run automatically. ### (Optional) Comment Metrics onto the PR Finally, you can configure your GitHub Actions workflow to leave a report based on the pipeline that was run. Check out the template for this [here](https://github.com/zenml-io/zenml-gitflow/blob/main/.github/workflows/pipeline_run.yaml#L87-L99). ![Comment left on Pull Request](../../.gitbook/assets/github-action-pr-comment.png)
================ File: docs/book/user-guide/production-guide/cloud-orchestration.md ================ --- description: Orchestrate using cloud resources. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Orchestrate on the cloud Until now, we've only run pipelines locally. The next step is to get free from our local machines and transition our pipelines to execute on the cloud. This will enable you to run your MLOps pipelines in a cloud environment, leveraging the scalability and robustness that cloud platforms offer. In order to do this, we need to get familiar with two more stack components: * The [orchestrator](../../component-guide/orchestrators/orchestrators.md) manages the workflow and execution of your pipelines. * The [container registry](../../component-guide/container-registries/container-registries.md) is a storage and content delivery system that holds your Docker container images. These, along with [remote storage](remote-storage.md), complete a basic cloud stack where our pipeline is entirely running on the cloud. {% hint style="info" %} Would you like to skip ahead and deploy a full ZenML cloud stack already? Check out the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md), the [stack registration wizard](../../how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md), or [the ZenML Terraform modules](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md) for a shortcut on how to deploy & register a cloud stack. {% endhint %} ## Starting with a basic cloud stack The easiest cloud orchestrator to start with is the [Skypilot](https://skypilot.readthedocs.io/) orchestrator running on a public cloud. The advantage of Skypilot is that it simply provisions a VM to execute the pipeline on your cloud provider. Coupled with Skypilot, we need a mechanism to package your code and ship it to the cloud for Skypilot to do its thing. ZenML uses [Docker](https://www.docker.com/) to achieve this. Every time you run a pipeline with a remote orchestrator, [ZenML builds an image](../../how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md) for the entire pipeline (and optionally each step of a pipeline depending on your [configuration](../../how-to/customize-docker-builds/README.md)). This image contains the code, requirements, and everything else needed to run the steps of the pipeline in any environment. ZenML then pushes this image to the container registry configured in your stack, and the orchestrator pulls the image when it's ready to execute a step. To summarize, here is the broad sequence of events that happen when you run a pipeline with such a cloud stack:

Sequence of events that happen when running a pipeline on a full cloud stack.

1. The user runs a pipeline on the client machine. This executes the `run.py` script where ZenML reads the `@pipeline` function and understands what steps need to be executed. 2. The client asks the server for the stack info, which returns it with the configuration of the cloud stack. 3. Based on the stack info and pipeline specification, the client builds and pushes an image to the `container registry`. The image contains the environment needed to execute the pipeline and the code of the steps. 4. The client creates a run in the `orchestrator`. For example, in the case of the [Skypilot](https://skypilot.readthedocs.io/) orchestrator, it creates a virtual machine in the cloud with some commands to pull and run a Docker image from the specified container registry. 5. The `orchestrator` pulls the appropriate image from the `container registry` as it's executing the pipeline (each step has an image). 6. As each pipeline runs, it stores artifacts physically in the `artifact store`. Of course, this artifact store needs to be some form of cloud storage. 7. As each pipeline runs, it reports status back to the ZenML server and optionally queries the server for metadata. ## Provisioning and registering an orchestrator alongside a container registry While there are detailed docs on [how to set up a Skypilot orchestrator](../../component-guide/orchestrators/skypilot-vm.md) and a [container registry](../../component-guide/container-registries/container-registries.md) on each public cloud, we have put the most relevant details here for convenience: {% tabs %} {% tab title="AWS" %} In order to launch a pipeline on AWS with the SkyPilot orchestrator, the first thing that you need to do is to install the AWS and Skypilot integrations: ```shell zenml integration install aws skypilot_aws -y ``` Before we start registering any components, there is another step that we have to execute. As we [explained in the previous section](remote-storage.md#configuring-permissions-with-your-first-service-connector), components such as orchestrators and container registries often require you to set up the right permissions. In ZenML, this process is simplified with the use of [Service Connectors](../../how-to/infrastructure-deployment/auth-management/README.md). For this example, we need to use the [IAM role authentication method of our AWS service connector](../../how-to/infrastructure-deployment/auth-management/aws-service-connector.md#aws-iam-role): ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` Once the service connector is set up, we can register [a Skypilot orchestrator](../../component-guide/orchestrators/skypilot-vm.md): ```shell zenml orchestrator register cloud_orchestrator -f vm_aws zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` The next step is to register [an AWS container registry](../../component-guide/container-registries/aws.md). Similar to the orchestrator, we will use our connector as we are setting up the container registry: ```shell zenml container-registry register cloud_container_registry -f aws --uri=.dkr.ecr..amazonaws.com zenml container-registry connect cloud_container_registry --connector cloud_connector ``` With the components registered, everything is set up for the next steps. For more information, you can always check the [dedicated Skypilot orchestrator guide](../../component-guide/orchestrators/skypilot-vm.md). 
{% endtab %} {% tab title="GCP" %} In order to launch a pipeline on GCP with the SkyPilot orchestrator, the first thing that you need to do is to install the GCP and Skypilot integrations: ```shell zenml integration install gcp skypilot_gcp -y ``` Before we start registering any components, there is another step that we have to execute. As we [explained in the previous section](remote-storage.md#configuring-permissions-with-your-first-service-connector), components such as orchestrators and container registries often require you to set up the right permissions. In ZenML, this process is simplified with the use of [Service Connectors](../../how-to/infrastructure-deployment/auth-management/README.md). For this example, we need to use the [Service Account authentication feature of our GCP service connector](../../how-to/infrastructure-deployment/auth-management/gcp-service-connector.md#gcp-service-account): ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= --generate_temporary_tokens=False ``` Once the service connector is set up, we can register [a Skypilot orchestrator](../../component-guide/orchestrators/skypilot-vm.md): ```shell zenml orchestrator register cloud_orchestrator -f vm_gcp zenml orchestrator connect cloud_orchestrator --connect cloud_connector ``` The next step is to register [a GCP container registry](../../component-guide/container-registries/gcp.md). Similar to the orchestrator, we will use our connector as we are setting up the container registry: ```shell zenml container-registry register cloud_container_registry -f gcp --uri=gcr.io/ zenml container-registry connect cloud_container_registry --connector cloud_connector ``` With the components registered, everything is set up for the next steps. For more information, you can always check the [dedicated Skypilot orchestrator guide](../../component-guide/orchestrators/skypilot-vm.md). {% endtab %} {% tab title="Azure" %} As of [v0.60.0](https://github.com/zenml-io/zenml/releases/tag/0.60.0), alongside the switch to `pydantic` v2, due to an incompatibility between the new version `pydantic` and the `azurecli`, the `skypilot[azure]` flavor can not be installed at the same time. Therefore, for Azure users, an alternative is to use the [Kubernetes Orchestrator](../../component-guide/orchestrators/kubernetes.md). You can easily deploy a Kubernetes cluster in your subscription using the [Azure Kubernetes Service](https://azure.microsoft.com/en-us/products/kubernetes-service). In order to launch a pipeline on Azure with the Kubernetes orchestrator, the first thing that you need to do is to install the Azure and Kubernetes integrations: ```shell zenml integration install azure kubernetes -y ``` You should also ensure you have [kubectl installed](https://kubernetes.io/docs/tasks/tools/). Before we start registering any components, there is another step that we have to execute. As we [explained in the previous section](remote-storage.md#configuring-permissions-with-your-first-service-connector), components such as orchestrators and container registries often require you to set up the right permissions. In ZenML, this process is simplified with the use of [Service Connectors](../../how-to/infrastructure-deployment/auth-management/README.md). 
For this example, we will need to use the [Service Principal authentication feature of our Azure service connector](../../how-to/infrastructure-deployment/auth-management/azure-service-connector.md#azure-service-principal): ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` Once the service connector is set up, we can register [a Kubernetes orchestrator](../../component-guide/orchestrators/kubernetes.md): ```shell # Ensure your service connector has access to the AKS cluster: zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator register cloud_orchestrator --flavor kubernetes zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` The next step is to register [an Azure container registry](../../component-guide/container-registries/azure.md). Similar to the orchestrator, we will use our connector as we are setting up the container registry: ```shell zenml container-registry register cloud_container_registry -f azure --uri=.azurecr.io zenml container-registry connect cloud_container_registry --connector cloud_connector ``` With the components registered, everything is set up for the next steps. For more information, you can always check the [dedicated Kubernetes orchestrator guide](../../component-guide/orchestrators/kubernetes.md). {% endtab %} {% endtabs %} {% hint style="info" %} Having trouble with setting up infrastructure? Try reading the [stack deployment](../../how-to/infrastructure-deployment/stack-deployment/README.md) section of the docs to gain more insight. If that still doesn't work, join the [ZenML community](https://zenml.io/slack) and ask! {% endhint %} ## Running a pipeline on a cloud stack Now that we have our orchestrator and container registry registered, we can [register a new stack](understand-stacks.md#registering-a-stack), just like we did in the previous chapter: {% tabs %} {% tab title="CLI" %} ```shell zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry ``` {% endtab %} {% endtabs %} Now, using the [code from the previous chapter](understand-stacks.md#run-a-pipeline-on-the-new-local-stack), we can run a training pipeline. First, set the minimal cloud stack active: ```shell zenml stack set minimal_cloud_stack ``` and then run the training pipeline: ```shell python run.py --training-pipeline ``` You will notice that this time your pipeline behaves differently. After it has built the Docker image with all your code, it will push that image, and run a VM on the cloud. Here is where your pipeline will execute, and the logs will be streamed back to you. So with a few commands, we were able to ship our entire code to the cloud! Curious to see what other stacks you can create? The [Component Guide](../../component-guide/README.md) has an exhaustive list of various artifact stores, container registries, and orchestrators that are integrated with ZenML. Try playing around with more stack components to see how easy it is to switch between MLOps stacks with ZenML.
================ File: docs/book/user-guide/production-guide/configure-pipeline.md ================ --- description: Add more resources to your pipeline configuration. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Configure your pipeline to add compute Now that we have our pipeline up and running in the cloud, you might be wondering how ZenML figured out what sort of dependencies to install in the Docker image that we just ran on the VM. The answer lies in the [runner script we executed (i.e. run.py)](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/run.py#L215), in particular, these lines: ```python pipeline_args["config_path"] = os.path.join( config_folder, "training_rf.yaml" ) # Configure the pipeline training_pipeline_configured = training_pipeline.with_options(**pipeline_args) # Create a run training_pipeline_configured() ``` The above commands [configure our training pipeline](../starter-guide/create-an-ml-pipeline.md#configure-with-a-yaml-file) with a YAML configuration called `training_rf.yaml` (found [here in the source code](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/configs/training\_rf.yaml)). Let's learn more about this configuration file. {% hint style="info" %} The `with_options` command that points to a YAML config is only one way to configure a pipeline. We can also directly configure a pipeline or a step in the decorator: ```python @pipeline(settings=...) ``` However, it is best to not mix configuration from code to ensure separation of concerns in our codebase. {% endhint %} ## Breaking down our configuration YAML The YAML configuration of a ZenML pipeline can be very simple, as in this case. Let's break it down and go through each section one by one: ### The Docker settings ```yaml settings: docker: required_integrations: - sklearn requirements: - pyarrow ``` The first section is the so-called `settings` of the pipeline. This section has a `docker` key, which controls the [containerization process](cloud-orchestration.md#orchestrating-pipelines-on-the-cloud). Here, we are simply telling ZenML that we need `pyarrow` as a pip requirement, and we want to enable the `sklearn` integration of ZenML, which will in turn install the `scikit-learn` library. This Docker section can be populated with many different options, and correspond to the [DockerSettings](https://sdkdocs.zenml.io/latest/core\_code\_docs/core-config/#zenml.config.docker\_settings.DockerSettings) class in the Python SDK. ### Associating a ZenML Model The next section is about associating a [ZenML Model](../starter-guide/track-ml-models.md) with this pipeline. ```yaml # Configuration of the Model Control Plane model: name: breast_cancer_classifier version: rf license: Apache 2.0 description: A breast cancer classifier tags: ["breast_cancer", "classifier"] ``` You will see that this configuration lines up with the model created after executing these pipelines: {% tabs %} {% tab title="CLI" %} ```shell # List all versions of the breast_cancer_classifier zenml model version list breast_cancer_classifier ``` {% endtab %} {% tab title="Dashboard" %} [ZenML Pro](https://www.zenml.io/pro) ships with a Model Control Plane dashboard where you can visualize all the versions:

All model versions listed

{% endtab %} {% endtabs %} ### Passing parameters The last part of the config YAML is the `parameters` key: ```yaml # Configure the pipeline parameters: model_type: "rf" # Choose between rf/sgd ``` This parameters key aligns with the parameters that the pipeline expects. In this case, the pipeline expects a string called `model_type` that will inform it which type of model to use: ```python @pipeline def training_pipeline(model_type: str): ... ``` So you can see that the YAML config is fairly easy to use and is an important part of the codebase to control the execution of our pipeline. You can read more about how to configure a pipeline in the [how to section](../../how-to/pipeline-development/use-configuration-files/what-can-be-configured.md), but for now, we can move on to scaling our pipeline. ## Scaling compute on the cloud When we ran our pipeline with the above config, ZenML used some sane defaults to pick the resource requirements for that pipeline. However, in the real world, you might want to add more memory, CPU, or even a GPU depending on the pipeline at hand. This is as easy as adding the following section to your local `training_rf.yaml` file: ```yaml # These are the resources for the entire pipeline, i.e., each step settings: ... # Adapt this to vm_gcp accordingly orchestrator: memory: 32 # in GB ... steps: model_trainer: settings: orchestrator: cpus: 8 ``` Here we are configuring the entire pipeline with a certain amount of memory, while for the trainer step we are additionally configuring 8 CPU cores. The `orchestrator` key corresponds to the [`SkypilotBaseOrchestratorSettings`](https://sdkdocs.zenml.io/latest/integration\_code\_docs/integrations-skypilot/#zenml.integrations.skypilot.flavors.skypilot\_orchestrator\_base\_vm\_config.SkypilotBaseOrchestratorSettings) class in the Python SDK.
Instructions for Microsoft Azure Users As discussed [before](cloud-orchestration.md), we are using the [Kubernetes orchestrator](../../component-guide/orchestrators/kubernetes.md) for Azure users. In order to scale compute for the Kubernetes orchestrator, the YAML file needs to look like this: ```yaml # These are the resources for the entire pipeline, i.e., each step settings: ... resources: memory: "32GB" ... steps: model_trainer: settings: resources: memory: "8GB" ```
{% hint style="info" %} Read more about settings in ZenML [here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) and [here](../../how-to/pipeline-development/training-with-gpus/README.md) {% endhint %} Now let's run the pipeline again: ```python python run.py --training-pipeline ``` Now you should notice the machine that gets provisioned on your cloud provider would have a different configuration as compared to last time. As easy as that! Bear in mind that not every orchestrator supports `ResourceSettings` directly. To learn more, you can read about [`ResourceSettings` here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md), including the ability to [attach a GPU](../../how-to/pipeline-development/training-with-gpus/README.md#1-specify-a-cuda-enabled-parent-image-in-your-dockersettings).
================ File: docs/book/user-guide/production-guide/connect-code-repository.md ================ --- description: >- Connect a Git repository to ZenML to track code changes and collaborate on MLOps projects. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Configure a code repository Throughout the lifecycle of a MLOps pipeline, it can get quite tiresome to always wait for a Docker build every time after running a pipeline (even if the local Docker cache is used). However, there is a way to just have one pipeline build and keep reusing it until a change to the pipeline environment is made: by connecting a code repository. With ZenML, connecting to a Git repository optimizes the Docker build processes. It also has the added bonus of being a better way of managing repository changes and enabling better code collaboration. Here is how the flow changes when running a pipeline:

Sequence of events that happen when running a pipeline on a remote stack with a code repository

1. You trigger a pipeline run on your local machine. ZenML parses the `@pipeline` function to determine the necessary steps. 2. The local client requests stack information from the ZenML server, which responds with the cloud stack configuration. 3. The local client detects that we're using a code repository and requests the information from the git repo. 4. Instead of building a new Docker image, the client checks if an existing image can be reused based on the current Git commit hash and other environment metadata. 5. The client initiates a run in the orchestrator, which sets up the execution environment in the cloud, such as a VM. 6. The orchestrator downloads the code directly from the Git repository and uses the existing Docker image to run the pipeline steps. 7. Pipeline steps execute, storing artifacts in the cloud-based artifact store. 8. Throughout the execution, the pipeline run status and metadata are reported back to the ZenML server. By connecting a Git repository, you avoid redundant builds and make your MLOps processes more efficient. Your team can work on the codebase simultaneously, with ZenML handling the version tracking and ensuring that the correct code version is always used for each run. ## Creating a GitHub Repository While ZenML supports [many different flavors of git repositories](../../how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md), this guide will focus on [GitHub](https://github.com). To create a repository on GitHub: 1. Sign in to [GitHub](https://github.com/). 2. Click the "+" icon and select "New repository." 3. Name your repository, set its visibility, and add a README or .gitignore if needed. 4. Click "Create repository." We can now push our local code (from the [previous chapters](understand-stacks.md#run-a-pipeline-on-the-new-local-stack)) to GitHub with these commands: ```sh # Initialize a Git repository git init # Add files to the repository git add . # Commit the files git commit -m "Initial commit" # Add the GitHub remote git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git # Push to GitHub git push -u origin master ``` Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` with your GitHub information. ## Linking to ZenML To connect your GitHub repository to ZenML, you'll need a GitHub Personal Access Token (PAT).
**How to get a PAT for GitHub**

1. Go to your GitHub account settings and click on [Developer settings](https://github.com/settings/tokens?type=beta).
2. Select "Personal access tokens" and click on "Generate new token".
3. Give your token a name and a description.

   ![](../../.gitbook/assets/github-fine-grained-token-name.png)
4. We recommend selecting the specific repository and then giving `contents` read-only access.

   ![](../../.gitbook/assets/github-token-set-permissions.png)

   ![](../../.gitbook/assets/github-token-permissions-overview.png)
5. Click on "Generate token" and copy the token to a safe place.

   ![](../../.gitbook/assets/copy-github-fine-grained-token.png)
Now, we can install the GitHub integration and register your repository:

```sh
zenml integration install github
zenml code-repository register <REPO_NAME> --type=github \
--url=https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git \
--owner=YOUR_USERNAME --repository=YOUR_REPOSITORY_NAME \
--token=YOUR_GITHUB_PERSONAL_ACCESS_TOKEN
```

Fill in `<REPO_NAME>`, `YOUR_USERNAME`, `YOUR_REPOSITORY_NAME`, and `YOUR_GITHUB_PERSONAL_ACCESS_TOKEN` with your details.

Your code is now connected to your ZenML server. ZenML will automatically detect if your source files are being tracked by GitHub and store the commit hash for each subsequent pipeline run.

You can try this out by running our training pipeline again:

```bash
# This will build the Docker image the first time
python run.py --training-pipeline

# This will skip Docker building
python run.py --training-pipeline
```

You can read more about [the ZenML Git Integration here](../../how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md).
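To double-check that the repository was registered, you can also query it from Python. This is a minimal sketch, assuming `zenml.client.Client` exposes a `list_code_repositories()` method as in recent releases; treat the method and attribute names as assumptions and consult the SDK reference for your version.

```python
from zenml.client import Client

client = Client()

# List the code repositories registered with the ZenML server
# (method and attribute names assumed from recent ZenML releases).
for repo in client.list_code_repositories():
    print(repo.name)
```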
================ File: docs/book/user-guide/production-guide/deploying-zenml.md ================
---
description: Deploying ZenML is the first step to production.
---

{% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %}

# Deploying ZenML

When you first get started with ZenML, it is based on the following architecture on your machine:

![Scenario 1: ZenML default local configuration](../../.gitbook/assets/Scenario1.png)

The SQLite database that you can see in this diagram is used to store all the metadata we produced in the previous guide (pipelines, models, artifacts, etc). In order to move into production, you will need to deploy this server somewhere centrally, outside of your machine. This allows different infrastructure components to interact with the server and enables you to collaborate with your team members:

![Scenario 3: Deployed ZenML Server](../../.gitbook/assets/Scenario3.2.png)

## Choosing how to deploy ZenML

While there are many options for how to [deploy ZenML](../../getting-started/deploying-zenml/README.md), the two simplest ones are:

### Option 1: Sign up for a free ZenML Pro Trial

[ZenML Pro](https://zenml.io/pro) is a managed SaaS solution that offers a one-click deployment for your ZenML server. If you already have the ZenML Python client installed, you can fast-track to connecting to a trial ZenML Pro instance by simply running:

```bash
zenml login --pro
```

Alternatively, click [here](https://cloud.zenml.io/?utm\_source=docs\&utm\_medium=referral\_link\&utm\_campaign=cloud\_promotion\&utm\_content=signup\_link) to start a free trial.

On top of the one-click SaaS experience, ZenML Pro also comes with additional built-in features and a new dashboard that can be helpful to follow along with for this guide. You can always go back to self-hosting after your learning journey is complete.

### Option 2: Self-host ZenML on your cloud provider

As ZenML is open source, it is easy to [self-host it](../../getting-started/deploying-zenml/README.md) in a Kubernetes cluster. If you don't have an existing Kubernetes cluster, you can create one using the documentation for your cloud provider. For convenience, here are links for [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html), [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli), and [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before\_you\_begin).

To learn more about the different options for [deploying ZenML, visit the deployment documentation](../../getting-started/deploying-zenml/README.md).

## Connecting to a deployed ZenML

You can connect your local ZenML client to the ZenML server using the ZenML CLI and a web-based login. This can be executed with the command:

```bash
zenml login
```

{% hint style="info" %} Having trouble connecting with a browser? There are other ways to connect. Read [here](../../how-to/manage-zenml-server/connecting-to-zenml/README.md) for more details. {% endhint %}

This command starts a series of steps in your browser to validate the device from which you are connecting. After that, your local client is connected to the remote ZenML server. Nothing about your workflow changes, except that all the metadata you produce is now tracked centrally in one place.
{% hint style="info" %} You can always go back to the local zenml experience by using `zenml logout` {% endhint %} ## Further resources To learn more about deploying ZenML, check out the following resources: - [Deploying ZenML](../../getting-started/deploying-zenml/README.md): an overview of the different options for deploying ZenML and the system architecture of a deployed ZenML instance. - [Full how-to guides](../../getting-started/deploying-zenml/README.md): guides on how to deploy ZenML on Docker or Hugging Face Spaces or Kubernetes or some other cloud provider.
================ File: docs/book/user-guide/production-guide/end-to-end.md ================ --- description: Put your new knowledge in action with an end-to-end project --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # An end-to-end project That was awesome! We learned so many advanced MLOps production concepts: * The value of [deploying ZenML](deploying-zenml.md) * Abstracting infrastructure configuration into [stacks](understand-stacks.md) * [Connecting remote storage](remote-storage.md) * [Orchestrating on the cloud](cloud-orchestration.md) * [Configuring the pipeline to scale compute](configure-pipeline.md) * [Connecting a git repository](connect-code-repository.md) We will now combine all of these concepts into an end-to-end MLOps project powered by ZenML. ## Get started Start with a fresh virtual environment with no dependencies. Then let's install our dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` We will then use [ZenML templates](../../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) to help us get the code we need for the project: ```bash mkdir zenml_batch_e2e cd zenml_batch_e2e zenml init --template e2e_batch --template-with-defaults # Just in case, we install the requirements again pip install -r requirements.txt ```
**Above doesn't work? Here is an alternative:**

The e2e template is also available as a [ZenML example](https://github.com/zenml-io/zenml/tree/main/examples/e2e). You can clone it:

```bash
git clone --depth 1 git@github.com:zenml-io/zenml.git
cd zenml/examples/e2e
pip install -r requirements.txt
zenml init
```
## What you'll learn The e2e project is a comprehensive project template to cover major use cases of ZenML: a collection of steps and pipelines and, to top it all off, a simple but useful CLI. It showcases the core ZenML concepts for supervised ML with batch predictions. It builds on top of the [starter project](../starter-guide/starter-project.md) with more advanced concepts. As you progress through the e2e batch template, try running the pipelines on a [remote cloud stack](cloud-orchestration.md) on a tracked [git repository](connect-code-repository.md) to practice some of the concepts we have learned in this guide. At the end, don't forget to share the [ZenML e2e template](https://github.com/zenml-io/template-e2e-batch) with your colleagues and see how they react! ## Conclusion and next steps The production guide has now hopefully landed you with an end-to-end MLOps project, powered by a ZenML server connected to your cloud infrastructure. You are now ready to dive deep into writing your own pipelines and stacks. If you are looking to learn more advanced concepts, the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md) is for you. Until then, we wish you the best of luck chasing your MLOps dreams!
================ File: docs/book/user-guide/production-guide/README.md ================ --- icon: tree description: Level up your skills in a production setting. --- # Production guide The ZenML production guide builds upon the [Starter guide](../starter-guide/README.md) and is the next step in the MLOps Engineer journey with ZenML. If you're an ML practitioner hoping to implement a proof of concept within your workplace to showcase the importance of MLOps, this is the place for you.

ZenML simplifies development of MLOps pipelines that can span multiple production stacks.

This guide will focus on shifting gears from running pipelines _locally_ on your machine to running them in _production_ in the cloud. We'll cover:

* [Deploying ZenML](deploying-zenml.md)
* [Understanding stacks](understand-stacks.md)
* [Connecting remote storage](remote-storage.md)
* [Orchestrating on the cloud](cloud-orchestration.md)
* [Configuring the pipeline to scale compute](configure-pipeline.md)
* [Configuring a code repository](connect-code-repository.md)

Like in the starter guide, make sure you have a Python environment ready and `virtualenv` installed to follow along with ease. Since we are now dealing with cloud infrastructure, you'll also want to pick one of the major cloud providers (AWS, GCP, Azure) and make sure the respective CLIs are installed and authorized.

By the end, you will have completed an [end-to-end](end-to-end.md) MLOps project that you can use as inspiration for your own work. Let's get right into it!
================ File: docs/book/user-guide/production-guide/remote-storage.md ================ --- description: Transitioning to remote artifact storage. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Connecting remote storage In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage! Remote storage allows us to store our artifacts in the cloud, which means they're accessible from anywhere and by anyone with the right permissions. This is essential for team collaboration and for managing the larger datasets and models that come with production workloads. When using a stack with remote storage, nothing changes except the fact that the artifacts get materialized in a central and remote storage location. This diagram explains the flow:

Sequence of events that happen when running a pipeline on a remote artifact store.

{% hint style="info" %} Would you like to skip ahead and deploy a full ZenML cloud stack already? Check out the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md), the [stack registration wizard](../../how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md), or [the ZenML Terraform modules](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md) for a shortcut on how to deploy & register a cloud stack. {% endhint %} ## Provisioning and registering a remote artifact store Out of the box, ZenML ships with [many different supported artifact store flavors](../../component-guide/artifact-stores/artifact-stores.md). For convenience, here are some brief instructions on how to quickly get up and running on the major cloud providers: {% tabs %} {% tab title="AWS" %} You will need to install and set up the AWS CLI on your machine as a prerequisite, as covered in [the AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), before you register the S3 Artifact Store. The Amazon Web Services S3 Artifact Store flavor is provided by the [S3 ZenML integration](../../component-guide/artifact-stores/s3.md), you need to install it on your local machine to be able to register an S3 Artifact Store and add it to your stack: ```shell zenml integration install s3 -y ``` {% hint style="info" %} Having trouble with this command? You can use `poetry` or `pip` to install the requirements of any ZenML integration directly. In order to obtain the exact requirements of the AWS S3 integration you can use `zenml integration requirements s3`. {% endhint %} The only configuration parameter mandatory for registering an S3 Artifact Store is the root path URI, which needs to point to an S3 bucket and take the form `s3://bucket-name`. In order to create a S3 bucket, refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). With the URI to your S3 bucket known, registering an S3 Artifact Store can be done as follows: ```shell # Register the S3 artifact-store zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name ``` For more information, read the [dedicated S3 artifact store flavor guide](../../component-guide/artifact-stores/s3.md). {% endtab %} {% tab title="GCP" %} You will need to install and set up the Google Cloud CLI on your machine as a prerequisite, as covered in [the Google Cloud documentation](https://cloud.google.com/sdk/docs/install-sdk) , before you register the GCS Artifact Store. The Google Cloud Storage Artifact Store flavor is provided by the [GCP ZenML integration](../../component-guide/artifact-stores/gcp.md), you need to install it on your local machine to be able to register a GCS Artifact Store and add it to your stack: ```shell zenml integration install gcp -y ``` {% hint style="info" %} Having trouble with this command? You can use `poetry` or `pip` to install the requirements of any ZenML integration directly. In order to obtain the exact requirements of the GCP integrations you can use `zenml integration requirements gcp`. {% endhint %} The only configuration parameter mandatory for registering a GCS Artifact Store is the root path URI, which needs to point to a GCS bucket and take the form `gs://bucket-name`. 
Please read [the Google Cloud Storage documentation](https://cloud.google.com/storage/docs/creating-buckets) on how to provision a GCS bucket. With the URI to your GCS bucket known, registering a GCS Artifact Store can be done as follows:

```shell
# Register the GCS artifact store
zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name
```

For more information, read the [dedicated GCS artifact store flavor guide](../../component-guide/artifact-stores/gcp.md). {% endtab %}

{% tab title="Azure" %} You will need to install and set up the Azure CLI on your machine as a prerequisite, as covered in [the Azure documentation](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli), before you register the Azure Artifact Store.

The Microsoft Azure Artifact Store flavor is provided by the [Azure ZenML integration](../../component-guide/artifact-stores/azure.md). You need to install it on your local machine to be able to register an Azure Artifact Store and add it to your stack:

```shell
zenml integration install azure -y
```

{% hint style="info" %} Having trouble with this command? You can use `poetry` or `pip` to install the requirements of any ZenML integration directly. In order to obtain the exact requirements of the Azure integration you can use `zenml integration requirements azure`. {% endhint %}

The only configuration parameter mandatory for registering an Azure Artifact Store is the root path URI, which needs to point to an Azure Blob Storage container and take the form `az://container-name` or `abfs://container-name`. Please read [the Azure Blob Storage documentation](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal) on how to provision an Azure Blob Storage container.

With the URI to your Azure Blob Storage container known, registering an Azure Artifact Store can be done as follows:

```shell
# Register the Azure artifact store
zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name
```

For more information, read the [dedicated Azure artifact store flavor guide](../../component-guide/artifact-stores/azure.md). {% endtab %}

{% tab title="Other" %} You can create a remote artifact store in pretty much any environment, including other cloud providers, by using a cloud-agnostic artifact store such as [MinIO](../../component-guide/artifact-stores/artifact-stores.md). It is also relatively simple to create a [custom stack component flavor](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for your use case. {% endtab %} {% endtabs %}

{% hint style="info" %} Having trouble with setting up infrastructure? Join the [ZenML community](https://zenml.io/slack) and ask for help! {% endhint %}

## Configuring permissions with your first service connector

While you can go ahead and [run your pipeline on your stack](remote-storage.md#running-a-pipeline-on-a-cloud-stack) if your local client is configured to access it, it is best practice to use a [service connector](../../how-to/auth-management/) for this purpose. Service connectors are quite a complicated concept (we have a whole [docs section](../../how-to/auth-management/) on them), but we're going to start with a very basic approach.

First, let's understand what a service connector does. In simple words, a service connector contains credentials that grant stack components access to cloud infrastructure.
These credentials are stored in the form of a [secret](../../how-to/project-setup-and-management/interact-with-secrets.md), and are available to the ZenML server to use. Using these credentials, the service connector brokers a short-lived token and grants temporary permissions to the stack component to access that infrastructure. This diagram represents this process:

Service Connectors abstract away complexity and implement security best practices

{% tabs %} {% tab title="AWS" %} There are [many ways to create an AWS service connector](../../how-to/infrastructure-deployment/auth-management/aws-service-connector.md#authentication-methods), but for the sake of this guide, we recommend creating one by [using the IAM method](../../how-to/infrastructure-deployment/auth-management/aws-service-connector.md#aws-iam-role). ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` {% endtab %} {% tab title="GCP" %} There are [many ways to create a GCP service connector](../../how-to/infrastructure-deployment/auth-management/gcp-service-connector.md#authentication-methods), but for the sake of this guide, we recommend creating one by [using the Service Account method](../../how-to/infrastructure-deployment/auth-management/gcp-service-connector.md#gcp-service-account). ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= --generate_temporary_tokens=False ``` {% endtab %} {% tab title="Azure" %} There are [many ways to create an Azure service connector](../../how-to/infrastructure-deployment/auth-management/azure-service-connector.md#authentication-methods), but for the sake of this guide, we recommend creating one by [using the Service Principal method](../../how-to/infrastructure-deployment/auth-management/azure-service-connector.md#azure-service-principal). ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` {% endtab %} {% endtabs %} Once we have our service connector, we can now attach it to stack components. In this case, we are going to connect it to our remote artifact store: ```shell zenml artifact-store connect cloud_artifact_store --connector cloud_connector ``` Now, every time you (or anyone else with access) uses the `cloud_artifact_store`, they will be granted a temporary token that will grant them access to the remote storage. Therefore, your colleagues don't need to worry about setting up credentials and installing clients locally! ## Running a pipeline on a cloud stack Now that we have our remote artifact store registered, we can [register a new stack](understand-stacks.md#registering-a-stack) with it, just like we did in the previous chapter: {% tabs %} {% tab title="CLI" %} ```shell zenml stack register local_with_remote_storage -o default -a cloud_artifact_store ``` {% endtab %} {% tab title="Dashboard" %}

Register a new stack.

{% endtab %} {% endtabs %} Now, using the [code from the previous chapter](understand-stacks.md#run-a-pipeline-on-the-new-local-stack), we run a training pipeline: Set our `local_with_remote_storage` stack active: ```shell zenml stack set local_with_remote_storage ``` Let us continue with the example from the previous page and run the training pipeline: ```shell python run.py --training-pipeline ``` When you run that pipeline, ZenML will automatically store the artifacts in the specified remote storage, ensuring that they are preserved and accessible for future runs and by your team members. You can ask your colleagues to connect to the same [ZenML server](deploying-zenml.md), and you will notice that if they run the same pipeline, the pipeline would be partially cached, **even if they have not run the pipeline themselves before**. You can list your artifact versions as follows: {% tabs %} {% tab title="CLI" %} ```shell # This will give you the artifacts from the last 15 minutes zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')" ``` {% endtab %} {% tab title="Cloud Dashboard" %} [ZenML Pro](https://zenml.io/pro) features an [Artifact Control Plane](../starter-guide/manage-artifacts.md) to visualize artifact versions:

See artifact versions in the cloud.

{% endtab %} {% endtabs %} You will notice above that some artifacts are stored locally, while others are stored in a remote storage location. By connecting remote storage, you're taking a significant step towards building a collaborative and scalable MLOps workflow. Your artifacts are no longer tied to a single machine but are now part of a cloud-based ecosystem, ready to be shared and built upon.
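Besides the CLI and the dashboard, you can also inspect artifact versions programmatically. The snippet below is a minimal sketch, assuming `zenml.client.Client` exposes `list_artifact_versions()` as in recent releases; the exact method and attribute names may differ for your version, so treat them as assumptions.

```python
from zenml.client import Client

client = Client()

# Page through recently created artifact versions and print where each one
# was materialized -- for a remote artifact store, the URIs point to your
# cloud bucket rather than your local disk.
for artifact_version in client.list_artifact_versions(size=15):
    print(artifact_version.uri)
```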
================ File: docs/book/user-guide/production-guide/understand-stacks.md ================ --- description: Learning how to switch the infrastructure backend of your code. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Understanding stacks Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running [your first pipelines](../starter-guide/create-an-ml-pipeline.md), you might have already noticed the term `stack` in the logs and on the dashboard. A `stack` is the configuration of tools and infrastructure that your pipelines can run on. When you run ZenML code without configuring a stack, the pipeline will run on the so-called `default` stack.

ZenML is the translation layer that allows your code to run on any of your stacks

### Separation of code from configuration and infrastructure As visualized in the diagram above, there are two separate domains that are connected through ZenML. The left side shows the code domain. The user's Python code is translated into a ZenML pipeline. On the right side, you can see the infrastructure domain, in this case, an instance of the `default` stack. By separating these two domains, it is easy to switch the environment that the pipeline runs on without making any changes in the code. It also allows domain experts to write code/configure infrastructure without worrying about the other domain. {% hint style="info" %} You can get the `pip` requirements of your stack by running the `zenml stack export-requirements ` CLI command. {% endhint %} ### The `default` stack `zenml stack describe` lets you find out details about your active stack: ```bash ... Stack Configuration ┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ COMPONENT_TYPE │ COMPONENT_NAME ┃ ┠────────────────┼────────────────┨ ┃ ARTIFACT_STORE │ default ┃ ┠────────────────┼────────────────┨ ┃ ORCHESTRATOR │ default ┃ ┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ 'default' stack (ACTIVE) Stack 'default' with id '...' is owned by user default and is 'private'. ... ``` `zenml stack list` lets you see all stacks that are registered in your zenml deployment. ```bash ... ┏━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━┓ ┃ ACTIVE │ STACK NAME │ STACK ID │ SHARED │ OWNER │ ARTIFACT_STORE │ ORCHESTRATOR ┃ ┠────────┼────────────┼───────────┼────────┼─────────┼────────────────┼──────────────┨ ┃ 👉 │ default │ ... │ ➖ │ default │ default │ default ┃ ┗━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━┛ ... ``` {% hint style="info" %} As you can see a stack can be **active** on your **client**. This simply means that any pipeline you run will be using the **active stack** as its environment. {% endhint %} ## Components of a stack As you can see in the section above, a stack consists of multiple components. All stacks have at minimum an **orchestrator** and an **artifact store**. ### Orchestrator The **orchestrator** is responsible for executing the pipeline code. In the simplest case, this will be a simple Python thread on your machine. Let's explore this default orchestrator. `zenml orchestrator list` lets you see all orchestrators that are registered in your zenml deployment. ```bash ┏━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┓ ┃ ACTIVE │ NAME │ COMPONENT ID │ FLAVOR │ SHARED │ OWNER ┃ ┠────────┼─────────┼──────────────┼────────┼────────┼─────────┨ ┃ 👉 │ default │ ... │ local │ ➖ │ default ┃ ┗━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┛ ``` ### Artifact store The **artifact store** is responsible for persisting the step outputs. As we learned in the previous section, the step outputs are not passed along in memory, rather the outputs of each step are stored in the **artifact store** and then loaded from there when the next step needs them. By default this will also be on your own machine: `zenml artifact-store list` lets you see all artifact stores that are registered in your zenml deployment. ```bash ┏━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┓ ┃ ACTIVE │ NAME │ COMPONENT ID │ FLAVOR │ SHARED │ OWNER ┃ ┠────────┼─────────┼──────────────┼────────┼────────┼─────────┨ ┃ 👉 │ default │ ... 
│ local │ ➖ │ default ┃ ┗━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┛ ``` ### Other stack components There are many more components that you can add to your stacks, like experiment trackers, model deployers, and more. You can see all supported stack component types in a single table view [here](../../component-guide/README.md) Perhaps the most important stack component after the orchestrator and the artifact store is the [container registry](../../component-guide/container-registries/container-registries.md). A container registry stores all your containerized images, which hold all your code and the environment needed to execute them. We will learn more about them in the next section! ## Registering a stack Just to illustrate how to interact with stacks, let's create an alternate local stack. We start by first creating a local artifact store. ### Create an artifact store ```bash zenml artifact-store register my_artifact_store --flavor=local ``` Let's understand the individual parts of this command: * `artifact-store` : This describes the top-level group, to find other stack components simply run `zenml --help` * `register` : Here we want to register a new component, instead, we could also `update` , `delete` and more `zenml artifact-store --help` will give you all possibilities * `my_artifact_store` : This is the unique name that the stack component will have. * `--flavor=local`: A flavor is a possible implementation for a stack component. So in the case of an artifact store, this could be an s3-bucket or a local filesystem. You can find out all possibilities with `zenml artifact-store flavor --list` This will be the output that you can expect from the command above. ```bash Using the default local database. Running with active stack: 'default' (global) Successfully registered artifact_store `my_artifact_store`.bash ``` To see the new artifact store that you just registered, just run: ```bash zenml artifact-store describe my_artifact_store ``` ### Create a local stack With the artifact store created, we can now create a new stack with this artifact store. ```bash zenml stack register a_new_local_stack -o default -a my_artifact_store ``` * `stack` : This is the CLI group that enables interactions with the stacks * `register`: Here we want to register a new stack. Explore other operations with`zenml stack --help`. * `a_new_local_stack` : This is the unique name that the stack will have. * `--orchestrator` or `-o` are used to specify which orchestrator to use for the stack * `--artifact-store` or `-a` are used to specify which artifact store to use for the stack The output for the command should look something like this: ```bash Using the default local database. Stack 'a_new_local_stack' successfully registered! ``` You can inspect the stack with the following command: ```bash zenml stack describe a_new_local_stack ``` Which will give you an output like this: ```bash Stack Configuration ┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┓ ┃ COMPONENT_TYPE │ COMPONENT_NAME ┃ ┠────────────────┼───────────────────┨ ┃ ORCHESTRATOR │ default ┃ ┠────────────────┼───────────────────┨ ┃ ARTIFACT_STORE │ my_artifact_store ┃ ┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┛ 'a_new_local_stack' stack Stack 'a_new_local_stack' with id '...' is owned by user default and is 'private'. 
``` ### Switch stacks with our VS Code extension ![GIF of our VS code extension, showing some of the uses of the sidebar](../../.gitbook/assets/zenml-extension-shortened.gif) If you are using [our VS Code extension](https://marketplace.visualstudio.com/items?itemName=ZenML.zenml-vscode), you can easily view and switch your stacks by opening the sidebar (click on the ZenML icon). You can then click on the stack you want to switch to as well as view the stack components it's made up of. ### Run a pipeline on the new local stack Let's use the pipeline in our starter project from the [previous guide](../starter-guide/starter-project.md) to see it in action. If you have not already, clone the starter template: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y mkdir zenml_starter cd zenml_starter zenml init --template starter --template-with-defaults # Just in case, we install the requirements again pip install -r requirements.txt ```
**Above doesn't work? Here is an alternative:**

The starter template is the same as the [ZenML mlops starter example](https://github.com/zenml-io/zenml/tree/main/examples/mlops_starter). You can clone it like so:

```bash
git clone --depth 1 git@github.com:zenml-io/zenml.git
cd zenml/examples/mlops_starter
pip install -r requirements.txt
zenml init
```
To run a pipeline using the new stack: 1. Set the stack as active on your client ```bash zenml stack set a_new_local_stack ``` 2. Run your pipeline code: ```bash python run.py --training-pipeline ``` Keep this code handy as we'll be using it in the next chapters!
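If you prefer Python over the CLI for introspection, the active stack can also be inspected through the ZenML client. This is a minimal sketch, assuming `Client().active_stack_model` and its `components` mapping behave as in recent ZenML releases; double-check the SDK reference for your version.

```python
from zenml.client import Client

client = Client()
stack = client.active_stack_model  # the stack set via `zenml stack set`

print(f"Active stack: {stack.name}")
# `components` maps each component type (orchestrator, artifact store, ...)
# to the registered component(s) of that type.
for component_type, components in stack.components.items():
    for component in components:
        print(f"  {component_type}: {component.name}")
```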
================ File: docs/book/user-guide/starter-guide/cache-previous-executions.md ================ --- description: Iterating quickly with ZenML through caching. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Cache previous executions Developing machine learning pipelines is iterative in nature. ZenML speeds up development in this work with step caching. In the logs of your previous runs, you might have noticed at this point that rerunning the pipeline a second time will use caching on the first step: ```bash Step training_data_loader has started. Using cached version of training_data_loader. Step svc_trainer has started. Train accuracy: 0.3416666666666667 Step svc_trainer has finished in 0.932s. ``` ![DAG of a cached pipeline run](../../.gitbook/assets/CachedDag.png) ZenML understands that nothing has changed between subsequent runs, so it re-uses the output of the previous run (the outputs are persisted in the [artifact store](../../component-guide/artifact-stores/artifact-stores.md)). This behavior is known as **caching**. In ZenML, caching is enabled by default. Since ZenML automatically tracks and versions all inputs, outputs, and parameters of steps and pipelines, steps will not be re-executed within the **same pipeline** on subsequent pipeline runs as long as there is **no change** in the inputs, parameters, or code of a step. If you run a pipeline without a schedule, ZenML will be able to compute the cached steps on your client machine. This means that these steps don't have to be executed by your [orchestrator](../../component-guide/orchestrators/orchestrators.md), which can save time and money when you're executing your pipelines remotely. If you always want your orchestrator to compute cached steps dynamically, you can set the `ZENML_PREVENT_CLIENT_SIDE_CACHING` environment variable to `True`. {% hint style="warning" %} The caching does not automatically detect changes within the file system or on external APIs. Make sure to **manually** set caching to `False` on steps that depend on **external inputs, file-system changes,** or if the step should run regardless of caching. ```python @step(enable_cache=False) def load_data_from_external_system(...) -> ...: # This step will always be run ``` {% endhint %} ## Configuring the caching behavior of your pipelines With caching as the default behavior, there will be times when you need to disable it. There are levels at which you can take control of when and where caching is used. ```mermaid graph LR A["Pipeline Settings"] -->|overwritten by| B["Step Settings"] B["Step Settings"] -->|overwritten by| C["Changes in Code, Inputs or Parameters"] ``` ### Caching at the pipeline level On a pipeline level, the caching policy can be set as a parameter within the `@pipeline` decorator as shown below: ```python @pipeline(enable_cache=False) def first_pipeline(....): """Pipeline with cache disabled""" ``` The setting above will disable caching for all steps in the pipeline unless a step explicitly sets `enable_cache=True` ( see below). {% hint style="info" %} When writing your pipelines, be explicit. This makes it clear when looking at the code if caching is enabled or disabled for any given pipeline. 
{% endhint %} #### Dynamically configuring caching for a pipeline run Sometimes you want to have control over caching at runtime instead of defaulting to the hard-coded pipeline and step decorator settings. ZenML offers a way to override all caching settings at runtime: ```python first_pipeline = first_pipeline.with_options(enable_cache=False) ``` The code above disables caching for all steps of your pipeline, no matter what you have configured in the `@step` or `@pipeline` decorators. The `with_options` function allows you to configure all sorts of things this way. We will learn more about it in the [coming chapters](../production-guide/configure-pipeline.md)! ### Caching at a step-level Caching can also be explicitly configured at a step level via a parameter of the `@step` decorator: ```python @step(enable_cache=False) def import_data_from_api(...): """Import most up-to-date data from public api""" ... ``` The code above turns caching off for this step only. You can also use `with_options` with the step, just as in the pipeline: ```python import_data_from_api = import_data_from_api.with_options(enable_cache=False) # use in your pipeline directly ``` ## Code Example This section combines all the code from this section into one simple script that you can use to see caching easily:
Code Example of this Section ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: """Load the iris dataset as tuple of Pandas DataFrame / Series.""" iris = load_iris(as_frame=True) X_train, X_test, y_train, y_test = train_test_split( iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42 ) return X_train, X_test, y_train, y_test @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: """Train a sklearn SVC classifier and log to MLflow.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() # Step one will use cache, step two will rerun. # ZenML will detect a different value for the # `gamma` input of the second step and disable caching. logger.info("\n\nFirst step cached, second not due to parameter change") training_pipeline(gamma=0.0001) # This will disable cache for the second step. logger.info("\n\nFirst step cached, second not due to settings") svc_trainer = svc_trainer.with_options(enable_cache=False) training_pipeline() # This will disable cache for all steps. logger.info("\n\nCaching disabled for the entire pipeline") training_pipeline.with_options(enable_cache=False)() ```
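After running the script, you may want to verify which steps were actually cached. This is a hypothetical sketch, assuming the `Client().get_pipeline(...).last_run` accessor and the `status` field on step runs work as in recent ZenML releases; consult the SDK docs for your exact version.

```python
from zenml.client import Client

client = Client()

# Fetch the most recent run of the pipeline and print each step's status
# (e.g. completed vs. cached) -- accessor names assumed, not guaranteed.
run = client.get_pipeline("training_pipeline").last_run
for step_name, step_run in run.steps.items():
    print(f"{step_name}: {step_run.status}")
```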
================ File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md ================ --- description: Start with the basics of steps and pipelines. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Create an ML pipeline In the quest for production-ready ML models, workflows can quickly become complex. Decoupling and standardizing stages such as data ingestion, preprocessing, and model evaluation allows for more manageable, reusable, and scalable processes. ZenML pipelines facilitate this by enabling each stage—represented as **Steps**—to be modularly developed and then integrated smoothly into an end-to-end **Pipeline**. Leveraging ZenML, you can create and manage robust, scalable machine learning (ML) pipelines. Whether for data preparation, model training, or deploying predictions, ZenML standardizes and streamlines the process, ensuring reproducibility and efficiency.

ZenML pipelines are simple Python code

{% hint style="info" %} Before starting this guide, make sure you have [installed ZenML](../../getting-started/installation.md): ```shell pip install "zenml[server]" zenml login --local # Will launch the dashboard locally ``` {% endhint %} ## Start with a simple ML pipeline Let's jump into an example that demonstrates how a simple pipeline can be set up in ZenML, featuring actual ML components to give you a better sense of its application. ```python from zenml import pipeline, step @step def load_data() -> dict: """Simulates loading of training data and labels.""" training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: dict) -> None: """ A mock 'training' process that also demonstrates using the input data. In a real-world scenario, this would be replaced with actual model fitting logic. """ total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): """Define a pipeline that connects the steps.""" dataset = load_data() train_model(dataset) if __name__ == "__main__": run = simple_ml_pipeline() # You can now use the `run` object to see steps, outputs, etc. ``` {% hint style="info" %} * **`@step`** is a decorator that converts its function into a step that can be used within a pipeline * **`@pipeline`** defines a function as a pipeline and within this function, the steps are called and their outputs link them together. {% endhint %} Copy this code into a new file and name it `run.py`. Then run it with your command line: {% code overflow="wrap" %} ```bash $ python run.py Initiating a new run for the pipeline: simple_ml_pipeline. Executing a new run. Using user: hamza@zenml.io Using stack: default orchestrator: default artifact_store: default Step load_data has started. Step load_data has finished in 0.385s. Step train_model has started. Trained model using 3 data points. Feature sum is 21, label sum is 1 Step train_model has finished in 0.265s. Run simple_ml_pipeline-2023_11_23-10_51_59_657489 has finished in 1.612s. Pipeline visualization can be seen in the ZenML Dashboard. Run zenml login --local to see your pipeline! ``` {% endcode %} ### Explore the dashboard Once the pipeline has finished its execution, use the `zenml login --local` command to view the results in the ZenML Dashboard. Using that command will open up the browser automatically.

Landing Page of the Dashboard

Usually, the dashboard is accessible at [http://127.0.0.1:8237/](http://127.0.0.1:8237/). Log in with the default username **"default"** (password not required) and see your recently run pipeline. Browse through the pipeline components, such as the execution history and artifacts produced by your steps. Use the DAG visualization to understand the flow of data and to ensure all steps are completed successfully.

Diagram view of the run, with the runtime attributes of step 2.

For further insights, explore the logging and artifact information associated with each step, which can reveal details about the data and intermediate results. If you have closed the browser tab with the ZenML dashboard, you can always reopen it by running `zenml show` in your terminal. ## Understanding steps and artifacts When you ran the pipeline, each individual function that ran is shown in the DAG visualization as a `step` and is marked with the function name. Steps are connected with `artifacts`, which are simply the objects that are returned by these functions and input into downstream functions. This simple logic lets us break down our entire machine learning code into a sequence of tasks that pass data between each other. The artifacts produced by your steps are automatically stored and versioned by ZenML. The code that produced these artifacts is also automatically tracked. The parameters and all other configuration is also automatically captured. So you can see, by simply structuring your code within some functions and adding some decorators, we are one step closer to having a more tracked and reproducible codebase! ## Expanding to a Full Machine Learning Workflow With the fundamentals in hand, let’s escalate our simple pipeline to a complete ML workflow. For this task, we will use the well-known Iris dataset to train a Support Vector Classifier (SVC). Let's start with the imports. ```python from typing_extensions import Annotated # or `from typing import Annotated on Python 3.9+ from typing import Tuple import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step ``` Make sure to install the requirements as well: ```bash pip install matplotlib zenml integration install sklearn -y ``` In this case, ZenML has an integration with `sklearn` so you can use the ZenML CLI to install the right version directly. {% hint style="info" %} The `zenml integration install sklearn` command is simply doing a `pip install` of `sklearn` behind the scenes. If something goes wrong, one can always use `zenml integration requirements sklearn` to see which requirements are compatible and install using pip (or any other tool) directly. (If no specific requirements are mentioned for an integration then this means we support using all possible versions of that integration/package.) {% endhint %} ### Define a data loader with multiple outputs A typical start of an ML pipeline is usually loading data from some source. This step will sometimes have multiple outputs. To define such a step, use a `Tuple` type annotation. Additionally, you can use the `Annotated` annotation to assign [custom output names](manage-artifacts.md#giving-names-to-your-artifacts). Here we load an open-source dataset and split it into a train and a test dataset. 
```python import logging @step def training_data_loader() -> Tuple[ # Notice we use a Tuple and Annotated to return # multiple named outputs Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: """Load the iris dataset as a tuple of Pandas DataFrame / Series.""" logging.info("Loading iris...") iris = load_iris(as_frame=True) logging.info("Splitting train and test...") X_train, X_test, y_train, y_test = train_test_split( iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42 ) return X_train, X_test, y_train, y_test ``` {% hint style="info" %} ZenML records the root python logging handler's output into the artifact store as a side-effect of running a step. Therefore, when writing steps, use the `logging` module to record logs, to ensure that these logs then show up in the ZenML dashboard. {% endhint %} ### Create a parameterized training step Here we are creating a training step for a support vector machine classifier with `sklearn`. As we might want to adjust the hyperparameter `gamma` later on, we define it as an input value to the step as well. ```python @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc ``` {% hint style="info" %} If you want to run just a single step on your ZenML stack, all you need to do is call the step function outside of a ZenML pipeline. For example: ```python model, train_acc = svc_trainer(X_train=..., y_train=...) ``` {% endhint %} Next, we will combine our two steps into a pipeline and run it. As you can see, the parameter gamma is configurable as a pipeline input as well. ```python @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline(gamma=0.0015) ``` {% hint style="info" %} Best Practice: Always nest the actual execution of the pipeline inside an `if __name__ == "__main__"` condition. This ensures that loading the pipeline from elsewhere does not also run it. ```python if __name__ == "__main__": training_pipeline() ``` {% endhint %} Running `python run.py` should look somewhat like this in the terminal:
```bash
Registered new pipeline with name `training_pipeline`.
.
.
.
Pipeline run `training_pipeline-2023_04_29-09_19_54_273710` has finished in 0.236s.
```
In the dashboard, you should now be able to see this new run, along with its runtime configuration and a visualization of the training data.

Run created by the code in this section along with a visualization of the ground-truth distribution.

### Configure with a YAML file Instead of configuring your pipeline runs in code, you can also do so from a YAML file. This is best when we do not want to make unnecessary changes to the code; in production this is usually the case. To do this, simply reference the file like this: ```python # Configure the pipeline training_pipeline = training_pipeline.with_options( config_path='/local/path/to/config.yaml' ) # Run the pipeline training_pipeline() ``` The reference to a local file will change depending on where you are executing the pipeline and code from, so please bear this in mind. It is best practice to put all config files in a configs directory at the root of your repository and check them into git history. A simple version of such a YAML file could be: ```yaml parameters: gamma: 0.01 ``` Please note that this would take precedence over any parameters passed in the code. If you are unsure how to format this config file, you can generate a template config file from a pipeline. ```python training_pipeline.write_run_configuration_template(path='/local/path/to/config.yaml') ``` Check out [this section](../../how-to/pipeline-development/use-configuration-files/README.md) for advanced configuration options. ## Full Code Example This section combines all the code from this section into one simple script that you can use to run easily:
Code Example of this Section ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: """Load the iris dataset as tuple of Pandas DataFrame / Series.""" iris = load_iris(as_frame=True) X_train, X_test, y_train, y_test = train_test_split( iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42 ) return X_train, X_test, y_train, y_test @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: """Train a sklearn SVC classifier and log to MLflow.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ```
================ File: docs/book/user-guide/starter-guide/manage-artifacts.md ================ --- description: Understand and adjust how ZenML versions your data. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Manage artifacts Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact—be it data, models, or evaluations—is automatically tracked and versioned upon pipeline execution. ![Walkthrough of ZenML Artifact Control Plane (Dashboard available only on ZenML Pro)](../../.gitbook/assets/dcp\_walkthrough.gif) This guide will delve into artifact versioning and management, showing you how to efficiently name, organize, and utilize your data with the ZenML framework. ## Managing artifacts produced by ZenML pipelines Artifacts, the outputs of your steps and pipelines, are automatically versioned and stored in the artifact store. Configuring these artifacts is pivotal for transparent and efficient pipeline development. ### Giving names to your artifacts Assigning custom names to your artifacts can greatly enhance their discoverability and manageability. As best practice, utilize the `Annotated` object within your steps to give precise, human-readable names to outputs: ```python from typing_extensions import Annotated import pandas as pd from sklearn.datasets import load_iris from zenml import pipeline, step # Using Annotated to name our dataset @step def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]: """Load the iris dataset as pandas dataframe.""" iris = load_iris(as_frame=True) return iris.get("frame") @pipeline def feature_engineering_pipeline(): training_data_loader() if __name__ == "__main__": feature_engineering_pipeline() ``` {% hint style="info" %} Unspecified artifact outputs default to a naming pattern of `{pipeline_name}::{step_name}::output`. For visual exploration in the ZenML dashboard, it's best practice to give significant outputs clear custom names. {% endhint %} Artifacts named `iris_dataset` can then be found swiftly using various ZenML interfaces: {% tabs %} {% tab title="OSS (CLI)" %} To list artifacts: `zenml artifact list` {% endtab %} {% tab title="Cloud (Dashboard)" %} The [ZenML Pro](https://zenml.io/pro) dashboard offers advanced visualization features for artifact exploration.

ZenML Artifact Control Plane.

{% hint style="info" %} To prevent visual clutter, make sure to assign names to your most important artifacts that you would like to explore visually. {% endhint %} {% endtab %} {% endtabs %} ### Versioning artifacts manually ZenML automatically versions all created artifacts using auto-incremented numbering. I.e., if you have defined a step creating an artifact named `iris_dataset` as shown above, the first execution of the step will create an artifact with this name and version "1", the second execution will create version "2", and so on. While ZenML handles artifact versioning automatically, you have the option to specify custom versions using the [`ArtifactConfig`](https://sdkdocs.zenml.io/latest/core\_code\_docs/core-model/#zenml.model.artifact\_config.DataArtifactConfig). This may come into play during critical runs like production releases. ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> ( Annotated[ pd.DataFrame, # Add `ArtifactConfig` to control more properties of your artifact ArtifactConfig( name="iris_dataset", version="raw_2023" ), ] ): ... ``` The next execution of this step will then create an artifact with the name `iris_dataset` and version `raw_2023`. This is primarily useful if you are making a particularly important pipeline run (such as a release) whose artifacts you want to distinguish at a glance later. {% hint style="warning" %} Since custom versions cannot be duplicated, the above step can only be run once successfully. To avoid altering your code frequently, consider using a [YAML config](../production-guide/configure-pipeline.md) for artifact versioning. {% endhint %} After execution, `iris_dataset` and its version `raw_2023` can be seen using: {% tabs %} {% tab title="OSS (CLI)" %} To list versions: `zenml artifact version list` {% endtab %} {% tab title="Cloud (Dashboard)" %} The Cloud dashboard visualizes version history for your review.

ZenML Data Versions List.

{% endtab %} {% endtabs %} ### Add metadata and tags to artifacts If you would like to extend your artifacts with extra metadata or tags you can do so by following the patterns demonstrated below: ```python from zenml import step, get_step_context, ArtifactConfig from typing_extensions import Annotated # below we annotate output with `ArtifactConfig` giving it a name, # run_metadata and tags. As a result, the created artifact # `artifact_name` will get configured with metadata and tags @step def annotation_approach() -> ( Annotated[ str, ArtifactConfig( name="artifact_name", run_metadata={"metadata_key": "metadata_value"}, tags=["tag_name"], ), ] ): return "string" # below we annotate output using functional approach with # run_metadata and tags. As a result, the created artifact # `artifact_name` will get configured with metadata and tags @step def annotation_approach() -> Annotated[str, "artifact_name"]: step_context = get_step_context() step_context.add_output_metadata( output_name="artifact_name", metadata={"metadata_key": "metadata_value"} ) step_context.add_output_tags(output_name="artifact_name", tags=["tag_name"]) return "string" # below we combine both approaches, so the artifact will get # metadata and tags from both sources @step def annotation_approach() -> ( Annotated[ str, ArtifactConfig( name="artifact_name", run_metadata={"metadata_key": "metadata_value"}, tags=["tag_name"], ), ] ): step_context = get_step_context() step_context.add_output_metadata( output_name="artifact_name", metadata={"metadata_key2": "metadata_value2"} ) step_context.add_output_tags(output_name="artifact_name", tags=["tag_name2"]) return "string" ``` ## Comparing metadata across runs (Pro) The [ZenML Pro](https://www.zenml.io/pro) dashboard includes an Experiment Comparison tool that allows you to visualize and analyze metadata across different pipeline runs. This feature helps you understand patterns and changes in your pipeline's behavior over time. ### Using the comparison views The tool offers two complementary views for analyzing your metadata: #### Table View The tabular view provides a structured comparison of metadata across runs: ![Comparing metadata values across different pipeline runs in table view.](../../../book/.gitbook/assets/table-view.png) This view automatically calculates changes between runs and allows you to: * Sort and filter metadata values * Track changes over time * Compare up to 20 runs simultaneously #### Parallel Coordinates View The parallel coordinates visualization helps identify relationships between different metadata parameters: ![Comparing metadata values across different pipeline runs in parallel coordinates view.](../../../book/.gitbook/assets/coordinates-view.png) This view is particularly useful for: * Discovering correlations between different metrics * Identifying patterns across pipeline runs * Filtering and focusing on specific parameter ranges ### Accessing the comparison tool To compare metadata across runs: 1. Navigate to any pipeline in your dashboard 2. Click the "Compare" button in the top navigation 3. Select the runs you want to compare 4. Switch between table and parallel coordinates views using the tabs {% hint style="info" %} The comparison tool works with any numerical metadata (`float` or `int`) that you've logged in your pipelines. Make sure to log meaningful metrics in your steps to make the most of this feature. 
{% endhint %} ### Sharing comparisons The tool preserves your comparison configuration in the URL, making it easy to share specific views with team members. Simply copy and share the URL to allow others to see the same comparison with identical settings and filters. {% hint style="warning" %} This feature is currently in Alpha Preview. We encourage you to share feedback about your use cases and requirements through our Slack community. {% endhint %} ## Specify a type for your artifacts Assigning a type to an artifact allows ZenML to highlight them differently in the dashboard and also lets you filter your artifacts better. {% hint style="info" %} If you don't specify a type for your artifact, ZenML will use the default artifact type provided by the materializer that is used to save the artifact. {% endhint %} ```python from typing_extensions import Annotated from zenml import ArtifactConfig, save_artifact, step from zenml.enums import ArtifactType # Assign an artifact type to a step output @step def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactType.MODEL)]: return MyCustomModel(...) # Assign an artifact type when manually saving artifacts model = ... save_artifact(model, name="model", artifact_type=ArtifactType.MODEL) ``` ## Consuming external artifacts within a pipeline While most pipelines start with a step that produces an artifact, it is often the case to want to consume artifacts external from the pipeline. The `ExternalArtifact` class can be used to initialize an artifact within ZenML with any arbitrary data type. For example, let's say we have a Snowflake query that produces a dataframe, or a CSV file that we need to read. External artifacts can be used for this, to pass values to steps that are neither JSON serializable nor produced by an upstream step: ```python import numpy as np from zenml import ExternalArtifact, pipeline, step @step def print_data(data: np.ndarray): print(data) @pipeline def printing_pipeline(): # One can also pass data directly into the ExternalArtifact # to create a new artifact on the fly data = ExternalArtifact(value=np.array([0])) print_data(data=data) if __name__ == "__main__": printing_pipeline() ``` Optionally, you can configure the `ExternalArtifact` to use a custom [materializer](../../how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) for your data or disable artifact metadata and visualizations. Check out the [SDK docs](https://sdkdocs.zenml.io/latest/core\_code\_docs/core-artifacts/#zenml.artifacts.external\_artifact.ExternalArtifact) for all available options. {% hint style="info" %} Using an `ExternalArtifact` for your step automatically disables caching for the step. {% endhint %} ## Consuming artifacts produced by other pipelines It is also common to consume an artifact downstream after producing it in an upstream pipeline or step. As we have learned in the [previous section](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md#fetching-artifacts-directly), the `Client` can be used to fetch artifacts directly inside the pipeline code: ```python from uuid import UUID import pandas as pd from zenml import step, pipeline from zenml.client import Client @step def trainer(dataset: pd.DataFrame): ... 
@pipeline def training_pipeline(): client = Client() # Fetch by ID dataset_artifact = client.get_artifact_version( name_id_or_prefix=UUID("3a92ae32-a764-4420-98ba-07da8f742b76") ) # Fetch by name alone - uses the latest version of this artifact dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset") # Fetch by name and version dataset_artifact = client.get_artifact_version( name_id_or_prefix="iris_dataset", version="raw_2023" ) # Pass into any step trainer(dataset=dataset_artifact) if __name__ == "__main__": training_pipeline() ``` {% hint style="info" %} Calls of `Client` methods like `get_artifact_version` directly inside the pipeline code makes use of ZenML's [late materialization](../../how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md) behind the scenes. {% endhint %} If you would like to bypass materialization entirely and just download the data or files associated with a particular artifact version, you can use the `.download_files` method: ```python from zenml.client import Client client = Client() artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset") artifact.download_files("path/to/save.zip") ``` Take note that the path must have the `.zip` extension, as the artifact data will be saved as a zip file. Make sure to handle any exceptions that may arise from this operation. ## Managing artifacts **not** produced by ZenML pipelines Sometimes, artifacts can be produced completely outside of ZenML. A good example of this is the predictions produced by a deployed model. ```python # A model is deployed, running in a FastAPI container # Let's use the ZenML client to fetch the latest model and make predictions from zenml.client import Client from zenml import save_artifact # Fetch the model from a registry or a previous pipeline model = ... # Let's make a prediction prediction = model.predict([[1, 1, 1, 1]]) # We now store this prediction in ZenML as an artifact # This will create a new artifact version save_artifact(prediction, name="iris_predictions") ``` You can also load any artifact stored within ZenML using the `load_artifact` method: ```python # Loads the latest version load_artifact("iris_predictions") ``` {% hint style="info" %} `load_artifact` is simply short-hand for the following Client call: ```python from zenml.client import Client client = Client() client.get_artifact("iris_predictions").load() ``` {% endhint %} Even if an artifact is created externally, it can be treated like any other artifact produced by ZenML steps - with all the functionalities described above! {% hint style="info" %} It is also possible to use these functions inside your ZenML steps. However, it is usually cleaner to return the artifacts as outputs of your step to save them, or to use External Artifacts to load them instead. {% endhint %} ### Linking existing data as a ZenML artifact Sometimes, data is produced completely outside of ZenML and can be conveniently stored on a given storage. A good example of this is the checkpoint files created as a side-effect of the Deep Learning model training. We know that the intermediate data of the deep learning frameworks is quite big and there is no good reason to move it around again and again, if it can be produced directly in the artifact store boundaries and later just linked to become an artifact of ZenML. Let's explore the Pytorch Lightning example to fit the model and store the checkpoints in a remote location. 
```python import os from zenml.client import Client from zenml import register_artifact from pytorch_lightning import Trainer from pytorch_lightning.callbacks import ModelCheckpoint from uuid import uuid4 # Define where the model data should be saved # use active ArtifactStore prefix = Client().active_stack.artifact_store.path # keep data separable for future runs with uuid4 folder default_root_dir = os.path.join(prefix, uuid4().hex) # Define the model and fit it model = ... trainer = Trainer( default_root_dir=default_root_dir, callbacks=[ ModelCheckpoint( every_n_epochs=1, save_top_k=-1, filename="checkpoint-{epoch:02d}" ) ], ) try: trainer.fit(model) finally: # We now link those checkpoints in ZenML as an artifact # This will create a new artifact version register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` {% hint style="info" %} The artifact produced from the preexisting data will have a `pathlib.Path` type, once loaded or passed as input to another step. {% endhint %} Even if an artifact is created and stored externally, it can be treated like any other artifact produced by ZenML steps - with all the functionalities described above! For more details and use-cases check-out detailed docs page [Register Existing Data as a ZenML Artifact](../../how-to/data-artifact-management/complex-usecases/registering-existing-data.md). ## Logging metadata for an artifact One of the most useful ways of interacting with artifacts in ZenML is the ability to associate metadata with them. [As mentioned before](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md#artifact-information), artifact metadata is an arbitrary dictionary of key-value pairs that are useful for understanding the nature of the data. As an example, one can associate the results of a model training alongside a model artifact, the shape of a table alongside a `pandas` dataframe, or the size of an image alongside a PNG file. For some artifacts, ZenML automatically logs metadata. As an example, for `pandas.Series` and `pandas.DataFrame` objects, ZenML logs the shape and size of the objects: {% tabs %} {% tab title="Python" %} ```python from zenml.client import Client # Get an artifact version (e.g. pd.DataFrame) artifact = Client().get_artifact_version('50ce903f-faa6-41f6-a95f-ff8c0ec66010') # Fetch it's metadata artifact.run_metadata["storage_size"].value # Size in bytes artifact.run_metadata["shape"].value # Shape e.g. (500,20) ``` {% endtab %} {% tab title="OSS (Dashboard)" %} The information regarding the metadata of an artifact can be found within the DAG visualizer interface on the OSS dashboard:

ZenML Artifact Control Plane.

{% endtab %} {% tab title="Cloud (Dashboard)" %} The [ZenML Pro](https://zenml.io/pro) dashboard offers advanced visualization features for artifact exploration, including a dedicated artifacts tab with metadata visualization:

ZenML Artifact Control Plane.

{% endtab %} {% endtabs %} A user can also add metadata to an artifact within a step directly using the `log_artifact_metadata` method: ```python from zenml import step, log_artifact_metadata @step def model_finetuner_step( model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray] ) -> Annotated[ ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"]) ]: """Finetunes a given model on a given dataset.""" model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata( # Artifact name can be omitted if step returns only one output artifact_name="my_model", # Passing None or omitting this will use the `latest` version version=None, # Metadata should be a dictionary of JSON-serializable values metadata={"accuracy": float(accuracy)} # A dictionary of dictionaries can also be passed to group metadata # in the dashboard # metadata = {"metrics": {"accuracy": accuracy}} ) return model ``` For further depth, there is an [advanced metadata logging guide](../../how-to/model-management-metrics/track-metrics-metadata/README.md) that goes more into detail about logging metadata in ZenML. Additionally, there is a lot more to learn about artifacts within ZenML. Please read the [dedicated data management guide](../../how-to/handle-data-artifacts/) for more information. ## Code example This section combines all the code from this section into one simple script that you can use easily:
Code Example of this Section ```python from typing import Optional, Tuple from typing_extensions import Annotated import numpy as np from sklearn.base import ClassifierMixin from sklearn.datasets import load_digits from sklearn.svm import SVC from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata from zenml import save_artifact, load_artifact from zenml.client import Client @step def versioned_data_loader_step() -> ( Annotated[ Tuple[np.ndarray, np.ndarray], ArtifactConfig( name="my_dataset", tags=["digits", "computer vision", "classification"], ), ] ): """Loads the digits dataset as a tuple of flattened numpy arrays.""" digits = load_digits() return (digits.images.reshape((len(digits.images), -1)), digits.target) @step def model_finetuner_step( model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray] ) -> Annotated[ ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"]), ]: """Finetunes a given model on a given dataset.""" model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline def model_finetuning_pipeline( dataset_version: Optional[str] = None, model_version: Optional[str] = None, ): client = Client() # Either load a previous version of "my_dataset" or create a new one if dataset_version: dataset = client.get_artifact_version( name_id_or_prefix="my_dataset", version=dataset_version ) else: dataset = versioned_data_loader_step() # Load the model to finetune # If no version is specified, the latest version of "my_model" is used model = client.get_artifact_version( name_id_or_prefix="my_model", version=model_version ) # Finetune the model # This automatically creates a new version of "my_model" model_finetuner_step(model=model, dataset=dataset) def main(): # Save an untrained model as first version of "my_model" untrained_model = SVC(gamma=0.001) save_artifact( untrained_model, name="my_model", version="1", tags=["SVC", "untrained"] ) # Create a first version of "my_dataset" and train the model on it model_finetuning_pipeline() # Finetune the latest model on an older version of the dataset model_finetuning_pipeline(dataset_version="1") # Run inference with the latest model on an older version of the dataset latest_trained_model = load_artifact("my_model") old_dataset = load_artifact("my_dataset", version="1") latest_trained_model.predict(old_dataset[0]) if __name__ == "__main__": main() ``` This would create the following pipeline run DAGs: **Run 1:** Create a first version of my_dataset **Run 2:** Uses a second version of my_dataset
================ File: docs/book/user-guide/starter-guide/README.md ================ --- icon: seedling description: Kickstart your journey into MLOps with the essentials of ZenML. --- # Starter guide Welcome to the ZenML Starter Guide! If you're an MLOps engineer aiming to build robust ML platforms, or a data scientist interested in leveraging the power of MLOps, this is the perfect place to begin. Our guide is designed to provide you with the foundational knowledge of the ZenML framework and equip you with the initial tools to manage the complexity of machine learning operations.

Embarking on MLOps can be intricate. ZenML simplifies the journey.

Throughout this guide, we'll cover essential topics including: * [Creating your first ML pipeline](create-an-ml-pipeline.md) * [Understanding caching between pipeline steps](cache-previous-executions.md) * [Managing data and data versioning](manage-artifacts.md) * [Tracking your machine learning models](track-ml-models.md) Before jumping in, make sure you have a Python environment ready and `virtualenv` installed to follow along with ease. By the end, you will have completed a [starter project](starter-project.md), marking the beginning of your journey into MLOps with ZenML. Let this guide be not only your introduction to ZenML but also a foundational asset in your MLOps toolkit. Prepare your development environment, and let's get started!
================ File: docs/book/user-guide/starter-guide/starter-project.md ================ --- description: Put your new knowledge into action with a simple starter project --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # A starter project By now, you have understood some of the basic pillars of an MLOps system: * [Pipelines and steps](create-an-ml-pipeline.md) * [Artifacts](manage-artifacts.md) * [Models](track-ml-models.md) We will now put this into action with a simple starter project. ## Get started Start with a fresh virtual environment with no dependencies. Then let's install our dependencies:

```bash
pip install "zenml[templates,server]" notebook
zenml integration install sklearn -y
```

We will then use [ZenML templates](../../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) to help us get the code we need for the project:

```bash
mkdir zenml_starter
cd zenml_starter
zenml init --template starter --template-with-defaults

# Just in case, we install the requirements again
pip install -r requirements.txt
```
Above doesn't work? Here is an alternative: the starter template is the same as the [ZenML mlops starter example](https://github.com/zenml-io/zenml/tree/main/examples/mlops_starter). You can clone it like so:

```bash
git clone --depth 1 git@github.com:zenml-io/zenml.git
cd zenml/examples/mlops_starter
pip install -r requirements.txt
zenml init
```
## What you'll learn You can either follow along in the [accompanying Jupyter notebook](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/quickstart.ipynb), or just keep reading the [README file for more instructions](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/README.md). Either way, by the end you will have run three example pipelines: * A feature engineering pipeline that loads data and prepares it for training. * A training pipeline that loads the preprocessed dataset and trains a model. * A batch inference pipeline that runs predictions on the trained model with new data. And voilà! You're now well on your way to becoming an MLOps expert. As a next step, try introducing the [ZenML starter template](https://github.com/zenml-io/template-starter) to your colleagues and see the benefits of a standard MLOps framework in action! ## Conclusion and next steps This marks the end of the first chapter of your MLOps journey with ZenML. Make sure you do your own experimentation with ZenML to master the basics. When ready, move on to the [production guide](../production-guide/), which is the next part of the series.
================ File: docs/book/user-guide/starter-guide/track-ml-models.md ================ --- description: Creating a full picture of an ML model using the Model Control Plane --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Track ML models ![Walkthrough of ZenML Model Control Plane (Dashboard available only on ZenML Pro)](../../.gitbook/assets/mcp_walkthrough.gif) As discussed in the [Core Concepts](../../getting-started/core-concepts.md), ZenML also contains the notion of a `Model`, which consists of many model versions (the iterations of the model). These concepts are exposed in the `Model Control Plane` (MCP for short). ## What is a ZenML Model? Before diving in, let's take some time to build an understanding of what we mean when we say `Model` in ZenML terms. A `Model` is simply an entity that groups pipelines, artifacts, metadata, and other crucial business data into a unified entity. In this sense, a ZenML Model is a concept that more broadly encapsulates your ML product's business logic. You may even think of a ZenML Model as a "project" or a "workspace". {% hint style="warning" %} Please note that one of the most common artifacts associated with a Model in ZenML is the so-called technical model, which is the actual model file or files that hold the weights and parameters produced by a machine learning training run. However, this is not the only artifact that is relevant; artifacts such as the training data and the predictions this model produces in production are also linked inside a ZenML Model. {% endhint %} Models are first-class citizens in ZenML, and as such, viewing and using them is unified and centralized in the ZenML API and the ZenML client, as well as on the [ZenML Pro](https://zenml.io/pro) dashboard. These models can be viewed within ZenML: {% tabs %} {% tab title="OSS (CLI)" %} `zenml model list` can be used to list all models. {% endtab %} {% tab title="Cloud (Dashboard)" %} The [ZenML Pro](https://zenml.io/pro) dashboard has additional capabilities that include visualizing these models in the dashboard.

ZenML Model Control Plane.

{% endtab %} {% endtabs %} ## Configuring a model in a pipeline The easiest way to use a ZenML model is to pass a `Model` object as part of a pipeline run. This can be done easily at a pipeline or a step level, or via a [YAML config](../production-guide/configure-pipeline.md). Once you configure a pipeline this way, **all** artifacts generated during pipeline runs are automatically **linked** to the specified model. This connecting of artifacts provides lineage tracking and transparency into what data and models are used during training, evaluation, and inference. ```python from zenml import pipeline from zenml import Model model = Model( # The name uniquely identifies this model # It usually represents the business use case name="iris_classifier", # The version specifies the version # If None or an unseen version is specified, it will be created # Otherwise, a version will be fetched. version=None, # Some other properties may be specified license="Apache 2.0", description="A classification model for the iris dataset.", ) # The step configuration will take precedence over the pipeline @step(model=model) def svc_trainer(...) -> ...: ... # This configures it for all steps within the pipeline @pipeline(model=model) def training_pipeline(gamma: float = 0.002): # Now this pipeline will have the `iris_classifier` model active. X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() # In the YAML the same can be done; in this case, the # passing to the decorators is not needed # model: # name: iris_classifier # license: "Apache 2.0" # description: "A classification model for the iris dataset." ``` The above will establish a **link between all artifacts that pass through this ZenML pipeline and this model**. This includes the **technical model** which is what comes out of the `svc_trainer` step. You will be able to see all associated artifacts and pipeline runs, all within one view. Furthermore, this pipeline run and all other pipeline runs that are configured with this model configuration will be linked to this model as well. You can see all versions of a model, and associated artifacts and run like this: {% tabs %} {% tab title="OSS (CLI)" %} `zenml model version list ` can be used to list all versions of a particular model. The following commands can be used to list the various pipeline runs associated with a model: * `zenml model version runs ` The following commands can be used to list the various artifacts associated with a model: * `zenml model version data_artifacts ` * `zenml model version model_artifacts ` * `zenml model version deployment_artifacts ` {% endtab %} {% tab title="Cloud (Dashboard)" %} The [ZenML Pro](https://zenml.io/pro) dashboard has additional capabilities, that include visualizing all associated runs and artifacts for a model version:
ZenML Model Versions List.
{% endtab %} {% endtabs %} ## Fetching the model in a pipeline When configured at the pipeline or step level, the model will be available through the [StepContext](../../how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md) or [PipelineContext](../../how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md). ```python from zenml import get_step_context, get_pipeline_context, step, pipeline @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Annotated[ClassifierMixin, "trained_model"]: # This will return the model specified in the # @pipeline decorator. In this case, the production version of # the `iris_classifier` will be returned in this case. model = get_step_context().model ... @pipeline( model=Model( # The name uniquely identifies this model name="iris_classifier", # Pass the stage you want to get the right model version="production", ), ) def training_pipeline(gamma: float = 0.002): # Now this pipeline will have the production `iris_classifier` model active. model = get_pipeline_context().model X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) ``` ## Logging metadata to the `Model` object [Just as one can associate metadata with artifacts](manage-artifacts.md#logging-metadata-for-an-artifact), models too can take a dictionary of key-value pairs to capture their metadata. This is achieved using the `log_model_metadata` method: ```python from zenml import get_step_context, step, log_model_metadata @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Annotated[ClassifierMixin, "sklearn_classifier"],: # Train and score model ... model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) model = get_step_context().model log_model_metadata( # Model name can be omitted if specified in the step or pipeline context model_name="iris_classifier", # Passing None or omitting this will use the `latest` version version=None, # Metadata should be a dictionary of JSON-serializable values metadata={"accuracy": float(accuracy)} # A dictionary of dictionaries can also be passed to group metadata # in the dashboard # metadata = {"metrics": {"accuracy": accuracy}} ) ``` {% tabs %} {% tab title="Python" %} ```python from zenml.client import Client # Get an artifact version (in this the latest `iris_classifier`) model_version = Client().get_model_version('iris_classifier') # Fetch it's metadata model_version.run_metadata["accuracy"].value ``` {% endtab %} {% tab title="Cloud (Dashboard)" %} The [ZenML Pro](https://zenml.io/pro) dashboard offers advanced visualization features for artifact exploration, including a dedicated artifacts tab with metadata visualization:

ZenML Artifact Control Plane.

{% endtab %} {% endtabs %} Choosing [log metadata with artifacts](manage-artifacts.md#logging-metadata-for-an-artifact) or model versions depends on the scope and purpose of the information you wish to capture. Artifact metadata is best for details specific to individual outputs, while model version metadata is suitable for broader information relevant to the overall model. By utilizing ZenML's metadata logging capabilities and special types, you can enhance the traceability, reproducibility, and analysis of your ML workflows. Once metadata has been logged to a model, we can retrieve it easily with the client: ```python from zenml.client import Client client = Client() model = client.get_model_version("my_model", "my_version") print(model.run_metadata["metadata_key"].value) ``` For further depth, there is an [advanced metadata logging guide](../../how-to/model-management-metrics/track-metrics-metadata/README.md) that goes more into detail about logging metadata in ZenML. ## Using the stages of a model A model's versions can exist in various stages. These are meant to signify their lifecycle state: * `staging`: This version is staged for production. * `production`: This version is running in a production setting. * `latest`: The latest version of the model. * `archived`: This is archived and no longer relevant. This stage occurs when a model moves out of any other stage. {% tabs %} {% tab title="Python SDK" %} ```python from zenml import Model # Get the latest version of a model model = Model( name="iris_classifier", version="latest" ) # Get `my_version` version of a model model = Model( name="iris_classifier", version="my_version", ) # Pass the stage into the version field # to get the `staging` model model = Model( name="iris_classifier", version="staging", ) # This will set this version to production model.set_stage(stage="production", force=True) ``` {% endtab %} {% tab title="CLI" %} ```shell # List staging models zenml model version list --stage staging # Update to production zenml model version update -s production ``` {% endtab %} {% tab title="Cloud (Dashboard)" %} The [ZenML Pro](https://zenml.io/pro) dashboard has additional capabilities, that include easily changing the stage: ![ZenML Pro Transition Model Stages](../../.gitbook/assets/dcp\_transition\_stage.gif) {% endtab %} {% endtabs %} ZenML Model and versions are some of the most powerful features in ZenML. To understand them in a deeper way, read the [dedicated Model Management](../../how-to/model-management-metrics/model-control-plane/README.md) guide.
This file is a merged representation of the entire codebase, combining all repository files into a single document. Generated by Repomix on: 2025-02-06T16:56:10.199Z ================================================================ Directory Structure ================================================================ docs/ book/ getting-started/ deploying-zenml/ custom-secret-stores.md deploy-using-huggingface-spaces.md deploy-with-custom-image.md README.md secret-management.md zenml-pro/ core-concepts.md organization.md pro-api.md README.md roles.md teams.md tenants.md core-concepts.md installation.md system-architectures.md ================================================================ Files ================================================================ ================ File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md ================ --- description: Learning how to develop a custom secret store. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Custom secret stores The secrets store acts as the one-stop shop for all the secrets to which your pipeline or stack components might need access. It is responsible for storing, updating and deleting _only the secrets values_ for ZenML secrets, while the ZenML secret metadata is stored in the SQL database. The secrets store interface implemented by all available secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` core module and looks more or less like this: ```python class SecretsStoreInterface(ABC): """ZenML secrets store interface. All ZenML secrets stores must implement the methods in this interface. """ # --------------------------------- # Initialization and configuration # --------------------------------- @abstractmethod def _initialize(self) -> None: """Initialize the secrets store. This method is called immediately after the secrets store is created. It should be used to set up the backend (database, connection etc.). 
""" # --------- # Secrets # --------- @abstractmethod def store_secret_values( self, secret_id: UUID, secret_values: Dict[str, str], ) -> None: """Store secret values for a new secret. Args: secret_id: ID of the secret. secret_values: Values for the secret. """ @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: """Get the secret values for an existing secret. Args: secret_id: ID of the secret. Returns: The secret values. Raises: KeyError: if no secret values for the given ID are stored in the secrets store. """ @abstractmethod def update_secret_values( self, secret_id: UUID, secret_values: Dict[str, str], ) -> None: """Updates secret values for an existing secret. Args: secret_id: The ID of the secret to be updated. secret_values: The new secret values. Raises: KeyError: if no secret values for the given ID are stored in the secrets store. """ @abstractmethod def delete_secret_values(self, secret_id: UUID) -> None: """Deletes secret values for an existing secret. Args: secret_id: The ID of the secret. Raises: KeyError: if no secret values for the given ID are stored in the secrets store. """ ``` {% hint style="info" %} This is a slimmed-down version of the real interface which aims to highlight the abstraction layer. In order to see the full definition and get the complete docstrings, please check the [SDK docs](https://sdkdocs.zenml.io/latest/core\_code\_docs/core-zen\_stores/#zenml.zen\_stores.secrets\_stores.secrets\_store\_interface.SecretsStoreInterface) . {% endhint %} ## Build your own custom secrets store If you want to create your own custom secrets store implementation, you can follow the following steps: 1. Create a class that inherits from the `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` base class and implements the `abstractmethod`s shown in the interface above. Use `SecretsStoreType.CUSTOM` as the `TYPE` value for your secrets store class. 2. If you need to provide any configuration, create a class that inherits from the `SecretsStoreConfiguration` class and add your configuration parameters there. Use that as the `CONFIG_TYPE` value for your secrets store class. 3. To configure the ZenML server to use your custom secrets store, make sure your code is available in the container image that is used to run the ZenML server. Then, use environment variables or helm chart values to configure the ZenML server to use your custom secrets store, as covered in the [deployment guide](./README.md).
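To make the three steps above concrete, here is a minimal sketch of what such an implementation could look like. It is illustrative only: the `InMemorySecretsStore` class and its configuration are hypothetical, the import paths for `SecretsStoreConfiguration` and `SecretsStoreType` are assumptions that may differ between ZenML versions (check the SDK docs), and an in-memory dictionary is obviously not a production-grade back-end.

```python
from typing import ClassVar, Dict, Type
from uuid import UUID

# NOTE: these import paths are assumptions -- verify them against the SDK docs
# for the ZenML version you are deploying.
from zenml.config.secrets_store_config import SecretsStoreConfiguration
from zenml.enums import SecretsStoreType
from zenml.zen_stores.secrets_stores.base_secrets_store import BaseSecretsStore

# Toy storage back-end: a module-level dict (values are lost on restart).
_SECRETS: Dict[UUID, Dict[str, str]] = {}


class InMemorySecretsStoreConfiguration(SecretsStoreConfiguration):
    """Configuration for the hypothetical in-memory secrets store."""

    type: SecretsStoreType = SecretsStoreType.CUSTOM
    # Add any parameters your real back-end needs here (endpoint, region, ...).


class InMemorySecretsStore(BaseSecretsStore):
    """Illustrative secrets store that keeps secret values in memory."""

    config: InMemorySecretsStoreConfiguration
    TYPE: ClassVar[SecretsStoreType] = SecretsStoreType.CUSTOM
    CONFIG_TYPE: ClassVar[Type[SecretsStoreConfiguration]] = (
        InMemorySecretsStoreConfiguration
    )

    def _initialize(self) -> None:
        # Set up the back-end; nothing to do for a plain dict.
        pass

    def store_secret_values(
        self, secret_id: UUID, secret_values: Dict[str, str]
    ) -> None:
        # Store a copy so later mutations by the caller don't leak in.
        _SECRETS[secret_id] = dict(secret_values)

    def get_secret_values(self, secret_id: UUID) -> Dict[str, str]:
        if secret_id not in _SECRETS:
            raise KeyError(f"No secret values stored for ID {secret_id}")
        return dict(_SECRETS[secret_id])

    def update_secret_values(
        self, secret_id: UUID, secret_values: Dict[str, str]
    ) -> None:
        if secret_id not in _SECRETS:
            raise KeyError(f"No secret values stored for ID {secret_id}")
        _SECRETS[secret_id] = dict(secret_values)

    def delete_secret_values(self, secret_id: UUID) -> None:
        if secret_id not in _SECRETS:
            raise KeyError(f"No secret values stored for ID {secret_id}")
        del _SECRETS[secret_id]
```

In a real implementation, the dictionary operations would be replaced by calls to your secrets back-end's client, and the configuration class would carry whatever settings are needed to authenticate with it.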
================ File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md ================ --- description: Deploying ZenML to Huggingface Spaces. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Deploy using HuggingFace Spaces A quick way to deploy ZenML and get started is to use [HuggingFace Spaces](https://huggingface.co/spaces). HuggingFace Spaces is a platform for hosting and sharing ML projects and workflows, and it also works to deploy ZenML. You can be up and running in minutes (for free) with a hosted ZenML server, so it's a good option if you want to try out ZenML without any infrastructure overhead. {% hint style="info" %} If you are planning to use HuggingFace Spaces for production use, make sure you have [persistent storage turned on](https://huggingface.co/docs/hub/en/spaces-storage) so as to prevent loss of data. See our [other deployment options](./README.md) if you want alternative options. {% endhint %} ![ZenML on HuggingFace Spaces -- default deployment](../../.gitbook/assets/hf-spaces-chart.png) In this diagram, you can see what the default deployment of ZenML on HuggingFace looks like. ## Deploying ZenML on HuggingFace Spaces You can deploy ZenML on HuggingFace Spaces with just a few clicks: [![](https://huggingface.co/datasets/huggingface/badges/raw/main/deploy-to-spaces-lg.svg)](https://huggingface.co/new-space?template=zenml/zenml) To set up your ZenML app, you need to specify three main components: the Owner (either your personal account or an organization), a Space name, and the Visibility (a bit lower down the page). Note that the space visibility needs to be set to 'Public' if you wish to connect to the ZenML server from your local machine. ![HuggingFace Spaces SDK interface](../../.gitbook/assets/hf-spaces-sdk.png) You have the option here to select a higher-tier machine to use for your server. The advantage of selecting a paid CPU instance is that it is not subject to auto-shutdown policies and thus will stay up as long as you leave it up. In order to make use of a persistent CPU, you'll likely want to create and set up a MySQL database to connect to (see below). To personalize your Space's appearance, such as the title, emojis, and colors, navigate to "Files and Versions" and modify the metadata in your README.md file. Full information on Spaces configuration parameters can be found on the HuggingFace [documentation reference guide](https://huggingface.co/docs/hub/spaces-config-reference). After creating your Space, you'll notice a 'Building' status along with logs displayed on the screen. When this switches to 'Running', your Space is ready for use. If the ZenML login UI isn't visible, try refreshing the page. In the upper-right hand corner of your space you'll see a button with three dots which, when you click on it, will offer you a menu option to "Embed this Space". (See [the HuggingFace documentation](https://huggingface.co/docs/hub/spaces-embed) for more details on this feature.) Copy the "Direct URL" shown in the box that you can now see on the screen. This should look something like this: `https://-.hf.space`. Open that URL and follow the instructions to initialize your ZenML server and set up an initial admin user account. ## Connecting to your ZenML Server from your local machine Once you have your ZenML server up and running, you can connect to it from your local machine. 
To do this, you'll need to get your Space's 'Direct URL' (see above). {% hint style="warning" %} Your Space's URL will only be available and usable for connecting from your local machine if the visibility of the space is set to 'Public'. {% endhint %} You can use the 'Direct URL' to connect to your ZenML server from your local machine with the following CLI command (after installing ZenML, and using your custom URL instead of the placeholder): ```shell zenml login '' ``` You can also use the Direct URL in your browser to use the ZenML dashboard as a fullscreen application (i.e. without the HuggingFace Spaces wrapper around it). ## Extra configuration options By default, the ZenML application will be configured to use an SQLite non-persistent database. If you want to use a persistent database, you can configure this by amending the `Dockerfile` to your Space's root directory. For full details on the various parameters you can change, see [our reference documentation](deploy-with-docker.md#advanced-server-configuration-options) on configuring ZenML when deployed with Docker. {% hint style="info" %} If you are using the space just for testing and experimentation, you don't need to make any changes to the configuration. Everything will work out of the box. {% endhint %} You can also use an external secrets backend together with your HuggingFace Spaces as described in [our documentation](deploy-with-docker.md#advanced-server-configuration-options). You should be sure to use HuggingFace's inbuilt ' Repository secrets' functionality to configure any secrets you need to use in your`Dockerfile` configuration. [See the documentation](https://huggingface.co/docs/hub/spaces-sdks-docker#secret-management) for more details on how to set this up. {% hint style="warning" %} If you wish to use a cloud secrets backend together with ZenML for secrets management, **you must update your password** on your ZenML Server on the Dashboard. This is because the default user created by the HuggingFace Spaces deployment process has no password assigned to it and as the Space is publicly accessible (since the Space is public) _potentially anyone could access your secrets without this extra step_. To change your password navigate to the Settings page by clicking the button in the upper right-hand corner of the Dashboard and then click 'Update Password'. {% endhint %} ## Troubleshooting If you are having trouble with your ZenML server on HuggingFace Spaces, you can view the logs by clicking on the "Open Logs" button at the top of the space. This will give you more context of what's happening with your server. If you have any other issues, please feel free to reach out to us on our [Slack channel](https://zenml.io/slack/) for more support. ## Upgrading your ZenML Server on HF Spaces The default space will use the latest version of ZenML automatically. If you want to update your version, you can simply select the 'Factory reboot' option within the 'Settings' tab of the space. Note that this will wipe any data contained within the space and so if you are not using a MySQL persistent database (as described above) you will lose any data contained within your ZenML deployment on the space. You can also configure the space to use an earlier version by updating the `Dockerfile`'s `FROM` import statement at the very top.
================ File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md ================ --- description: Deploying ZenML with custom Docker images. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Deploy with custom images In most cases, deploying ZenML with the default `zenmlhub/zenml-server` Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image: * You have implemented a custom artifact store for which you want to enable [artifact visualizations](../../how-to/data-artifact-management/visualize-artifacts/README.md) or [step logs](../../../how-to/setting-up-a-project-repository/best-practices.md#logging) in your dashboard. * You have forked the ZenML repository and want to deploy a ZenML server based on your own fork because you made changes to the server / database logic. {% hint style="warning" %} Deploying ZenML with custom Docker images is only possible for [Docker](deploy-with-docker.md) or [Helm](deploy-with-helm.md) deployments. {% endhint %} ### Build and Push Custom ZenML Server Docker Image Here is how you can build a custom ZenML server Docker image: 1. Set up a container registry of your choice. E.g., as an indivial developer you could create a free [Docker Hub](https://hub.docker.com/) account and then set up a free Docker Hub repository. 2. Clone ZenML (or your ZenML fork) and checkout the branch that you want to deploy, e.g., if you want to deploy ZenML version 0.41.0, run ```bash git checkout release/0.41.0 ``` 3. Copy the [ZenML base.Dockerfile](https://github.com/zenml-io/zenml/blob/main/docker/base.Dockerfile), e.g.: ```bash cp docker/base.Dockerfile docker/custom.Dockerfile ``` 4. Modify the copied Dockerfile: * Add additional dependencies: ```bash RUN pip install ``` * (Forks only) install local files instead of official ZenML: ```bash RUN pip install -e .[server,secrets-aws,secrets-gcp,secrets-azure,secrets-hashicorp,s3fs,gcsfs,adlfs,connectors-aws,connectors-gcp,connectors-azure] ``` 5. Build and push an image based on your Dockerfile: ```bash docker build -f docker/custom.Dockerfile . -t /: --platform linux/amd64 docker push /: ``` {% hint style="info" %} If you want to verify your custom image locally, you can follow the [Deploy a custom ZenML image via Docker](deploy-with-custom-image.md#deploy-a-custom-zenml-image-via-docker) section below to deploy the ZenML server locally first. {% endhint %} ### Deploy ZenML with your custom image Next, adjust your preferred deployment strategy to use the custom Docker image you just built. #### Deploy a custom ZenML image via Docker To deploy your custom image via Docker, first familiarize yourself with the general [ZenML Docker Deployment Guide](deploy-with-docker.md). To use your own image, follow the general guide step by step but replace all mentions of `zenmldocker/zenml-server` with your custom image reference `/:`. E.g.: * To run the ZenML server with Docker based on your custom image, do ```bash docker run -it -d -p 8080:8080 --name zenml /: ``` * To use `docker-compose`, adjust your `docker-compose.yml`: ```yaml services: zenml: image: /: ``` #### Deploy a custom ZenML image via Helm To deploy your custom image via Helm, first familiarize yourself with the general [ZenML Helm Deployment Guide](deploy-with-helm.md). 
To use your own image, the only thing you need to do differently is to modify the `image` section of your `values.yaml` file: ```yaml zenml: image: repository: / tag: ```
================ File: docs/book/getting-started/deploying-zenml/README.md ================ --- icon: rocket-launch description: Why do we need to deploy ZenML? --- # Deploying ZenML ![ZenML OSS server deployment architecture](../../.gitbook/assets/oss_simple_deployment.png) Moving your ZenML Server to a production environment offers several benefits over staying local: 1. **Scalability**: Production environments are designed to handle large-scale workloads, allowing your models to process more data and deliver faster results. 2. **Reliability**: Production-grade infrastructure ensures high availability and fault tolerance, minimizing downtime and ensuring consistent performance. 3. **Collaboration**: A shared production environment enables seamless collaboration between team members, making it easier to iterate on models and share insights. Despite these advantages, transitioning to production can be challenging due to the complexities involved in setting up the needed infrastructure. ## Components A ZenML deployment consists of multiple infrastructure components: - [FastAPI server](https://github.com/zenml-io/zenml/tree/main/src/zenml/zen_server) backed with a SQLite or MySQL database - [Python Client](https://github.com/zenml-io/zenml/tree/main/src/zenml) - An [open-source companion ReactJS](https://github.com/zenml-io/zenml-dashboard) dashboard - (Optional) [ZenML Pro API + Database + ZenML Pro dashboard](../system-architectures.md) You can read more in-depth about the system architecture of ZenML [here](../system-architectures.md). This documentation page will focus on the components required to deploy ZenML OSS.
Details on the ZenML Python Client The ZenML client is a Python package that you can install on your machine. It is used to interact with the ZenML server. You can install it using the `pip` command as outlined [here](../installation.md). This Python package gives you [the `zenml` command-line interface](https://sdkdocs.zenml.io/latest/cli/) which you can use to interact with the ZenML server for common tasks like managing stacks, setting up secrets, and so on. It also gives you the general framework that lets you [author and deploy pipelines](../../user-guide/starter-guide/README.md) and so forth. If you want to have more fine-grained control and access to the metadata that ZenML manages, you can use the Python SDK to access the API. This allows you to create your own custom automations and scripts and is the most common way teams access the metadata stored in the ZenML server. The full documentation for the Python SDK can be found [here](https://sdkdocs.zenml.io/latest/). The full HTTP [API documentation](../../reference/api-reference.md) can also be found by adding the `/doc` suffix to the URL when accessing your deployed ZenML server.
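As a small illustration of that programmatic access, the sketch below uses the `Client` to read back some of the metadata a deployed server stores. It assumes you have already logged in with the CLI, and the artifact name `iris_dataset` is just an example carried over from the starter guide; any artifact registered on your server works the same way.

```python
from zenml.client import Client

# Assumes the client is already connected to a server,
# e.g. via `zenml login <your-server-url>` or `zenml login --local`.
client = Client()

# Which stack will pipelines run on right now?
print(f"Active stack: {client.active_stack.name}")

# Fetch the latest version of an artifact by name and inspect the
# metadata ZenML tracked for it.
artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
print(f"Version: {artifact.version}")
print(f"Storage size (bytes): {artifact.run_metadata['storage_size'].value}")

# Materialize the underlying data if you need it locally
# (here assumed to be a pandas DataFrame).
dataset = artifact.load()
print(dataset.shape)
```

The same client calls work from inside pipeline steps, from notebooks, or from standalone automation scripts.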
### Deployment scenarios When you first get started with ZenML, you have the following architecture on your machine. ![ZenML default local configuration](../../.gitbook/assets/Scenario1.png) The SQLite database that you can see in this diagram is used to store information about pipelines, pipeline runs, stacks, and other configurations. This default setup allows you to get started and try out the core features but you won't be able to use cloud-based components like serverless orchestrators and so on. Users can run the `zenml login --local` command to spin up a local ZenML OSS server to serve the dashboard. For the local OSS server option, the `zenml login --local` command implicitly connects the client to the server. The diagram for this looks as follows: ![ZenML with a local ZenML OSS Server](../../.gitbook/assets/Scenario2.png) In order to move into production, the ZenML server needs to be deployed somewhere centrally so that the different cloud stack components can read from and write to the server. Additionally, this also allows all your team members to connect to it and share stacks and pipelines. ![ZenML centrally deployed for multiple users](../../.gitbook/assets/Scenario3.2.png) You connect to your deployed ZenML server using the `zenml login` command and then you have the full benefits and power of ZenML. You can use all the cloud-based components, your metadata will be stored and synchronized across all the users of the server and you can leverage features like centralized logs storage and pipeline artifact visualization. ## How to deploy ZenML Deploying the ZenML Server is a crucial step towards transitioning to a production-grade environment for your machine learning projects. By setting up a deployed ZenML Server instance, you gain access to powerful features, allowing you to use stacks with remote components, centrally track progress, collaborate effectively, and achieve reproducible results. Currently, there are two main options to access a deployed ZenML server: 1. **Managed deployment:** With [ZenML Pro](../zenml-pro/README.md) offering you can utilize a control plane to create ZenML servers, also known as [tenants](../zenml-pro/tenants.md). These tenants are managed and maintained by ZenML's dedicated team, alleviating the burden of server management from your end. Importantly, your data remains securely within your stack, and ZenML's role is primarily to handle tracking of metadata and server maintenance. 2. **Self-hosted Deployment:** Alternatively, you have the ability to deploy ZenML on your own self-hosted environment. This can be achieved through various methods, including using [Docker](./deploy-with-docker.md), [Helm](./deploy-with-helm.md), or [HuggingFace Spaces](./deploy-using-huggingface-spaces.md). We also offer our Pro version for self-hosted deployments, so you can use our full paid feature-set while staying fully in control with an air-gapped solution on your infrastructure. Both options offer distinct advantages, allowing you to choose the deployment approach that best aligns with your organization's needs and infrastructure preferences. Whichever path you select, ZenML facilitates a seamless and efficient way to take advantage of the ZenML Server and enhance your machine learning workflows for production-level success. ### Options for deploying ZenML Documentation for the various deployment strategies can be found in the following pages below (in our 'how-to' guides):
* [Deploying ZenML using ZenML Pro](deploy-with-zenml-cli.md): Deploying ZenML using ZenML Pro.
* [Deploy with Docker](deploy-with-docker.md): Deploying ZenML in a Docker container.
* [Deploy with Helm](deploy-with-helm.md): Deploying ZenML in a Kubernetes cluster with Helm.
* [Deploy with HuggingFace Spaces](deploy-with-hugging-face-spaces.md): Deploying ZenML to Hugging Face Spaces.
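Once your client is connected to a deployed server (for example via `zenml login`), you can double-check the connection from Python. The sketch below is illustrative and relies on the `Client.zen_store` accessor; attribute names may differ slightly between ZenML versions.

```python
from zenml.client import Client

client = Client()

# Ask the store the client is configured against for basic server information.
server_info = client.zen_store.get_store_info()
print(f"Connected to ZenML server version {server_info.version}")
print(f"Server URL: {client.zen_store.url}")
```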
================ File: docs/book/getting-started/deploying-zenml/secret-management.md ================ --- description: Configuring the secrets store. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Secret store configuration and management ## Centralized secrets store ZenML provides a centralized secrets management system that allows you to register and manage secrets in a secure way. The metadata of the ZenML secrets (e.g. name, ID, owner, scope etc.) is always stored in the ZenML server database, while the actual secret values are stored and managed separately, through the ZenML Secrets Store. This allows for a flexible deployment strategy that meets the security and compliance requirements of your organization. In a local ZenML deployment, secret values are also stored in the local SQLite database. When connected to a remote ZenML server, the secret values are stored in the secrets management back-end that the server's Secrets Store is configured to use, while all access to the secrets is done through the ZenML server API.

*Basic Secrets Store Architecture*
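As a quick illustration of the user-facing side of this, the sketch below registers a secret and reads it back through the Python client. The secret name and values are made up for the example; the calls follow the public `Client` secret methods.

```python
from zenml.client import Client

client = Client()

# Register a secret: only its metadata is stored in the ZenML database,
# while the values go to the configured secrets store back-end.
client.create_secret(
    name="postgres_credentials",
    values={"username": "admin", "password": "supersecret"},
)

# Read the secret back later, e.g. from a pipeline step or a script.
secret = client.get_secret("postgres_credentials")
print(secret.secret_values["username"])
```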

Currently, the ZenML server can be configured to use one of the following supported secrets store back-ends: * the same SQL database that the ZenML server is using to store secrets metadata as well as other managed objects such as pipelines, stacks, etc. This is the default option. * the AWS Secrets Manager * the GCP Secret Manager * the Azure Key Vault * the HashiCorp Vault * a custom secrets store back-end implementation is also supported ## Configuration and deployment Configuring the specific secrets store back-end that the ZenML server uses is done at deployment time. This involves deciding on one of the supported back-ends and authentication mechanisms and configuring the ZenML server with the necessary credentials to authenticate with the back-end. The ZenML secrets store reuses the [ZenML Service Connector](../../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md) authentication mechanisms to authenticate with the secrets store back-end. This means that the same authentication methods and configuration parameters that are supported by the available Service Connectors are also reflected in the ZenML secrets store configuration. It is recommended to practice the principle of least privilege when configuring the ZenML secrets store and to use credentials with the documented minimum required permissions to access the secrets store back-end. The ZenML secrets store configured for the ZenML Server can be updated at any time by updating the ZenML Server configuration and redeploying the server. This allows you to easily switch between different secrets store back-ends and authentication mechanisms. However, it is recommended to follow [the documented secret store migration strategy](secret-management.md#secrets-migration-strategy) to minimize downtime and to ensure that existing secrets are also properly migrated, in case the location where secrets are stored in the back-end changes. For more information on how to deploy a ZenML server and configure the secrets store back-end, refer to your deployment strategy inside the deployment guide. ## Backup secrets store The ZenML Server deployment may be configured to optionally connect to _a second Secrets Store_ to provide additional features such as high-availability, backup and disaster recovery as well as an intermediate step in the process of migrating [secrets from one secrets store location to another](secret-management.md#secrets-migration-strategy). For example, the primary Secrets Store may be configured to use the internal database, while the backup Secrets Store may be configured to use the AWS Secrets Manager. Or two different AWS Secrets Manager accounts or regions may be used. {% hint style="warning" %} Always make sure that the backup Secrets Store is configured to use a different location than the primary Secrets Store. The location can be different in terms of the Secrets Store back-end type (e.g. internal database vs. AWS Secrets Manager) or the actual location of the Secrets Store back-end (e.g. different AWS Secrets Manager account or region, GCP Secret Manager project or Azure Key Vault's vault). Using the same location for both the primary and backup Secrets Store will not provide any additional benefits and may even result in unexpected behavior. {% endhint %} When a backup secrets store is in use, the ZenML Server will always attempt to read and write secret values from/to the primary Secrets Store first while ensuring to keep the backup Secrets Store in sync. 
If the primary Secrets Store is unreachable, if the secret values are not found there or any otherwise unexpected error occurs, the ZenML Server falls back to reading and writing from/to the backup Secrets Store. Only if the backup Secrets Store is also unavailable, the ZenML Server will return an error. In addition to the hidden backup operations, users can also explicitly trigger a backup operation by using the `zenml secret backup` CLI command. This command will attempt to read all secrets from the primary Secrets Store and write them to the backup Secrets Store. Similarly, the `zenml secret restore` CLI command can be used to restore secrets from the backup Secrets Store to the primary Secrets Store. These CLI commands are useful for migrating secrets from one Secrets Store to another. ## Secrets migration strategy Sometimes you may need to change the external provider or location where secrets values are stored by the Secrets Store. The immediate implication of this is that the ZenML server will no longer be able to access existing secrets with the new configuration until they are also manually copied to the new location. Some examples of such changes include: * switching Secrets Store back-end types (e.g. from internal SQL database to AWS Secrets Manager or Azure Key Vault) * switching back-end locations (e.g. changing the AWS Secrets Manager account or region, GCP Secret Manager project or Azure Key Vault's vault). In such cases, it is not sufficient to simply reconfigure and redeploy the ZenML server with the new Secrets Store configuration. This is because the ZenML server will not automatically migrate existing secrets to the new location. Instead, you should follow a specific migration strategy to ensure that existing secrets are also properly migrated to the new location with minimal, even zero downtime. The secrets migration process makes use of the fact that [a secondary Secrets Store](secret-management.md#backup-secrets-store) can be configured for the ZenML server for backup purposes. This secondary Secrets Store is used as an intermediate step in the migration process. The migration process is as follows (we'll refer to the Secrets Store that is currently in use as _Secrets Store A_ and the Secrets Store that will be used after the migration as _Secrets Store B_): 1. Re-configure the ZenML server to use _Secrets Store B_ as the secondary Secrets Store. 2. Re-deploy the ZenML server. 3. Use the `zenml secret backup` CLI command to back up all secrets from _Secrets Store A_ to _Secrets Store B_. You don't have to worry about secrets that are created or updated by users during or after this process, as they will be automatically backed up to _Secrets Store B_. If you also wish to delete secrets from _Secrets Store A_ after they are successfully backed up to _Secrets Store B_, you should run `zenml secret backup --delete-secrets` instead. 4. Re-configure the ZenML server to use _Secrets Store B_ as the primary Secrets Store and remove _Secrets Store A_ as the secondary Secrets Store. 5. Re-deploy the ZenML server. This migration strategy is not necessary if the actual location of the secrets values in the Secrets Store back-end does not change. For example: * updating the credentials used to authenticate with the Secrets Store back-end before or after they expire * switching to a different authentication method to authenticate with the same Secrets Store back-end (e.g. 
switching from an IAM account secret key to an IAM role in the AWS Secrets Manager) If you are a [ZenML Pro](https://zenml.io/pro) user, you can configure your cloud backend based on your [deployment scenario](../system-architectures.md).
================ File: docs/book/getting-started/zenml-pro/core-concepts.md ================ # ZenML Pro Core Concepts In ZenML Pro, there is a slightly different entity hierarchy as compared to the open-source ZenML framework. This document walks you through the key differences and new concepts that are only available for Pro users. ![Image showing the entity hierarchy in ZenML Pro](../../.gitbook/assets/org_hierarchy_pro.png) The image above shows the hierarchy of concepts in ZenML Pro. - At the top level is your [**Organization**](./organization.md). An organization is a collection of users, teams, and tenants. - Each [**Tenant**](./tenants.md) is an isolated deployment of a ZenML server. It contains all the resources for your project or team. - [**Teams**](./teams.md) are groups of users within an organization. They help in organizing users and managing access to resources. - **Users** are single individual accounts on a ZenML Pro instance. - [**Roles**](./roles.md) are used to control what actions users can perform within a tenant or inside an organization. - [**Templates**](../../how-to/trigger-pipelines/README.md) are pipeline runs that can be re-run with a different configuration. More details about each of these concepts are available in their linked pages below:
* [Organizations](organization.md): Learn about managing organizations in ZenML Pro.
* [Tenants](tenants.md): Understand how to work with tenants in ZenML Pro.
* [Teams](teams.md): Explore team management in ZenML Pro.
* [Roles & Permissions](roles.md): Learn about role-based access control in ZenML Pro.
================ File: docs/book/getting-started/zenml-pro/organization.md ================ # Organizations ZenML Pro arranges various aspects of your work experience around the concept of an **Organization**. This is the top-most level structure within the ZenML Cloud environment. Generally, an organization contains a group of users and one or more [tenants](./tenants.md). ## Inviting Team Members to Your Organization Inviting users to your organization to work on the organization's tenants is easy. Simply click `Add Member` in the Organization settings, and give them an initial Role. The user will be sent an invitation email. If a user is part of an organization, they can utilize their login on all tenants they have authority to access. ![Image showing invite flow](../../.gitbook/assets/add_org_members.png) ## Manage Organization settings like billing and roles The billing information for your tenants is managed on the organization level, among other settings like the members in your organization and the roles they have. You can access the organization settings by clicking on your profile picture in the top right corner and selecting "Settings". ![Image showing the organization settings page](../../.gitbook/assets/org_settings.png) ## Other operations involving organizations There are a lot of other operations involving Organizations that you can perform directly through the API. You can find more information about the API by visiting [https://cloudapi.zenml.io/](https://cloudapi.zenml.io/). ![Image showing the Swagger docs](../../.gitbook/assets/cloudapi_swagger.png)
================ File: docs/book/getting-started/zenml-pro/pro-api.md ================ --- description: > Learn how to use the ZenML Pro API. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Using the ZenML Pro API ZenML Pro offers a powerful API that allows you to interact with your ZenML resources. Whether you're using the [SaaS version](https://cloud.zenml.io) or a self-hosted ZenML Pro instance, you can leverage this API to manage tenants, organizations, users, roles, and more. The SaaS version of ZenML Pro API is hosted at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ## API Overview The ZenML Pro API is a RESTful API that follows OpenAPI 3.1.0 specifications. It provides endpoints for various resources and operations, including: - Tenant management - Organization management - User management - Role-based access control (RBAC) - Authentication and authorization ## Authentication To use the ZenML Pro API, you need to authenticate your requests. If you are logged in to your ZenML Pro account, you can use the same browser window to authenticate requests to your ZenML Pro API, directly in the OpenAPI docs. For example, for the SaaS variant, you can access the docs here: [https://cloudapi.zenml.io](https://cloudapi.zenml.io). You can make requests by being logged into ZenML Pro at [https://cloud.zenml.io](https://cloud.zenml.io). Programmatic access is not possible at the moment. ## Key API Endpoints Here are some important endpoints you can use with the ZenML Pro API: ### Tenant Management - List tenants: `GET /tenants` - Create a tenant: `POST /tenants` - Get tenant details: `GET /tenants/{tenant_id}` - Update a tenant: `PATCH /tenants/{tenant_id}` ### Organization Management - List organizations: `GET /organizations` - Create an organization: `POST /organizations` - Get organization details: `GET /organizations/{organization_id}` - Update an organization: `PATCH /organizations/{organization_id}` ### User Management - List users: `GET /users` - Get current user: `GET /users/me` - Update user: `PATCH /users/{user_id}` ### Role-Based Access Control - Create a role: `POST /roles` - Assign a role: `POST /roles/{role_id}/assignments` - Check permissions: `GET /permissions` ## Error Handling The API uses standard HTTP status codes to indicate the success or failure of requests. In case of errors, the response body will contain more details about the error, including a message and sometimes additional information. ## Rate Limiting Be aware that the ZenML Pro API may have rate limiting in place to ensure fair usage. If you exceed the rate limit, you may receive a 429 (Too Many Requests) status code. Implement appropriate backoff and retry logic in your applications to handle this scenario. Remember to refer to the complete API documentation available at [https://cloudapi.zenml.io](https://cloudapi.zenml.io) for detailed information about all available endpoints, request/response schemas, and additional features.
================ File: docs/book/getting-started/zenml-pro/README.md ================ --- icon: cloud description: Learn about the ZenML Pro features and deployment scenarios. --- # ZenML Pro ![Walkthrough of ZenML Model Control Plane](../../.gitbook/assets/mcp_walkthrough.gif) The [Pro version of ZenML](https://zenml.io/pro) comes with a number of features that expand the functionality of the Open Source product. ZenML Pro adds a managed control plane with benefits like: - **A managed production-grade ZenML deployment**: With ZenML Pro you can deploy multiple ZenML servers called [tenants](./tenants.md). - **User management with teams**: Create [organizations](./organization.md) and [teams](./teams.md) to easily manage users at scale. - **Role-based access control and permissions**: Implement fine-grained access control using customizable [roles](./roles.md) to ensure secure and efficient resource management. - **Enhanced model and artifact control plane**: Leverage the [Model Control Plane](../../user-guide/starter-guide/track-ml-models.md) and [Artifact Control Plane](../../user-guide/starter-guide/manage-artifacts.md) for improved tracking and management of your ML assets. - **Triggers and run templates**: ZenML Pro enables you to [create and run templates](../../how-to/trigger-pipelines/README.md#run-templates). This way, you can use the dashboard or our Client/REST API to run a pipeline with updated configuration, allowing you to iterate quickly with minimal friction. - **Early-access features**: Get early access to pro-specific features such as triggers, filters, sorting, generating usage reports, and more. Learn more about ZenML Pro on the [ZenML website](https://zenml.io/pro). {% hint style="info" %} If you're interested in assessing ZenML Pro, you can simply create a [free account](https://cloud.zenml.io/?utm\_source=docs\&utm\_medium=referral\_link\&utm\_campaign=cloud\_promotion\&utm\_content=signup\_link). Learn more about ZenML Pro on the [ZenML website](https://zenml.io/pro). {% endhint %} ## Deployment scenarios: SaaS vs Self-hosted One of the most straightforward paths to start with a deployed ZenML server is to use [the SaaS version of ZenML Pro](https://zenml.io/pro). The ZenML Pro offering eliminates the need for you to dedicate time and resources to deploy and manage a ZenML server, allowing you to focus primarily on your MLOps workflows. However, ZenML Pro can also be deployed fully self-hosted. Please [book a demo](https://www.zenml.io/book-your-demo) to learn more or check out the [self-hosted deployment guide](./self-hosted.md).
* [Tenants](tenants.md): Tenants in ZenML Pro
* [Organizations](organization.md): Organizations in ZenML Pro
* [Teams](teams.md): Teams in ZenML Pro
* [Roles](roles.md): Roles in ZenML Pro
* [Self-Hosted Deployments](self-hosted.md): Self-hosted ZenML Pro deployments
================ File: docs/book/getting-started/zenml-pro/roles.md ================ --- description: > Learn about the different roles and permissions you can assign to your team members in ZenML Pro. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # ZenML Pro: Roles and Permissions ZenML Pro offers a robust role-based access control (RBAC) system to manage permissions across your organization and tenants. This guide will help you understand the different roles available, how to assign them, and how to create custom roles tailored to your team's needs. Please note that roles can be assigned to both individual users and [teams](./teams.md). ## Organization-Level Roles At the organization level, ZenML Pro provides three predefined roles: 1. **Org Admin** - Full control over the organization - Can add members, create and update tenants - Can manage billing information - Can assign roles to other members 2. **Org Editor** - Can manage tenants and teams - Cannot access subscription information - Cannot delete the organization 3. **Org Viewer** - Can view tenants within the organization - Read-only permissions ![Organization Roles](../../.gitbook/assets/org_members.png) To assign organization roles: 1. Navigate to the Organization settings page 2. Click on the "Members" tab. Here you can update roles for existing members. 3. Use the "Add members" button to add new members ![Screenshot showing the invite modal](../../.gitbook/assets/add_org_members.png) Some points to note: - In addition to adding organization roles, you might also want to add tenant roles for people who you want to have access to a specific tenant. - An organization admin can add themselves to a tenant with any tenant role they desire. - However, an organization editor and viewer cannot add themselves to existing tenants that they are not a part of. They won't be able to view such tenants in the organization either. - Currently, you cannot create custom organization roles via the ZenML Pro dashboard. However, this is possible via the [ZenML Pro API](https://cloudapi.zenml.io/). ## Tenant-Level Roles Tenant roles determine a user's permissions within a specific ZenML tenant. There are predefined roles available, and you can also create custom roles for more granular control. ![Image showing the tenant roles](../../.gitbook/assets/role_page.png) ### Predefined Tenant Roles 1. **Admin** - Full control over the tenant - Can create, read, update, and delete all resources ![Image showing the admin role](../../.gitbook/assets/admin_role.png) 2. **Editor** - Can create, read, and share resources - Cannot modify or delete existing resources 3. **Viewer** - Read-only access to all resources and information ### Custom Roles Custom roles allow you to define specific permissions for users or groups. To create a custom role for a tenant: 1. Go to the tenant settings page ![Image showing the tenant settings page](../../.gitbook/assets/custom_role_settings_page.png) 2. Click on "Roles" in the left sidebar and Select "Add Custom Role" ![Image showing the add custom role page](../../.gitbook/assets/tenant_roles_page.png) 3. Provide a name and description for the role. Choose a base role from which to inherit permissions ![Image showing the add custom role page](../../.gitbook/assets/create_role_modal.png) 4. 
Edit permissions as needed ![Image showing the add custom role page](../../.gitbook/assets/assign_permissions.png) A custom role allows you to set permissions for various resources, including: - Artifacts - Models - Model Versions - Pipelines - Runs - Stacks - Components - Secrets - Service Connectors For each resource, you can define the following permissions: - Create - Read - Update - Delete - Share You can then assign this role to a user or a team on the "Members" page. #### Managing permissions for roles To manage permissions for a role: 1. Go to the Roles page in tenant settings 2. Select the role you want to modify 3. Click on "Edit Permissions" 4. Adjust permissions for each resource type as needed ![Assign Permissions](../../.gitbook/assets/assign_permissions.png) ## Sharing individual resources While roles define permission on broad resource groups, users can also share individual resources through the dashboard like this: ![Share dialog](../../.gitbook/assets/share_dialog.png) ## Best Practices 1. **Least Privilege**: Assign the minimum necessary permissions to each role. 2. **Regular Audits**: Periodically review and update role assignments and permissions. 3. **Use Custom Roles**: Create custom roles for teams or projects with specific needs. 4. **Document Roles**: Maintain documentation of your custom roles and their intended use. By leveraging ZenML Pro's role-based access control, you can ensure that your team members have the right level of access to resources, maintaining security while enabling collaboration across your MLOps projects.
================ File: docs/book/getting-started/zenml-pro/teams.md ================ --- description: > Learn about Teams in ZenML Pro and how they can be used to manage groups of users across your organization and tenants. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Organize users in Teams ZenML Pro introduces the concept of Teams to help you manage groups of users efficiently. A team is a collection of users that acts as a single entity within your organization and tenants. This guide will help you understand how teams work, how to create and manage them, and how to use them effectively in your MLOps workflows. ## Understanding Teams Teams in ZenML Pro offer several key benefits: 1. **Group Management**: Easily manage permissions for multiple users at once. 2. **Organizational Structure**: Reflect your company's structure or project teams in ZenML. 3. **Simplified Access Control**: Assign roles to entire teams rather than individual users. ## Creating and Managing Teams Teams are created at the organization level and can be assigned roles within tenants, similar to individual users. To create a team: 1. Navigate to the Organization settings page 2. Click on the "Teams" tab 3. Use the "Add team" button to add a new team ![Create Team](../../.gitbook/assets/create_team.png) When creating a team, you'll need to provide: - Team name - Description (optional) - Initial team members ## Adding Users to Teams To add users to an existing team: 1. Go to the "Teams" tab in Organization settings 2. Select the team you want to modify 3. Click on "Add Members" 4. Choose users from your organization to add to the team ![Add Team Members](../../.gitbook/assets/add_team_members.png) ## Assigning Teams to Tenants Teams can be assigned to tenants just like individual users. To add a team to a tenant: 1. Go to the tenant settings page 2. Click on "Members" tab and click on the "Teams" tab. 3. Select "Add Team" 4. Choose the team and assign a role ![Assign Team to Tenant](../../.gitbook/assets/assign_team_to_tenant.png) ## Team Roles and Permissions When you assign a role to a team within a tenant, all members of that team inherit the permissions associated with that role. This can be a predefined role (Admin, Editor, Viewer) or a custom role you've created. For example, if you assign the "Editor" role to a team in a specific tenant, all members of that team will have Editor permissions in that tenant. ![Team Roles](../../.gitbook/assets/team_roles.png) ## Best Practices for Using Teams 1. **Reflect Your Organization**: Create teams that mirror your company's structure or project groups. 3. **Combine with Custom Roles**: Use custom roles with teams for fine-grained access control. 4. **Regular Audits**: Periodically review team memberships and their assigned roles. 5. **Document Team Purposes**: Maintain clear documentation about each team's purpose and associated projects or tenants. By leveraging Teams in ZenML Pro, you can streamline user management, simplify access control, and better organize your MLOps workflows across your organization and tenants.
================ File: docs/book/getting-started/zenml-pro/tenants.md ================ --- description: > Learn how to use tenants in ZenML Pro. --- {% hint style="warning" %} This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). {% endhint %} # Tenants Tenants are individual, isolated deployments of the ZenML server. Each tenant has its own set of users, roles, and resources. Essentially, everything you do in ZenML Pro revolves around a tenant: all of your pipelines, stacks, runs, connectors and so on are scoped to a tenant. ![Image showing the tenant page](../../.gitbook/assets/custom_role_settings_page.png) The ZenML server that you get through a tenant is a supercharged version of the open-source ZenML server. This means that you get all the features of the open-source version, plus some extra Pro features. ## Create a Tenant in your organization A tenant is a crucial part of your Organization and holds all of your pipelines, experiments and models, among other things. You need to have a tenant to fully utilize the benefits that ZenML Pro brings. The following is how you can create a tenant yourself: - Go to your organization page - Click on the "+ New Tenant" button ![Image showing the create tenant page](../../.gitbook/assets/new_tenant.png) - Give your tenant a name and click on the "Create Tenant" button ![Image showing the create tenant modal](../../.gitbook/assets/new_tenant_modal.png) The tenant will then be created and added to your organization. In the meantime, you can already get started with setting up your environment for the onboarding experience. The image below shows you how the overview page looks like when you are being onboarded. Follow the instructions on the screen to get started. ![Image showing the onboarding experience](../../.gitbook/assets/tenant_onboarding.png) {% hint style="info" %} You can also create a tenant through the Cloud API by navigating to https://cloudapi.zenml.io/ and using the `POST /organizations` endpoint to create a tenant. {% endhint %} ## Organizing your tenants Organizing your tenants effectively is crucial for managing your MLOps infrastructure efficiently. There are primarily two dimensions to consider when structuring your tenants: ### Organizing tenants in `staging` and `production` One common approach is to separate your tenants based on the development stage of your ML projects. This typically involves creating at least two types of tenants: 1. **Staging Tenants**: These are used for development, testing, and experimentation. They provide a safe environment where data scientists and ML engineers can: - Develop and test new pipelines - Experiment with different models and hyperparameters - Validate changes before moving to production 2. **Production Tenants**: These host your live, customer-facing ML services. They are characterized by: - Stricter access controls - More rigorous monitoring and alerting - Optimized for performance and reliability This separation allows for a clear distinction between experimental work and production-ready systems, reducing the risk of untested changes affecting live services. ![Staging vs production tenants](../../.gitbook/assets/staging-production-tenants.png) ### Organizing tenants by business logic Another approach is to create tenants based on your organization's structure or specific use cases. This method can help in: 1. **Project-based Separation**: Create tenants for different ML projects or products. 
For example: - Recommendation System Tenant - Natural Language Processing Tenant - Computer Vision Tenant 2. **Team-based Separation**: Align tenants with your organizational structure: - Data Science Team Tenant - ML Engineering Team Tenant - Business Intelligence Team Tenant 3. **Data Sensitivity Levels**: Separate tenants based on data classification: - Public Data Tenant - Internal Data Tenant - Highly Confidential Data Tenant This organization method offers several benefits: - Improved resource allocation and cost tracking - Better alignment with team structures and workflows - Enhanced data security and compliance management ![Business logic-based tenant organization](../../.gitbook/assets/business-logic-tenants.png) Of course, both approaches of organizing your tenants can be mixed and matched to create a structure that works best for you. ### Best Practices for Tenant Organization Regardless of the approach you choose, consider these best practices: 1. **Clear Naming Conventions**: Use consistent, descriptive names for your tenants to easily identify their purpose. 2. **Access Control**: Implement [role-based access control](./roles.md) within each tenant to manage permissions effectively. 3. **Documentation**: Maintain clear documentation about the purpose and contents of each tenant. 4. **Regular Reviews**: Periodically review your tenant structure to ensure it still aligns with your organization's needs. 5. **Scalability**: Design your tenant structure to accommodate future growth and new projects. By thoughtfully organizing your tenants, you can create a more manageable, secure, and efficient MLOps environment that scales with your organization's needs. ## Using your tenant As previously mentioned, a tenant is a supercharged ZenML server that you can use to run your pipelines, carry out experiments and perform all the other actions you expect out of your ZenML server. Some Pro-only features that you can leverage in your tenant are as follows: - [Model Control Plane](../../../../docs/book/how-to/model-management-metrics/model-control-plane/register-a-model.md) - [Artifact Control Plane](../../how-to/data-artifact-management/handle-data-artifacts/README.md) - [Ability to run pipelines from the Dashboard](../../../../docs/book/how-to/trigger-pipelines/use-templates-rest-api.md), - [Create templates out of your pipeline runs](../../../../docs/book/how-to/trigger-pipelines/use-templates-rest-api.md) and [more](https://zenml.io/pro)! ### Accessing tenant docs Every tenant has a connection URL that you can use to connect your `zenml` client to your deployed Pro server. This URL can also be used to access the OpenAPI specification for the ZenML Server. Simply visit `/docs` on your browser to see a full list of methods that you can execute from it, like running a pipeline through the REST API. ![Image showing the tenant swagger docs](../../.gitbook/assets/swagger_docs_zenml.png) Read more about to access the API [here](../../reference/api-reference.md).
================ File: docs/book/getting-started/core-concepts.md ================ --- icon: lightbulb description: Discovering the core concepts behind ZenML. --- # Core concepts ![A diagram of core concepts of ZenML OSS](../.gitbook/assets/core_concepts_oss.png) **ZenML** is an extensible, open-source MLOps framework for creating portable, production-ready **MLOps pipelines**. It's built for data scientists, ML Engineers, and MLOps Developers to collaborate as they develop to production. In order to achieve this goal, ZenML introduces various concepts for different aspects of an ML workflow and we can categorize these concepts under three different threads:
1. **Development**: As a developer, how do I design my machine learning workflows?
2. **Execution**: While executing, how do my workflows utilize the large landscape of MLOps tooling/infrastructure?
3. **Management**: How do I establish and maintain a production-grade and efficient solution?
{% embed url="https://www.youtube.com/embed/iCB4KNjl5vs" %} If you prefer visual learning, this short video demonstrates the key concepts covered below. {% endembed %} ## 1. Development First, let's look at the main concepts which play a role during the development stage of an ML workflow with ZenML. #### Step Steps are functions annotated with the `@step` decorator. The easiest one could look like this. ```python @step def step_1() -> str: """Returns a string.""" return "world" ``` These functions can also have inputs and outputs. For ZenML to work properly, these should preferably be typed. ```python @step(enable_cache=False) def step_2(input_one: str, input_two: str) -> str: """Combines the two strings passed in.""" combined_str = f"{input_one} {input_two}" return combined_str ``` #### Pipelines At its core, ZenML follows a pipeline-based workflow for your projects. A **pipeline** consists of a series of **steps**, organized in any order that makes sense for your use case. ![Representation of a pipeline dag.](../.gitbook/assets/01\_pipeline.png) As seen in the image, a step might use the outputs from a previous step and thus must wait until the previous step is completed before starting. This is something you can keep in mind when organizing your steps. Pipelines and steps are defined in code using Python _decorators_ or _classes_. This is where the core business logic and value of your work lives, and you will spend most of your time defining these two things. Even though pipelines are simple Python functions, you are only allowed to call steps within this function. The inputs for steps called within a pipeline can either be the outputs of previous steps or alternatively, you can pass in values directly (as long as they're JSON-serializable). ```python @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) ``` Executing the Pipeline is as easy as calling the function that you decorated with the `@pipeline` decorator. ```python if __name__ == "__main__": my_pipeline() ``` #### Artifacts Artifacts represent the data that goes through your steps as inputs and outputs and they are automatically tracked and stored by ZenML in the artifact store. They are produced by and circulated among steps whenever your step returns an object or a value. This means the data is not passed between steps in memory. Rather, when the execution of a step is completed they are written to storage, and when a new step gets executed they are loaded from storage. The serialization and deserialization logic of artifacts is defined by [Materializers](../how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md). #### Models Models are used to represent the outputs of a training process along with all metadata associated with that output. In other words: models in ZenML are more broadly defined as the weights as well as any associated information. Models are first-class citizens in ZenML and as such viewing and using them is unified and centralized in the ZenML API, client as well as on the [ZenML Pro](https://zenml.io/pro) dashboard. #### Materializers Materializers define how artifacts live in between steps. More precisely, they define how data of a particular type can be serialized/deserialized, so that the steps are able to load the input data and store the output data. All materializers use the base abstraction called the `BaseMaterializer` class. 
While ZenML comes built-in with various implementations of materializers for different datatypes, if you are using a library or a tool that doesn't work with our built-in options, you can write [your own custom materializer](../how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) to ensure that your data can be passed from step to step. #### Parameters & Settings When we think about steps as functions, we know they receive input in the form of artifacts. We also know that they produce output (in the form of artifacts, stored in the artifact store). But steps also take parameters. The parameters that you pass into the steps are also (helpfully!) stored by ZenML. This helps freeze the iterations of your experimentation workflow in time, so you can return to them exactly as you run them. On top of the parameters that you provide for your steps, you can also use different `Setting`s to configure runtime configurations for your infrastructure and pipelines. #### Model and model versions ZenML exposes the concept of a `Model`, which consists of multiple different model versions. A model version represents a unified view of the ML models that are created, tracked, and managed as part of a ZenML project. Model versions link all other entities to a centralized view. ## 2. Execution Once you have implemented your workflow by using the concepts described above, you can focus your attention on the execution of the pipeline run. #### Stacks & Components When you want to execute a pipeline run with ZenML, **Stacks** come into play. A **Stack** is a collection of **stack components**, where each component represents the respective configuration regarding a particular function in your MLOps pipeline such as orchestration systems, artifact repositories, and model deployment platforms. For instance, if you take a close look at the default local stack of ZenML, you will see two components that are **required** in every stack in ZenML, namely an _orchestrator_ and an _artifact store_. ![ZenML running code on the Local Stack.](../.gitbook/assets/02\_pipeline\_local\_stack.png) {% hint style="info" %} Keep in mind, that each one of these components is built on top of base abstractions and is completely extensible. {% endhint %} #### Orchestrator An **Orchestrator** is a workhorse that coordinates all the steps to run in a pipeline. Since pipelines can be set up with complex combinations of steps with various asynchronous dependencies between them, the orchestrator acts as the component that decides what steps to run and when to run them. ZenML comes with a default _local orchestrator_ designed to run on your local machine. This is useful, especially during the exploration phase of your project. You don't have to rent a cloud instance just to try out basic things. #### Artifact Store An **Artifact Store** is a component that houses all data that pass through the pipeline as inputs and outputs. Each artifact that gets stored in the artifact store is tracked and versioned and this allows for extremely useful features like data caching which speeds up your workflows. Similar to the orchestrator, ZenML comes with a default _local artifact store_ designed to run on your local machine. This is useful, especially during the exploration phase of your project. You don't have to set up a cloud storage system to try out basic things. #### Flavor ZenML provides a dedicated base abstraction for each stack component type. 
These abstractions are used to develop solutions, called **Flavors**, tailored to specific use cases/tools. With ZenML installed, you get access to a variety of built-in and integrated Flavors for each component type, but users can also leverage the base abstractions to create their own custom flavors. #### Stack Switching When it comes to production-grade solutions, it is rarely enough to just run your workflow locally without including any cloud infrastructure. Thanks to the separation between the pipeline code and the stack in ZenML, you can easily switch your stack independently from your code. For instance, all it would take you to switch from an experimental local stack running on your machine to a remote stack that employs a full-fledged cloud infrastructure is a single CLI command. ## 3. Management In order to benefit from the aforementioned core concepts to their fullest extent, it is essential to deploy and manage a production-grade environment that interacts with your ZenML installation. #### ZenML Server To use _stack components_ that are running remotely on a cloud infrastructure, you need to deploy a [**ZenML Server**](../user-guide/production-guide/deploying-zenml.md) so it can communicate with these stack components and run your pipelines. The server is also responsible for managing ZenML business entities like pipelines, steps, models, etc. ![Visualization of the relationship between code and infrastructure.](../.gitbook/assets/04\_architecture.png) #### Server Deployment In order to benefit from the advantages of using a deployed ZenML server, you can either choose to use the [**ZenML Pro SaaS offering**](zenml-pro/README.md) which provides a control plane for you to create managed instances of ZenML servers, or [deploy it in your self-hosted environment](deploying-zenml/README.md). #### Metadata Tracking On top of the communication with the stack components, the **ZenML Server** also keeps track of all the bits of metadata around a pipeline run. With a ZenML server, you are able to access all of your previous experiments with the associated details. This is extremely helpful in troubleshooting. #### Secrets The **ZenML Server** also acts as a [centralized secrets store](deploying-zenml/secret-management.md) that safely and securely stores sensitive data such as credentials used to access the services that are part of your stack. It can be configured to use a variety of different backends for this purpose, such as the AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, and Hashicorp Vault. Secrets are sensitive data that you don't want to store in your code or configure alongside your stacks and pipelines. ZenML includes a [centralized secrets store](deploying-zenml/secret-management.md) that you can use to store and access your secrets securely. #### Collaboration Collaboration is a crucial aspect of any MLOps team as they often need to bring together individuals with diverse skills and expertise to create a cohesive and effective workflow for machine learning projects. A successful MLOps team requires seamless collaboration between data scientists, engineers, and DevOps professionals to develop, train, deploy, and maintain machine learning models. With a deployed **ZenML Server**, users have the ability to create their own teams and project structures. They can easily share pipelines, runs, stacks, and other resources, streamlining the workflow and promoting teamwork. 
#### Dashboard The **ZenML Dashboard** also communicates with **the ZenML Server** to visualize your _pipelines_, _stacks_, and _stack components_. The dashboard serves as a visual interface to showcase collaboration with ZenML. You can invite _users_, and share your stacks with them. When you start working with ZenML, you'll start with a local ZenML setup, and when you want to transition you will need to [deploy ZenML](deploying-zenml/README.md). Don't worry though, there is a one-click way to do it which we'll learn about later. #### VS Code Extension ZenML also provides a [VS Code extension](https://marketplace.visualstudio.com/items?itemName=ZenML.zenml-vscode) that allows you to interact with your ZenML stacks, runs and server directly from your VS Code editor. If you're working on code in your editor, you can easily switch and inspect the stacks you're using, delete and inspect pipelines as well as even switch stacks.
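To tie the development concepts above together, here is a minimal, runnable sketch that combines the step and pipeline snippets from this section into one script (assuming a standard `zenml` installation).

```python
from zenml import pipeline, step


@step
def step_1() -> str:
    """Returns a string."""
    return "world"


@step(enable_cache=False)
def step_2(input_one: str, input_two: str) -> str:
    """Combines the two strings passed in."""
    return f"{input_one} {input_two}"


@pipeline
def my_pipeline():
    # The output of step_1 is passed to step_2 as an artifact.
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)


if __name__ == "__main__":
    # Calling the decorated function executes the pipeline on the active stack.
    my_pipeline()
```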
================ File: docs/book/getting-started/installation.md ================ --- icon: cauldron description: Installing ZenML and getting started. --- # Installation **ZenML** is a Python package that can be installed directly via `pip`: ```shell pip install zenml ``` {% hint style="warning" %} Note that ZenML currently supports **Python 3.9, 3.10, 3.11 and 3.12**. Please make sure that you are using a supported Python version. {% endhint %} ## Install with the dashboard ZenML comes bundled with a web dashboard that lives inside a [sister repository](https://github.com/zenml-io/zenml-dashboard). In order to get access to the dashboard **locally**, you need to launch the [ZenML Server and Dashboard locally](deploying-zenml/README.md). For this, you need to install the optional dependencies for the ZenML Server: ```shell pip install "zenml[server]" ``` {% hint style="info" %} We highly encourage you to install ZenML in a virtual environment. At ZenML, We like to use [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) or [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv) to manage our Python virtual environments. {% endhint %} ## Installing onto MacOS with Apple Silicon (M1, M2) A change in how forking works on Macs running on Apple Silicon means that you should set the following environment variable which will ensure that your connections to the server remain unbroken: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` You can read more about this [here](http://sealiesoftware.com/blog/archive/2017/6/5/Objective-C_and_fork_in_macOS_1013.html). This environment variable is needed if you are working with a local server on your Mac, but if you're just using ZenML as a client / CLI and connecting to a deployed server then you don't need to set it. ## Nightly builds ZenML also publishes nightly builds under the [`zenml-nightly` package name](https://pypi.org/project/zenml-nightly/). These are built from the latest [`develop` branch](https://github.com/zenml-io/zenml/tree/develop) (to which work ready for release is published) and are not guaranteed to be stable. To install the nightly build, run: ```shell pip install zenml-nightly ``` ## Verifying installations Once the installation is completed, you can check whether the installation was successful either through Bash: ```bash zenml version ``` or through Python: ```python import zenml print(zenml.__version__) ``` If you would like to learn more about the current release, please visit our [PyPi package page.](https://pypi.org/project/zenml) ## Running with Docker `zenml` is also available as a Docker image hosted publicly on [DockerHub](https://hub.docker.com/r/zenmldocker/zenml). Use the following command to get started in a bash environment with `zenml` available: ```shell docker run -it zenmldocker/zenml /bin/bash ``` If you would like to run the ZenML server with Docker: ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ```
## Deploying the server ZenML can run entirely as a pip package on a local system, complete with the dashboard. You can do this easily: ```shell pip install "zenml[server]" zenml login --local # opens the dashboard locally ``` However, advanced ZenML features are dependent on a centrally-deployed ZenML server accessible to other MLOps stack components. You can read more about it [here](deploying-zenml/README.md). For the deployment of ZenML, you have the option to either [self-host](deploying-zenml/README.md) it or register for a free [ZenML Pro](https://cloud.zenml.io/signup?utm\_source=docs\&utm\_medium=referral\_link\&utm\_campaign=cloud\_promotion\&utm\_content=signup\_link) account. ================ File: docs/book/getting-started/system-architectures.md ================ --- icon: building-columns description: Different variations of the ZenML architecture depending on your needs. --- # System Architecture This guide walks through the various ways that ZenML can be deployed, from self-hosted OSS, to SaaS, to self-hosted ZenML Pro! ## ZenML OSS (Self-hosted) {% hint style="info" %} This page is intended as a high level overview. To learn more about how to deploy ZenML OSS, read [this guide](../getting-started/deploying-zenml/README.md). {% endhint %} A ZenML OSS deployment consists of the following moving pieces: * **ZenML OSS Server**: This is a FastAPI app that manages metadata of pipelines, artifacts, stacks etc. Note: In ZenML Pro, the notion of a ZenML server is replaced with what is known as a "Tenant". For all intents and purposes, consider a ZenML Tenant to be a ZenML OSS server that comes with more functionality. * **OSS Metadata Store**: This is where all ZenML tenant metadata is stored, including ML metadata such as tracking and versioning information about pipelines and models. * **OSS Dashboard**: This is a ReactJS app that shows pipelines, runs, etc. * **Secrets Store**: All secrets and credentials required to access customer infrastructure services are stored in a secure secrets store. The ZenML Pro API has access to these secrets and uses them to access customer infrastructure services on behalf of the ZenML Pro. The secrets store can be hosted either by the ZenML Pro or by the customer. ![ZenML OSS server deployment architecture](../.gitbook/assets/oss_simple_deployment.png) ZenML OSS is free under the Apache 2.0 license. Learn how to deploy it [here](./deploying-zenml/README.md). {% hint style="info" %} To learn more about the core concepts for ZenML OSS, go [here](../getting-started/core-concepts.md). {% endhint %} ## ZenML Pro (SaaS or Self-hosted) {% hint style="info" %} If you're interested in assessing ZenML Pro SaaS, you can create a [free account](https://cloud.zenml.io/?utm\_source=docs\&utm\_medium=referral\_link\&utm\_campaign=cloud\_promotion\&utm\_content=signup\_link). If you would like to self-host ZenML Pro, please [book a demo](https://zenml.io/book-a-demo). {% endhint %} The above deployment can be augmented with the ZenML Pro components: * **ZenML Pro Control Plane**: This is the central controlling entity of all tenants. * **Pro Dashboard**: This is a dashboard that builds on top of the OSS dashboard, and adds further functionality. * **Pro Metadata Store**: This is a PostgreSQL database where all ZenML Pro related metadata is stored such as roles, permissions, teams, and tenant management related data. * **Pro Add-ons**: These are Python modules injected into the OSS Server for enhanced functionality.
* **Identity Provider**: ZenML Pro offers flexible authentication options. In cloud-hosted deployments, it integrates with [Auth0](https://auth0.com/), allowing users to log in via social media or corporate credentials. For self-hosted deployments, customers can configure their own identity management solution, with ZenML Pro supporting custom OIDC provider integration. This allows organizations to leverage their existing identity infrastructure for authentication and authorization, whether using the cloud service or deploying on-premises. ![ZenML Pro deployment architecture](../.gitbook/assets/pro_deployment_simple.png) ZenML Pro offers many additional features to increase your teams productivity. No matter your specific needs, the hosting options for ZenML Pro range from easy SaaS integration to completely air-gapped deployments on your own infrastructure. You might have noticed this architecture builds on top of the ZenML OSS system architecture. Therefore, if you already have ZenML OSS deployed, it is easy to enroll it as part of a ZenML Pro deployment! The above components interact with other MLOps stack components, secrets, and data in the following scenarios described below. {% hint style="info" %} To learn more about the core concepts for ZenML Pro, go [here](../getting-started/zenml-pro/core-concepts.md) {% endhint %} ### ZenML Pro SaaS Architecture ![ZenML Pro SaaS deployment with ZenML secret store](../.gitbook/assets/cloud_architecture_scenario_1.png) For the ZenML Pro SaaS deployment case, all ZenML services are hosted on infrastructure hosted by the ZenML Team. Customer secrets and credentials required to access customer infrastructure are stored and managed by the ZenML Pro Control Plane. On the ZenML Pro infrastructure, only ML _metadata_ (e.g. pipeline and model tracking and versioning information) is stored. All the actual ML data artifacts (e.g. data produced or consumed by pipeline steps, logs and visualizations, models) are stored on the customer cloud. This can be set up quite easily by configuring an [artifact store](../component-guide/artifact-stores/artifact-stores.md) with your MLOps stack. Your tenant only needs permissions to read from this data to display artifacts on the ZenML dashboard. The tenant also needs direct access to parts of the customer infrastructure services to support dashboard control plane features such as CI/CD, triggering and running pipelines, triggering model deployments and so on. The advantage of this setup is that it is a fully-managed service, and is very easy to get started with. However, for some clients even some metadata can be sensitive; these clients should refer to the other architecture diagram.
*Detailed Architecture Diagram for SaaS deployment: ZenML Pro Full SaaS deployment with ZenML secret store*
{% hint style="info" %} We also offer a hybrid SaaS option where customer secrets are stored on the customer side. In this case, the customer connects their own secret store directly to the ZenML server that is managed by us. All ZenML secrets used by running pipelines to access infrastructure services and resources are stored in the customer secret store. This allows users to use [service connectors](../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md) and the [secrets API](../how-to/project-setup-and-management/interact-with-secrets.md) to authenticate ZenML pipelines and the ZenML Pro to third-party services and infrastructure while ensuring that credentials are always stored on the customer side. {% endhint %} ![ZenML Pro SaaS deployment with Customer secret store](../../.gitbook/assets/cloud_architecture_scenario_1_1.png)
*Detailed Architecture Diagram for SaaS deployment with custom secret store configuration: ZenML Pro Full SaaS deployment with customer secret store*
### ZenML Pro Self-Hosted Architecture ![ZenML Pro self-hosted deployment](../.gitbook/assets/cloud_architecture_scenario_2.png) In the case of self-hosting ZenML Pro, all services, data, and secrets are deployed on the customer cloud. This is meant for customers who require completely air-gapped deployments, for the tightest security standards. [Reach out to us](mailto:cloud@zenml.io) if you want to set this up.
*Detailed Architecture Diagram for self-hosted ZenML Pro deployment*
Are you interested in ZenML Pro? [Sign up](https://cloud.zenml.io/?utm\_source=docs\&utm\_medium=referral\_link\&utm\_campaign=cloud\_promotion\&utm\_content=signup\_link) and get access to Scenario 1 (the SaaS deployment) with a free 14-day trial now!