# Generation evaluation
Now that we have a sense of how to evaluate the retrieval component of our RAG
pipeline, let's move on to the generation component, which is responsible for
generating the answer to the question based on the retrieved context. At this
point, our evaluation starts to move into more subjective territory: it's
harder to come up with metrics that accurately capture the quality of the
generated answers. However, there are still some things we can do.

As with the [retrieval evaluation](retrieval.md), we can start with a simple
approach and then move on to more sophisticated methods.
## Handcrafted evaluation tests
As in the retrieval evaluation, we can start by putting together a set of
examples where we know that our generated output should or shouldn't include
certain terms. For example, if we're generating answers to questions about
which orchestrators ZenML supports, we can check that the generated answers
include terms like "Airflow" and "Kubeflow" (since we do support them) and
exclude terms like "Flyte" or "Prefect" (since we don't (yet!) support them).
These handcrafted tests should be driven by mistakes that you've already seen
in the RAG output. The negative case of "Flyte" and "Prefect" appearing in the
list of supported orchestrators, for instance, shows up occasionally when you
use GPT-3.5 as the LLM.
![](/docs/book/.gitbook/assets/generation-eval-manual.png)
As another example, when you make a query asking 'what is the default
orchestrator in ZenML?' you would expect the answer to include the word
'local', so we can make a test case to confirm that.

You can view our starter set of these tests
[here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py#L28-L55).
It's better to start with something small and simple and then expand as
needed. There's no need for complicated harnesses or frameworks at this stage.
**`bad_answers` table:**

| Question | Bad Words |
|----------|-----------|
| What orchestrators does ZenML support? | AWS Step Functions, Flyte, Prefect, Dagster |
| What is the default orchestrator in ZenML? | Flyte, AWS Step Functions |

**`bad_immediate_responses` table:**

| Question | Bad Words |
|----------|-----------|
| Does ZenML support the Flyte orchestrator out of the box? | Yes |

**`good_responses` table:**

| Question | Good Words |
|----------|------------|
| What are the supported orchestrators in ZenML? Please list as many of the supported ones as possible. | Kubeflow, Airflow |
| What is the default orchestrator in ZenML? | local |
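To make the connection to the code concrete, here is a sketch of how this
table data might be laid out as plain Python lists of dictionaries (the actual
values live in the `eval_e2e.py` file linked above). The `"question"` and
`"bad_words"` keys match what the test function below reads; the `"good_words"`
key for the positive cases is assumed here for illustration:

```python
# Sketch of the test data; see eval_e2e.py in the project for the real values.
bad_answers = [
    {
        "question": "What orchestrators does ZenML support?",
        "bad_words": ["AWS Step Functions", "Flyte", "Prefect", "Dagster"],
    },
    {
        "question": "What is the default orchestrator in ZenML?",
        "bad_words": ["Flyte", "AWS Step Functions"],
    },
]

bad_immediate_responses = [
    {
        "question": "Does ZenML support the Flyte orchestrator out of the box?",
        "bad_words": ["Yes"],
    },
]

good_responses = [
    {
        "question": (
            "What are the supported orchestrators in ZenML? "
            "Please list as many of the supported ones as possible."
        ),
        "good_words": ["Kubeflow", "Airflow"],
    },
    {
        "question": "What is the default orchestrator in ZenML?",
        "good_words": ["local"],
    },
]
```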
Each type of test then catches a specific type of mistake. For example:
```python
from pydantic import BaseModel

# `process_input_with_retrieval` is the RAG query function defined elsewhere in
# the project: it retrieves relevant context and generates an answer.


class TestResult(BaseModel):
    success: bool
    question: str
    keyword: str = ""
    response: str


def test_content_for_bad_words(
    item: dict, n_items_retrieved: int = 5
) -> TestResult:
    question = item["question"]
    bad_words = item["bad_words"]
    response = process_input_with_retrieval(
        question, n_items_retrieved=n_items_retrieved
    )
    # Fail as soon as any disallowed word appears in the generated answer.
    for word in bad_words:
        if word in response:
            return TestResult(
                success=False,
                question=question,
                keyword=word,
                response=response,
            )
    return TestResult(success=True, question=question, response=response)
```
Here we're testing that a particular word doesn't show up in the generated
response. If we find the word, we return a failure; otherwise we return a
success. This is a simple example, but you can imagine more complex tests that
check for the presence of multiple words, or the presence of a word in a
particular context, as sketched below.
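For instance, the mirror-image test for the `good_responses` table checks that
an expected keyword *does* appear in the answer. A minimal sketch, reusing the
`TestResult` model and `process_input_with_retrieval` function from above (the
function name here is illustrative):

```python
def test_content_contains_good_words(
    item: dict, n_items_retrieved: int = 5
) -> TestResult:
    """Fail if any expected keyword is missing from the generated answer."""
    question = item["question"]
    good_words = item["good_words"]
    response = process_input_with_retrieval(
        question, n_items_retrieved=n_items_retrieved
    )
    for word in good_words:
        if word not in response:
            return TestResult(
                success=False,
                question=question,
                keyword=word,
                response=response,
            )
    return TestResult(success=True, question=question, response=response)
```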
We pass these custom tests into a test runner that keeps track of how many
tests fail and logs the failures to the console as they happen:
```python
from typing import Callable


def run_tests(test_data: list, test_function: Callable) -> float:
    failures = 0
    total_tests = len(test_data)
    for item in test_data: