common.py (CHANGED)
@@ -47,7 +47,7 @@ EVAL_DESCRIPTION = """
 - Examples (Optional)
 """
 
-DEFAULT_EVAL_PROMPT = """You are assessing a chat bot response to a user's input based on
+DEFAULT_EVAL_PROMPT = """You are assessing a chat bot response to a user's input based on how well it follows the user's instructions. Your evaluation should consider fac
 
 Score:
 A score of 1 means that the response's answer meets all of the evaluation criteria.
@@ -145,4 +145,4 @@ Atla currently funds this out of our own pocket. We are looking for API credits
 We are training a general-purpose evaluator that you will soon be able to run in this Judge Arena. Our next step will be to open-source a powerful model that the community can use to run fast and accurate evaluations.
 <br><br>
 # Get in touch
-Feel free to email us at [support@atla-ai.com](mailto:support@atla-ai.com) or leave feedback on our [Github](https://github.com/atla-ai/judge-arena)!"""
+Feel free to email us at [support@atla-ai.com](mailto:support@atla-ai.com) or leave feedback on our [Github](https://github.com/atla-ai/judge-arena)!"""