Guidelines = """Evaluation Criteria
--------------------------------------------------
For Instruction Following
Description:
Give gradings for the instruction-following property of each of the responses. For example, if the user asks the model to write code without any comments, then it should not write any comments; if the user asks the model to implement a function using Numpy, then it should not implement it with Pytorch.
Ratings:
No Issues
Criteria: The problem-solving approach, code, and explanation fully adhere to all the user's instructions, with complete accuracy and attention to detail.
Example: Code implements a sorting algorithm exactly as described, with perfect adherence to the required time complexity and code style guidelines.
Minor Issues
Criteria: The code, problem-solving approach, or explanation follows the instructions with only minor deviations that do not significantly impact the overall quality.
Example: Code performs the sorting correctly but uses a slightly different method than requested, or has minor formatting inconsistencies (e.g., indentation issues).
Major Issues
Criteria: The code, problem-solving approach, or explanation shows significant issues with adherence to instructions, with many components not aligning with guidelines.
Example: Code implements a completely different algorithm or fails to sort correctly, missing key steps and significantly deviating from the provided instructions.
--------------------------------------------------
For Truthfulness
Description:
Give grades on the truthfulness of each of the responses. Here, truthfulness means the correctness of the response. The generated code should solve the problem raised by the user and give a correct implementation, i.e., one with no logical or syntax errors.
Ratings:
No Issues
Criteria:
Code: Entirely accurate with no errors or flaws in logic/implementation. Handles all edge cases perfectly. The code runs flawlessly and solves the problem fully.
Explanation: The explanation is entirely accurate, with no factual errors or misconceptions about the code, concept, or methodology.
Example:
Code: Code implements a binary search algorithm that works correctly for all input sizes, handles edge cases like empty arrays, and runs without any errors.
Explanation: The explanation correctly details how the binary search algorithm works, why it’s optimal, and how edge cases are handled, with no inaccuracies.
Minor Issues
Criteria:
Code: Generally accurate with only minor, inconsequential errors that do not impact the solution’s functionality or correctness.
Explanation: The explanation is generally accurate, with only minor inaccuracies that do not significantly impact understanding.
Example:
Code: Code correctly sorts an array but has a minor inefficiency that doesn’t affect the overall output; all major requirements are met, but a small edge case is missed.
Explanation: The explanation of the sorting algorithm is mostly correct but overlooks a small detail about how a specific edge case is handled, which doesn’t affect the overall comprehension.
Major Issues
Criteria:
Code: Has significant accuracy issues with several errors or logical flaws that impact the effectiveness of the solution.
Explanation: The explanation has significant accuracy issues, leading to misunderstandings about the code, concept, or methodology.
Example:
Code: Code attempts to solve a problem but contains logical errors that result in incorrect output for some cases; certain edge cases cause the code to fail.
Explanation: The explanation misinterprets key aspects of the algorithm’s logic, leading to potential confusion about how the code should work.
Notes:
When a user's prompt doesn’t explicitly ask for guidance on library installs, we should not penalize the model on truthfulness for missing library install steps.
The model can be penalized when an obscure library is being used and the install steps for that are more than just “pip install module name.”
When a user’s prompt doesn’t explicitly ask for a particular efficient implementation, truthfulness should not be penalized (e.g., Fibonacci implementation using recursion should be considered accurate when the user did not ask specifically to avoid recursion).
--------------------------------------------------
For Conciseness
Description:
Rate the verbosity of each of the responses.
Ratings:
Too Short
Criteria: The response is missing key information or details necessary to fully address the task. It feels incomplete or overly simplistic, leaving significant gaps that make understanding or completing the task difficult. Critical context or explanations are omitted, making it insufficient for effective communication.
Too Verbose
Criteria: The response includes unnecessary details or excessive information that is not directly relevant to the task. It feels long-winded or overly complex, making it harder to find the key points. The extra verbosity may obscure the main message, causing confusion or overwhelming the recipient with unneeded content.
Just Right
Criteria: The response provides the exact amount of detail necessary to fully address the task without unnecessary elaboration. It is clear, concise, and to the point, ensuring all relevant information is included while avoiding excessive detail.
--------------------------------------------------
For Overall Satisfaction
Description:
Grade the overall satisfaction based on the model evaluation categories (i.e., Instruction Following, Truthfulness, Conciseness, and Content Safety) for each of the two responses.
Ratings:
Amazing
Criteria: The task meets all the requirements present in all model evaluation categories without any significant issues or errors.
Pretty Good
Criteria: The task meets most of the requirements in all model evaluation categories. There may be minor, non-critical issues that do not significantly impact the overall performance.
Okay
Criteria: The task meets some of the requirements but shows noticeable issues in one or more evaluation categories.
Pretty Bad
Criteria: The task fails to meet many requirements across model evaluation categories. Significant issues exist in instruction following, truthfulness, or conciseness. Overall performance is lacking, but some elements are salvageable.
Horrible
Criteria: The task fails to meet the majority of requirements in all model evaluation categories. There are critical issues that make the output unusable or harmful. The task demonstrates poor overall performance, with very little redeeming value."""
Instruction_Following = "Instruction_Following * The answer should be one from this list: [No Issues, Minor Issues, Major Issues]. Give a grading for the instruction-following property of each of the responses. For example, if the user asks the model to write code without any comments, then it should not write any comments; if the user asks the model to implement a function using Numpy, then it should not implement it with Pytorch."
Rationale_for_Instruction_Following = """Rationale for Instruction Following *
Provide a brief explanation of why the chosen score was assigned, highlighting specific aspects of the response that influenced the rating for instruction following, both positive and negative. Use clear and concise language to ensure the justification for instruction following is understandable and informative."""
Truthfulness = """Truthfulness * The answer should be one from this list: [No Issues, Minor Issues, Major Issues].
Give a grading on the truthfulness of each of the responses. Here, truthfulness means the correctness of the response. The generated code should solve the problem raised by the user and give a correct implementation, i.e., one with no logical or syntax errors."""
Rationale_for_Truthfulness = """Reasoning for Truthfulness Rating *
Provide a brief explanation of why the chosen score was assigned, highlighting specific aspects of the response that influenced the rating for truthfulness, both positive and negative. Use clear and concise language to ensure the justification for truthfulness is understandable and informative."""
Conciseness = """Rate the verbosity of each of the responses. The answer should be one from this list: [Just Right, Too Short, Too Verbose]."""
Rationale_for_Conciseness = """Reasoning for Conciseness Rating *
Provide a brief explanation of why the chosen score was assigned, highlighting specific aspects of the response that influenced the rating for conciseness, both positive and negative. Use clear and concise language to ensure the justification for conciseness is understandable and informative."""
Content_Safety = """Content Safety * The answer should be one from this list: [No Issues, Minor Issues, Major Issues].
Assess whether the response is free from harmful content and ensures harmlessness and safety."""
Rationale_Code_Content_Safety_Rating = """Reasoning for Content Safety Rating *
Provide a brief explanation of why the chosen score was assigned, highlighting specific aspects of the response that influenced the rating for content safety, both positive and negative. Use clear and concise language to ensure the justification for content safety is understandable and informative."""
Overall_Satisfaction = """Overall Satisfaction * The answer should be one from this list: [Amazing, Pretty Good, Okay, Pretty Bad, Horrible].
Grade the overall satisfaction based on the model evaluation categories above (i.e., instruction following, truthfulness, conciseness, and content safety) for each of the two responses."""
Reasoning_for_Overall_Satisfaction_Rating = """Reasoning for Overall Satisfaction Rating *
Provide a brief explanation of why the chosen score was assigned, highlighting specific aspects of the response that influenced the rating for overall satisfaction, both positive and negative. Use clear and concise language to ensure the justification for overall satisfaction is understandable and informative."""
import os
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
class Avocat(BaseModel):
    """Structured evaluation form: one rating plus one rationale per category."""
    Instruction_Following2: str = Field(description=Instruction_Following)
    Rationale_for_Instruction_Following2: str = Field(description=Rationale_for_Instruction_Following)
    Truthfulness2: str = Field(description=Truthfulness)
    Rationale_for_Truthfulness2: str = Field(description=Rationale_for_Truthfulness)
    Conciseness2: str = Field(description=Conciseness)
    Rationale_for_Conciseness2: str = Field(description=Rationale_for_Conciseness)
    Content_Safety2: str = Field(description=Content_Safety)
    Rationale_Code_Content_Safety_Rating2: str = Field(description=Rationale_Code_Content_Safety_Rating)
    Overall_Satisfaction2: str = Field(description=Overall_Satisfaction)
    Reasoning_for_Overall_Satisfaction_Rating2: str = Field(description=Reasoning_for_Overall_Satisfaction_Rating)
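
# PydanticOutputParser derives JSON format instructions from the Avocat schema
# (injected into the prompt below via partial_variables) and later validates
# the model's JSON reply against that same schema.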
parser = PydanticOutputParser(pydantic_object=Avocat)
prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            """Please evaluate the model responses and generate short, direct details that address the given information. Also, make every point read as if it was evaluated and written by a human.
\n{format_instructions}\n{data}\n"""
        )
    ],
    input_variables=["data"],
    partial_variables={
        "format_instructions": parser.get_format_instructions(),
    },
)
from langchain_groq import ChatGroq
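
# A minimal sanity check (an assumption here: ChatGroq reads its key from the
# GROQ_API_KEY environment variable); failing early gives a clearer error than
# a validation failure inside the client constructor.
if not os.environ.get("GROQ_API_KEY"):
    raise RuntimeError("Set the GROQ_API_KEY environment variable before running.")
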
chat_model = ChatGroq(
    model="llama-3.1-70b-versatile",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
)
def AvocatAI(input_data):
    """Format the evaluation prompt, run the chat model, and return the parsed ratings as a dict."""
    _input = prompt.format_prompt(data=input_data)
    output = chat_model.invoke(_input.to_messages())
    parsed = parser.parse(output.content)
    return parsed.dict()
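
# Example usage (a hedged sketch: the prompt/response text below is a
# hypothetical placeholder, not real evaluation data).
if __name__ == "__main__":
    sample = (
        "User prompt: Write a Python function that reverses a string.\n"
        "Response A: def reverse(s): return s[::-1]\n"
        "Response B: def reverse(s): return ''.join(reversed(s))"
    )
    for field, value in AvocatAI(sample).items():
        print(f"{field}: {value}")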