dataset: cosmos_qa
templates:
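  # Each entry below maps a template UUID to a promptsource-style !Template object:
  # - answer_choices: a '|||'-separated Jinja string of options, indexed by the
  #   example's `label` field (null when the template defines no fixed choices).
  # - jinja: the prompt template; text before '|||' is the model input, text after
  #   it is the target.
  # - metadata.original_task: true when the template reproduces the dataset's
  #   original multiple-choice reading-comprehension task.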
  015f333d-2a15-4552-9fe3-a20bd781001e: !Template
    answer_choices: null
    id: 015f333d-2a15-4552-9fe3-a20bd781001e
    jinja: "Based on the context and the answer, generate a question. \n\nContext:\
      \ {{context}}\n\nAnswer:\n{% if label == 0 %}\n{{answer0}}\n{% elif label ==\
      \ 1 %}\n{{answer1}}\n{% elif label == 2 %}\n{{answer2}}\n{% elif label == 3\
      \ %}\n{{answer3}}\n{% endif %}\n|||\n{{question}}"
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - BLEU
      - ROUGE
      original_task: false
    name: context_answer_to_question
    reference: 'Template asks the model to generate questions '
  08e20b79-d1c0-4717-b538-f1a313c2b7d2: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 08e20b79-d1c0-4717-b538-f1a313c2b7d2
    jinja: "Read the following context and choose the best option to answer the question.\n\
      Context: {{ context }}\nQuestion: {{ question }}\nOptions: \n- {{ answer_choices\
      \ | join(\"\\n - \") }}\n|||\n{{ answer_choices[label] }}"
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: description_context_question_answer_text
    reference: 'Template generates the answer. Answer cues are included. '
  67d6ba13-4958-4e5e-842c-ada92aead6cc: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 67d6ba13-4958-4e5e-842c-ada92aead6cc
    jinja: 'Read the following context and answer the question.

      Context: {{ context }}

      Question: {{ question }}

      Answer:

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: true
    name: description_context_question_text
    reference: Template generates the answer
  693c47c6-f17c-417a-af70-bc20e71b4ed4: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 693c47c6-f17c-417a-af70-bc20e71b4ed4
    jinja: "Read the following context and choose the best option to answer the question.\n\
      Context: {{ context }}\nQuestion: {{ question }}\nOptions: \nA. {{ answer0 }}\n\
      B. {{ answer1 }}\nC. {{ answer2 }}\nD. {{ answer3 }}\n|||\n{{ answer_choices[label]\
      \ }}"
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: description_context_question_answer_id
    reference: Template asks the model to pick the correct answer
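  # For reference, the template above renders to the following input/target pair
  # ('|||' marks the split between input and target); the <angle-bracket> values
  # stand for hypothetical example fields, not real dataset content:
  #
  #   Read the following context and choose the best option to answer the question.
  #   Context: <context>
  #   Question: <question>
  #   Options:
  #   A. <answer0>
  #   B. <answer1>
  #   C. <answer2>
  #   D. <answer3>
  #   |||
  #   <the letter A, B, C, or D selected by label>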
  6b9a24cc-054e-40d6-8abf-261443122f3a: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 6b9a24cc-054e-40d6-8abf-261443122f3a
    jinja: '{{ context }}

      According to the above context, choose the best option to answer the following
      question.

      Question: {{ question }}

      Options:

      - {{answer_choices | join("\n - ")}}

      |||

      {{answer_choices[label]}}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_description_question_answer_text
    reference: The template asks the model to generate the answer
  71325300-1f16-4a68-97c7-a03457f00cc7: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 71325300-1f16-4a68-97c7-a03457f00cc7
    jinja: '{{ context }}

      {{ question }}

      A. {{ answer0 }}

      B. {{ answer1 }}

      C. {{ answer2 }}

      D. {{ answer3 }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: no_prompt_id
    reference: 'No prompt with context and question. '
  7c30b1a1-14da-4458-95e8-c35f8de23110: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 7c30b1a1-14da-4458-95e8-c35f8de23110
    jinja: '{{ context }}

      Question: {{ question }}

      The answer to the above question:

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: false
    name: context_question_description_text
    reference: Context, question, and task description; the model generates the answer
  85e9ae2c-fbb7-47ed-980c-56da5299e9af: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 85e9ae2c-fbb7-47ed-980c-56da5299e9af
    jinja: '{{ context }}

      {{ question }}

      - {{ answer_choices | join("\n - ") }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: no_prompt_text
    reference: 'No prompt with answer choices. The template asks the model to generate
      the answer. '
  8a60255c-d44d-4f20-a631-ae1c0c9a7d68: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 8a60255c-d44d-4f20-a631-ae1c0c9a7d68
    jinja: '{{ context }}

      According to the above context, choose the best option to answer the following
      question.

      Question: {{ question }}

      Options:

      A. {{ answer0 }}

      B. {{ answer1 }}

      C. {{ answer2 }}

      D. {{ answer3 }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_description_question_answer_id
    reference: Original task with context, question and the answer choices.
  9dc80101-516d-448e-8e05-a62b4acead3b: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 9dc80101-516d-448e-8e05-a62b4acead3b
    jinja: '{{ context }}

      {{ question }}

      Pick the best answer from the following options:

      A. {{ answer0 }}

      B. {{ answer1 }}

      C. {{ answer2 }}

      D. {{ answer3 }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_question_description_answer_id
    reference: Template asks the model to pick the correct answer
  c07c459e-f1f7-409e-9da7-fe5c993a4933: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: c07c459e-f1f7-409e-9da7-fe5c993a4933
    jinja: '{{ context }}

      According to the above context, answer the following question.

      {{ question }}

      |||

      {{answer_choices[label]}}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: true
    name: context_description_question_text
    reference: The template asks the model to generate the answer without any answer
      cues
  d5499348-5cb3-467b-a543-206b5dd9806e: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: d5499348-5cb3-467b-a543-206b5dd9806e
    jinja: '{{ context }}

      {{ question }}

      Pick the best answer from the following options:

      - {{ answer_choices | join("\n - ") }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_question_description_answer_text
    reference: 'Context, question, task description, and answer choices '
  e640e365-091c-491e-a87e-f529514607e5: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: e640e365-091c-491e-a87e-f529514607e5
    jinja: "{{question}} \n|||\n{{ answer_choices[label] }}"
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: false
    name: only_question_answer
    reference: Template with only the question; the model generates the answer
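# Usage sketch (not part of the template data). Assuming the promptsource
# library is installed, its DatasetTemplates / Template.apply API (as shown in
# the promptsource README) can render these cosmos_qa templates against a
# Hugging Face `datasets` example; `apply` splits the rendered string on '|||'
# into the input and the target:
#
#   from datasets import load_dataset
#   from promptsource.templates import DatasetTemplates
#
#   example = load_dataset("cosmos_qa", split="validation")[0]
#   cosmos_prompts = DatasetTemplates("cosmos_qa")
#   template = cosmos_prompts["description_context_question_answer_id"]
#   input_text, target = template.apply(example)
#   print(input_text)
#   print(target)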