Datasets:
Partial evaluation formulation
Hello, great benchmark! I would like to use it with a different formulation from yours, called "partial evaluation".
The basic idea is that we directly substitute each candidate for the pronoun and then compute the probabilities of the completions under the two substitutions.
The problem is that I am not a native speaker, so I am not 100% sure my implementation is correct. I mostly took inspiration from https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/wsc273/utils.py, and for the Thai conversion I used an LLM to help me with the language rules.
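To make the substitution step concrete, here is a small sketch on an invented English example (the string mechanics are language-independent; only the string manipulation is shown, not the actual LM scoring):

```python
# Partial evaluation: split the sentence at the pronoun, substitute each
# candidate, and score only the continuation conditioned on prefix + candidate.
text = "The trophy doesn't fit in the suitcase because it is too big."
pronoun = "it"
pronoun_loc = text.index(" it ") + 1  # position where the pronoun starts
options = ["the trophy", "the suitcase"]

prefix = text[:pronoun_loc]                       # "... because "
continuation = text[pronoun_loc + len(pronoun):]  # " is too big."

# Each candidate replaces the pronoun; the LM would then score
# P(continuation | prefix + option) for each pair.
scored_pairs = [(prefix + opt, continuation) for opt in options]
```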
Here is the code:
```python
def is_possessive(pronoun):
    # Thai marks possession with the prefix word "ของ" ("of")
    return pronoun.startswith("ของ")


def add_possessive(noun):
    return f"ของ{noun}"


def process_opt(option, pronoun):
    # If the pronoun is possessive, the substituted option must be too
    return add_possessive(option) if is_possessive(pronoun) else option


def wsci(line, task_name: str):
    pronoun = line["pronoun"]
    quote = line["text"][:line["pronoun_loc"]]
    ending = line["text"][line["pronoun_loc"] + len(pronoun):]
    options = [process_opt(opt, pronoun) for opt in line["options"]]
    return Doc(
        task_name=task_name,
        query=quote,
        # We have to use spacing, because of tokenization
        choices=[f" {option}{ending}" for option in options],
        gold_index=line["label"],
        unconditioned_prefix="",
    )
```
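A quick way to sanity-check the index arithmetic is a stand-in harness (the `Doc` dataclass below is a minimal invented stand-in, not the framework's real class, and the sample line is an invented English sentence, so the possessive handling is skipped):

```python
from dataclasses import dataclass


@dataclass
class Doc:
    # Minimal stand-in for the evaluation framework's Doc class
    task_name: str
    query: str
    choices: list
    gold_index: int
    unconditioned_prefix: str = ""


def wsci(line, task_name: str):
    pronoun = line["pronoun"]
    quote = line["text"][:line["pronoun_loc"]]
    ending = line["text"][line["pronoun_loc"] + len(pronoun):]
    return Doc(
        task_name=task_name,
        query=quote,
        choices=[f" {opt}{ending}" for opt in line["options"]],
        gold_index=line["label"],
    )


# Invented English sample just to exercise the splitting logic.
text = "The trophy does not fit because it is too big."
line = {
    "text": text,
    "pronoun": "it",
    "pronoun_loc": text.rindex(" it ") + 1,
    "options": ["the trophy", "the suitcase"],
    "label": 0,
}
doc = wsci(line, "wsc-demo")
# doc.query ends with "because ", and each choice is
# "<option> is too big." with a leading space.
```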
Do you think this is a good implementation, or have I missed some linguistic aspect I need to take care of? Thanks in advance.
Sorry for the late response.
So, the idea of using a possessive marker like “ของ” is to mark options A and B as possessive, similar to how we add “’s” in English. I think this implementation is correct, since adding “ของ” serves the same function as adding “’s”.
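For example, the conversion behaves like this (the helper functions are copied from the code above; the candidate noun “จอห์น” (“John”) is an invented example, and “ของเขา”/“เขา” are the possessive and plain third-person pronouns):

```python
def is_possessive(pronoun):
    return pronoun.startswith("ของ")


def add_possessive(noun):
    return f"ของ{noun}"


possessive_pronoun = "ของเขา"  # "his/hers" (ของ + เขา)
plain_pronoun = "เขา"          # "he/she"
candidate = "จอห์น"            # invented candidate noun ("John")

# Possessive pronoun -> the candidate also gets the "ของ" prefix
opt_possessive = (add_possessive(candidate)
                  if is_possessive(possessive_pronoun) else candidate)
# Plain pronoun -> the candidate is substituted as-is
opt_plain = (add_possessive(candidate)
             if is_possessive(plain_pronoun) else candidate)
```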
Hi! Do you have the implementation of 'partial evaluation' for Thai Winograd Schemas somewhere? I'm looking to evaluate this method as well.
But for partial scoring, shouldn't the choices be something like
doc_to_choice: "{% set template = text[:pronoun_loc] %}{{[template+options[0], template+options[1]]}}"
and the target be
doc_to_target: "{% set index = pronoun_loc + pronoun | length %}{{text[index:]}}"
https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/wsc273/default.yaml
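Rendered without Jinja, those two templates compute the following split (shown here in plain Python on the classic Winograd councilmen sentence, used only as a sample row):

```python
# Sample row following the wsc273-style schema used in the templates above.
text = ("The city councilmen refused the demonstrators a permit "
        "because they feared violence.")
pronoun = "they"
pronoun_loc = text.index(" they ") + 1
options = ["The city councilmen", "The demonstrators"]

# doc_to_choice:
# "{% set template = text[:pronoun_loc] %}{{[template+options[0], template+options[1]]}}"
template = text[:pronoun_loc]
choices = [template + options[0], template + options[1]]

# doc_to_target:
# "{% set index = pronoun_loc + pronoun | length %}{{text[index:]}}"
index = pronoun_loc + len(pronoun)
target = text[index:]
```

So each choice is the shared prefix plus one candidate, and the target is the post-pronoun suffix that the model scores, which matches the splitting done in the `wsci` function above.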