How do you use `is_correct = False` examples from Winogrande etc. when training T0?

#5
by MaximumEntropy - opened

Hi,

Thanks for making the P3 dataset so easy to use! I was wondering: how are examples where `is_correct` is False used when training T0?

Here is an example from the Winogrande validation set where `is_correct` is False:

```
{'idx': <tf.Tensor: shape=(2,), dtype=int32, numpy=array([0, 0], dtype=int32)>,
 'inputs': <tf.Tensor: shape=(34,), dtype=int32, numpy=
 array([ 8077,    47,     3,     9,   231,   394, 12177,   145,  6538,
           78,     3,   834,   373,   530,     8,  1842,  1488,     5,
          363,   405,     8,     3,   834,    16,     8,   756,  7142,
         2401,    12,    58,  8077,    42,  6538,    58], dtype=int32)>,
 'inputs_pretokenized': <tf.Tensor: shape=(), dtype=string, numpy=b'Sarah was a much better surgeon than Maria so _ always got the easier cases.\nWhat does the _ in the above sentence refer to? Sarah or Maria? '>,
 'is_correct': <tf.Tensor: shape=(), dtype=bool, numpy=False>,
 'targets': <tf.Tensor: shape=(2,), dtype=int32, numpy=array([8077,    1], dtype=int32)>,
 'targets_pretokenized': <tf.Tensor: shape=(), dtype=string, numpy=b'Sarah'>,
 'weight': <tf.Tensor: shape=(), dtype=float32, numpy=1.0>}
```

It seems like we'd only want to train and evaluate on correctly labeled examples for simple text-to-text fine-tuning, right?

BigScience Workshop org

Hi @MaximumEntropy,
The example you are pointing to comes from an `XXX_score_eval` subset, which is only used for evaluation. For training we use the sibling subset `XXX` (i.e. without "score_eval" appended), which does not contain the `is_correct` field and contains only the correct instances.
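
For concreteness, here is a minimal sketch of loading the two sibling subsets with the Hugging Face `datasets` library, keeping `XXX` as a placeholder for an actual subset name (as in this thread; it is not a real config name):

```python
from datasets import load_dataset

# "XXX" stands in for a P3 subset name, e.g. one of the Winogrande
# prompt subsets listed in the dataset card.
train_subset = load_dataset("bigscience/P3", "XXX")  # training: only correct targets, no `is_correct` field
eval_subset = load_dataset("bigscience/P3", "XXX_score_eval")  # evaluation: every answer option, with `is_correct`
```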
Victor

Thanks for clarifying! However, I'm noticing that the plain `XXX` subset already contains both inputs and targets, so I'm not sure I understand the purpose of `XXX_score_eval`. Also, how are `is_correct=False` examples used in validation?

BigScience Workshop org

No worries! I hear it can be confusing the first time :)

For evaluation, we perform the classification tasks as "rank classification". This means that if an instance has multiple options or labels (e.g. yes/no, true/false), we compute the log-probability under the model of each option conditioned on the input, and then take as the prediction the option with the highest score.
Essentially, we are comparing logprob(target_A | input), logprob(target_B | input), logprob(target_C | input), ...
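
As an illustration, here is a minimal sketch of rank classification, assuming the `transformers` library and the public `bigscience/T0_3B` checkpoint (this is not the code used for T0; `rank_classify` is a hypothetical helper). Note that the Hugging Face loss is averaged over target tokens, so this variant scores length-normalized log-probabilities; whether to sum or average is an implementation choice.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
model.eval()

def rank_classify(input_text, options):
    """Return the option with the highest logprob(option | input)."""
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids
    scores = []
    for option in options:
        target_ids = tokenizer(option, return_tensors="pt").input_ids
        with torch.no_grad():
            # The returned loss is the mean cross-entropy over target tokens,
            # so -loss is the average per-token log-probability of the option.
            loss = model(input_ids=input_ids, labels=target_ids).loss
        scores.append(-loss.item())
    return options[scores.index(max(scores))]

prompt = ("Sarah was a much better surgeon than Maria so _ always got the easier cases.\n"
          "What does the _ in the above sentence refer to? Sarah or Maria? ")
# Compares logprob("Sarah" | input) vs. logprob("Maria" | input).
print(rank_classify(prompt, ["Sarah", "Maria"]))
```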

For true zero-shot task generalization (i.e. the setup we took great pains to respect in T0), the distinction between validation and test does not really matter, since technically we are not allowed to use extra information to tune hyperparameters, prompts, early stopping, etc. So the `XXX_score_eval` subsets are used in the same way I described above.

Let me know if something is not clear!

VictorSanh changed discussion status to closed
