Using the code example as-is leads to a "missing model_type in config" error.
Explicitly using XLMRobertaTokenizer and XLMRobertaForSequenceClassification instead leads to a vocab size mismatch between the tokenizer and the model.
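A minimal sketch of the two loading attempts described above, assuming transformers is installed. "model-repo-id" is a placeholder for this model's Hub id, not its real name, and the exception types and the vocab-size check are assumptions about how the problems surface, not a fix.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    XLMRobertaForSequenceClassification,
    XLMRobertaTokenizer,
)

repo_id = "model-repo-id"  # placeholder, not the actual repo name

# Attempt 1: the Auto* classes from the card's code example.
# Fails if config.json has no "model_type" key for AutoConfig to dispatch on.
try:
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForSequenceClassification.from_pretrained(repo_id)
except (KeyError, ValueError) as err:
    print("Auto classes failed:", err)

# Attempt 2: the explicit XLM-R classes. These load, but the tokenizer's
# vocabulary and the model's embedding size disagree, so token ids can
# fall outside the embedding matrix.
tokenizer = XLMRobertaTokenizer.from_pretrained(repo_id)
model = XLMRobertaForSequenceClassification.from_pretrained(repo_id)
print("tokenizer vocab:", len(tokenizer))
print("model vocab_size:", model.config.vocab_size)
```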