Jaren committed on
Commit b1ab27e
1 Parent(s): 287ea20

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -1,5 +1,6 @@
  This model used hfl/chinese-roberta-wwm-ext-large backbone and was trained on SNLI, MNLI, DNLI, KvPI data in Chinese version.
  Model structures are as follows:
+ `
  class RobertaForSequenceClassification(nn.Module):
      def __init__(self, tagset_size):
          super(RobertaForSequenceClassification, self).__init__()
@@ -33,4 +34,4 @@ class RobertaClassificationHead(nn.Module):
          x = self.out_proj(x)
          return x
  model = RobertaForSequenceClassification(num_labels)
- model.load_state_dict(torch.load(args.model_save_path+'Roberta_large_model.pt', map_location=device))
+ model.load_state_dict(torch.load(args.model_save_path+'Roberta_large_model.pt', map_location=device))`
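For readers who want to try the checkpoint, a minimal usage sketch follows. The diff shows only the first and last lines of the model definition, so the backbone wiring inside __init__, the forward signature, the pooling choice, the number of labels, and the checkpoint path are all assumptions here; the real class also contains a RobertaClassificationHead whose internals are not shown, so with a different layout the state-dict keys may not match the author's saved weights.

# Minimal sketch, not the author's exact code: the diff omits the class body,
# so the backbone wiring, forward signature, label count, and checkpoint path
# below are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaForSequenceClassification(nn.Module):
    def __init__(self, tagset_size):
        super(RobertaForSequenceClassification, self).__init__()
        # hfl/chinese-roberta-wwm-ext-large is the backbone named in the README.
        self.roberta = AutoModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
        self.classifier = nn.Linear(self.roberta.config.hidden_size, tagset_size)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        return self.classifier(cls_repr)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_labels = 3  # placeholder, e.g. entailment / neutral / contradiction
model = RobertaForSequenceClassification(num_labels).to(device)

# 'Roberta_large_model.pt' mirrors the filename in the README; the directory is unknown.
state_dict = torch.load("Roberta_large_model.pt", map_location=device)
model.load_state_dict(state_dict)
model.eval()

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
inputs = tokenizer("今天天气很好。", "今天是晴天。", return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(inputs["input_ids"], inputs["attention_mask"])
print(logits.argmax(dim=-1))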