qbao775 committed
Commit fdc9854
1 Parent(s): 5113aa1

Update README.md

Files changed (1)
  1. README.md +39 -0
README.md CHANGED
@@ -1,3 +1,42 @@
---
license: mit
+ language:
+ - en
+ metrics:
+ - accuracy
+ library_name: transformers
+ tags:
+ - logical-reasoning
+ - logical-equivalence
+ - contrastive-learning
---
+
+ # AMR-LE
+ This repository contains the model weights for AMR-LE. AMR-LE is a model fine-tuned on AMR-based, logic-driven augmented data, where each example is a triplet `(original sentence, logical equivalence sentence, logical inequivalence sentence)`. We use Abstract Meaning Representation (AMR) to automatically construct logically equivalent and logically inequivalent sentences, and we use contrastive learning to train the model to identify whether two sentences are logically equivalent or inequivalent. You are welcome to fine-tune the model weights on downstream tasks such as logical reasoning reading comprehension (ReClor and LogiQA) and natural language inference (MNLI, MRPC, QNLI, RTE and QQP). We achieved #2 on the ReClor leaderboard.
+
+ Here are the original links for AMR-LE, including the paper, project and leaderboard.
+
+ Paper: https://arxiv.org/abs/2305.12599
+
+ Project: https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning
+
+ Leaderboard: https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347
+
+ In this repository, we trained DeBERTa-V2-XXLarge on the sentence pairs constructed by our AMR-LE. We use AMR with one logical equivalence law, `Contraposition`, to construct different logically equivalent/inequivalent sentences.
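+
+ As a concrete illustration, the contraposition law rewrites `If A then B` as `If not B then not A`. Below is a minimal, hypothetical sketch of what one training triplet might look like; the sentences are invented for illustration and are not taken from the released data.
+
+ ```python
+ # A hypothetical AMR-LE-style triplet built with the contraposition law
+ # (A -> B is logically equivalent to not-B -> not-A). The sentences are
+ # invented for illustration, not taken from the released training data.
+ triplet = {
+     "original": "If it is raining, then the ground is wet.",
+     # Contrapositive: logically equivalent to the original sentence.
+     "logical_equivalence": "If the ground is not wet, then it is not raining.",
+     # Converse: superficially similar but NOT logically equivalent.
+     "logical_inequivalence": "If the ground is wet, then it is raining.",
+ }
+ ```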
+ ## How to load the model weights?
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ # Load the tokenizer and the fine-tuned DeBERTa-V2-XXLarge encoder.
+ tokenizer = AutoTokenizer.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition")
+ model = AutoModel.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition")
+ ```
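+
+ A minimal usage sketch follows, assuming you only want sentence-pair encodings from the bare encoder loaded above; the mean-pooling step is our own choice for illustration, since `AutoModel` returns hidden states without a classification head.
+
+ ```python
+ import torch
+
+ # Encode a sentence pair with the fine-tuned encoder. AutoModel returns
+ # hidden states only, so we mean-pool the last layer as one simple choice.
+ inputs = tokenizer(
+     "If it is raining, then the ground is wet.",
+     "If the ground is not wet, then it is not raining.",
+     return_tensors="pt",
+ )
+ with torch.no_grad():
+     outputs = model(**inputs)
+ embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
+ print(embedding.shape)
+ ```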
+
+ ## Citation
+ ```bibtex
+ @article{bao2023contrastive,
+   title={Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text},
+   author={Bao, Qiming and Peng, Alex Yuxuan and Deng, Zhenyun and Zhong, Wanjun and Tan, Neset and Young, Nathan and Chen, Yang and Zhu, Yonghua and Witbrock, Michael and Liu, Jiamou},
+   journal={arXiv preprint arXiv:2305.12599},
+   year={2023}
+ }
+ ```