MichaelR207 committed
Commit 79739e1
1 Parent(s): f58a103

Added Usage, Citation, and Contact info to the ReadMe

Files changed (1): README.md +32 -0
README.md CHANGED
@@ -47,6 +47,36 @@ The MultiSim benchmark is a growing collection of text simplification datasets t
 
 - Sentence Simplification
 
+ ### Usage
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ dataset = load_dataset("MichaelR207/MultiSim")
+ ```
+ 
+ ### Citation
+ ```
+ @inproceedings{ryan-etal-2023-revisiting,
+     title = "Revisiting non-{E}nglish Text Simplification: A Unified Multilingual Benchmark",
+     author = "Ryan, Michael and
+       Naous, Tarek and
+       Xu, Wei",
+     booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+     month = jul,
+     year = "2023",
+     address = "Toronto, Canada",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2023.acl-long.269",
+     pages = "4898--4927",
+     abstract = "Recent advancements in high-quality, large-scale English resources have pushed the frontier of English Automatic Text Simplification (ATS) research. However, less work has been done on multilingual text simplification due to the lack of a diverse evaluation benchmark that covers complex-simple sentence pairs in many languages. This paper introduces the MultiSim benchmark, a collection of 27 resources in 12 distinct languages containing over 1.7 million complex-simple sentence pairs. This benchmark will encourage research in developing more effective multilingual text simplification models and evaluation metrics. Our experiments using MultiSim with pre-trained multilingual language models reveal exciting performance improvements from multilingual training in non-English settings. We observe strong performance from Russian in zero-shot cross-lingual transfer to low-resource languages. We further show that few-shot prompting with BLOOM-176b achieves comparable quality to reference simplifications outperforming fine-tuned models in most languages. We validate these findings through human evaluation.",
+ }
+ ```
+ 
+ ### Contact
+ 
+ **Michael Ryan**: [Scholar](https://scholar.google.com/citations?user=8APGEEkAAAAJ&hl=en) | [Twitter](http://twitter.com/michaelryan207) | [Github](https://github.com/XenonMolecule) | [LinkedIn](https://www.linkedin.com/in/michael-ryan-207/) | [Research Gate](https://www.researchgate.net/profile/Michael-Ryan-86) | [Personal Website](http://michaelryan.tech/) | [michaeljryan@gatech.edu](mailto:michaeljryan@gatech.edu)
+ 
 ### Languages
 
 - English
@@ -165,6 +195,8 @@ MIT License
 
 ### Citation Information
 
+ Please cite the individual datasets that you use within the MultiSim benchmark as appropriate. Proper BibTeX attributions for each of the datasets are included below.
+ 
 #### AdminIT
 ```
 @inproceedings{miliani-etal-2022-neural,
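
A minimal sketch expanding on the Usage snippet added above, showing how one might inspect what `load_dataset` returns. The `train` split name is an assumption about the dataset layout, not something confirmed by this commit:

```python
# A minimal sketch building on the Usage snippet above.
# Assumption: the default configuration exposes a "train" split;
# split and column names are not confirmed by this commit.
from datasets import load_dataset

dataset = load_dataset("MichaelR207/MultiSim")

# Show which splits were loaded and their sizes.
print(dataset)

# Peek at the first record of the train split, if present.
if "train" in dataset:
    print(dataset["train"][0])
```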