SahandSab committed
Commit 9cc0ce2
1 Parent(s): 2135a69

Update README.md

Files changed (1)
  1. README.md +28 -11
README.md CHANGED
@@ -19,11 +19,7 @@ size_categories:
 ---
 
 # EmoBench
-> This is the official repository for our paper ["EmoBench: Evaluating the Emotional Intelligence of Large Language Models"](https://arxiv.org/abs/2402.12071)
-
-<img src="https://img.shields.io/badge/Venue-ACL--24-278ea5" alt="venue"/> <img src="https://img.shields.io/badge/Status-Under Review-success" alt="status"/> <img src="https://img.shields.io/badge/Contributions-Welcome-red"> <img src="https://img.shields.io/badge/Last%20Updated-2024--03--11-2D333B" alt="update"/>
-
-![EmoBench](EmoBench.jpg)
+> This is the official repository for our ACL 2024 paper ["EmoBench: Evaluating the Emotional Intelligence of Large Language Models"](https://arxiv.org/abs/2402.12071)
 
 ## Overview
 
@@ -34,6 +30,8 @@ The dataset includes **400 hand-crafted scenarios** in English and Chinese, stru
 - **Emotional Understanding (EU):** Recognizing emotions and their causes in complex scenarios.
 - **Emotional Application (EA):** Recommending effective emotional responses or actions in emotionally charged dilemmas.
 
+![EmoBench](EmoBench.jpg)
+
 ## Key Features
 
 - **Psychology-based Design:** Grounded in established theories of Emotional Intelligence (e.g., Salovey & Mayer, Goleman).
@@ -63,9 +61,28 @@ For code regarding evaluation, please visit [our repository](https://github.com/
 ## Citation
 If you find our work useful for your research, please kindly cite our paper as follows:
 ```
-@article{EmoBench2024,
-  title={EmoBench: Evaluating the Emotional Intelligence of Large Language Models},
-  author={Sahand Sabour and Siyang Liu and Zheyuan Zhang and June M. Liu and Jinfeng Zhou and Alvionna S. Sunaryo and Juanzi Li and Tatia M. C. Lee and Rada Mihalcea and Minlie Huang},
-  year={2024},
-  eprint={2402.12071},
-  archivePrefix={arXiv},
+@inproceedings{sabour-etal-2024-emobench,
+    title = "{E}mo{B}ench: Evaluating the Emotional Intelligence of Large Language Models",
+    author = "Sabour, Sahand and
+      Liu, Siyang and
+      Zhang, Zheyuan and
+      Liu, June and
+      Zhou, Jinfeng and
+      Sunaryo, Alvionna and
+      Lee, Tatia and
+      Mihalcea, Rada and
+      Huang, Minlie",
+    editor = "Ku, Lun-Wei and
+      Martins, Andre and
+      Srikumar, Vivek",
+    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+    month = aug,
+    year = "2024",
+    address = "Bangkok, Thailand",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.acl-long.326",
+    doi = "10.18653/v1/2024.acl-long.326",
+    pages = "5986--6004",
+    abstract = "Recent advances in Large Language Models (LLMs) have highlighted the need for robust, comprehensive, and challenging benchmarks. Yet, research on evaluating their Emotional Intelligence (EI) is considerably limited. Existing benchmarks have two major shortcomings: first, they mainly focus on emotion recognition, neglecting essential EI capabilities such as emotion management and thought facilitation through emotion understanding; second, they are primarily constructed from existing datasets, which include frequent patterns, explicit information, and annotation errors, leading to unreliable evaluation. We propose EmoBench, a benchmark that draws upon established psychological theories and proposes a comprehensive definition for machine EI, including Emotional Understanding and Emotional Application. EmoBench includes a set of 400 hand-crafted questions in English and Chinese, which are meticulously designed to require thorough reasoning and understanding. Our findings reveal a considerable gap between the EI of existing LLMs and the average human, highlighting a promising direction for future research. Our code and data are publicly available at https://github.com/Sahandfer/EmoBench.",
+}
+```
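
Since the card changed above describes a Hugging Face dataset of 400 hand-crafted EU/EA scenarios, a minimal loading sketch may help readers get started. The repository id below and the assumption that the data loads as a plain `DatasetDict` are illustrative only (the diff does not specify them); the `datasets` calls themselves are standard.

```python
# Minimal sketch: load the EmoBench dataset card shown in the diff above.
# The repo id and split layout are assumptions -- check the dataset card
# (or https://github.com/Sahandfer/EmoBench) for the actual values.
from datasets import load_dataset

REPO_ID = "SahandSab/EmoBench"  # hypothetical repo id, not stated in the diff

# Without a split argument, load_dataset returns a DatasetDict keyed by split.
emobench = load_dataset(REPO_ID)

for split_name, split in emobench.items():
    print(f"{split_name}: {len(split)} examples")
    print(split[0])  # inspect one hand-crafted scenario
    break
```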