---
language:
- en
tags:
- medical
---
# MedS-Bench
[💻 GitHub Repo](https://github.com/MAGIC-AI4Med/MedS-Ins) [🖨️ arXiv Paper](https://arxiv.org/abs/2408.12547)

The official benchmark for "Towards Evaluating and Building Versatile Large Language Models for Medicine".

## Introduction
MedS-Bench is a comprehensive benchmark designed to assess the performance of various large language models (LLMs) in clinical settings. It extends beyond traditional multiple-choice questions to include a wider range of medical tasks, providing a robust framework for evaluating LLM capabilities in healthcare.

The benchmark is structured around 11 high-level clinical task categories drawn from a collection of 28 existing datasets. These datasets have been reformatted into an instruction-prompted question-answering format, which includes hand-crafted task definitions to guide the LLM in generating responses. The categories included in MedS-Bench are diverse and cover essential aspects of clinical decision-making and data handling:

- Multi-choice Question Answering: Tests the ability of LLMs to select correct answers from multiple options based on clinical knowledge.
- Text Summarization: Assesses the capability to concisely summarize medical texts.
- Information Extraction: Evaluates how effectively an LLM can identify and extract relevant information from complex medical documents.
- Explanation and Rationale: Requires the model to provide detailed explanations or justifications for clinical decisions or data.
- Named Entity Recognition: Focuses on the ability to detect and classify entities within a medical text.
- Diagnosis: Tests diagnostic skills, requiring the LLM to identify diseases or conditions from symptoms and case histories.
- Treatment Planning: Involves generating appropriate treatment plans based on patient information.
- Clinical Outcome Prediction: Assesses the ability to predict patient outcomes based on clinical data.
- Text Classification: Involves categorizing text into predefined medical categories.
- Fact Verification: Tests the ability to verify the accuracy of medical facts.
- Natural Language Inference: Requires deducing logical relationships from medical text.
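
To make the prompting setup concrete, here is a minimal sketch of how a prompt might be assembled from a task's definition, a few in-context examples, and a test instance. Field names follow the Data Format section below; the exact template used in the paper may differ, so treat this as an illustration rather than the official evaluation code.

```python
import json

def build_prompt(task: dict, instance: dict, n_shots: int = 2) -> str:
    """Assemble an instruction prompt: task definition, a few
    positive examples, then the test instance input."""
    parts = [task["Definition"][0]]
    for ex in task["Positive Examples"][:n_shots]:
        parts.append(f"Input: {ex['input']}\nOutput: {ex['output']}")
    parts.append(f"Input: {instance['input']}\nOutput:")
    return "\n\n".join(parts)

# "task.json" is a placeholder path for any MedS-Bench task file.
with open("task.json", encoding="utf-8") as f:
    task = json.load(f)
print(build_prompt(task, task["Instances"][0]))
```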

Notably, because the evaluation involves commercial models such as GPT-4 and Claude 3.5, running the original large-scale test splits would be extremely costly. **Therefore, for some benchmarks, we randomly sample a number of test cases.** The cases used to reproduce the results in the paper are in [MedS-Bench-SPLIT](https://huggingface.co/datasets/Henrychur/MedS-Bench-SPLIT). For more details, please refer to our paper.
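
For illustration only, reproducible subsampling can be done with a seeded random draw, as in the sketch below; this is not necessarily the exact procedure used to build MedS-Bench-SPLIT.

```python
import random

def subsample(instances: list, n: int, seed: int = 42) -> list:
    """Reproducibly draw up to n test cases from a task's instances."""
    rng = random.Random(seed)
    return rng.sample(instances, min(n, len(instances)))
```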

## Data Format
The data format is the same as [MedS-Ins](https://huggingface.co/datasets/Henrychur/MedS-Ins).
```json
{
    "Contributors": [""],
    "Source": [""],
    "URL": [""],
    "Categories": [""],
    "Reasoning": [""],
    "Definition": [""],
    "Input_language": [""],
    "Output_language": [""],
    "Instruction_language": [""],
    "Domains": [""],
    "Positive Examples": [ { "input": "", "output": "", "explanation": "" } ],
    "Negative Examples": [ { "input": "", "output": "", "explanation": "" } ],
    "Instances": [ { "id": "", "input": "", "output": [""] } ]
}
```
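
Since each task is a single JSON file in this schema, any JSON reader works; the sketch below pulls one file from the Hub with `huggingface_hub`. Note that `example_task.json` is a placeholder filename; browse the repository's file listing for actual paths.

```python
import json
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# NOTE: "example_task.json" is a placeholder; substitute a real
# task file from the dataset repository.
path = hf_hub_download(
    repo_id="Henrychur/MedS-Bench",
    filename="example_task.json",
    repo_type="dataset",
)
with open(path, encoding="utf-8") as f:
    task = json.load(f)

print(task["Definition"][0])             # hand-crafted task definition
print(len(task["Instances"]), "test instances")
```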