---
library_name: transformers
datasets:
- 9rofe/patient_handout_AAFP_reading_levels
language:
- en
---

# Model Card for AI-Driven Health Literacy Simplification Model

<!-- Provide a quick summary of what the model is/does. -->

This model simplifies complex medical texts to a 6th-grade reading level, making health information accessible to patients with low health literacy.


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model uses advanced natural language processing (NLP) algorithms to translate complex medical information into a format that is accessible to individuals with a 6th-grade reading level. The goal is to improve comprehension and health outcomes for patients with low health literacy.

- **Developed by:** WernickeAI
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Text Simplification
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model:** tiiuae/falcon-40b

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The model can be used directly to simplify patient education materials to improve accessibility and comprehension.

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

The model can be integrated into healthcare platforms and patient portals to provide simplified information, aiding patients in understanding their medical conditions and treatment plans.
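
As an illustration, here is a minimal integration sketch that wraps generation in a single helper a backend could call. It assumes the `model`, `tokenizer`, and `generation_config` objects loaded in the How to Get Started section below; the function name and signature are hypothetical, not part of the released model.

```python
import torch

def simplify(text: str, model, tokenizer, generation_config, device: str = "cuda:0") -> str:
    """Return a simplified rewrite of `text` using the loaded model (sketch only)."""
    prompt = f"<user>: Convert this text to reading level 6: {text}\n<assistant>:"
    encoding = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.inference_mode():
        outputs = model.generate(
            input_ids=encoding.input_ids,
            attention_mask=encoding.attention_mask,
            generation_config=generation_config,
        )
    decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Drop the echoed prompt; decoding can shift whitespace, so strip defensively
    return decoded[len(prompt):].strip()
```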

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

The model should not be used to generate medical advice or instructions without validation by healthcare professionals; unvalidated output risks spreading misinformation.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The model may not capture every nuance of medical information, which can lead to oversimplification or loss of critical details. Bias in the training data may also affect the output.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should validate the simplified text with healthcare professionals to ensure accuracy and completeness of the information.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig
)
from peft import PeftConfig, PeftModel

MODEL = "9rofe/Wernicke-AI3"

# 4-bit NF4 quantization so the Falcon-40B base model fits in GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# Load the base model named in the PEFT adapter config
config = PeftConfig.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Attach the fine-tuned adapter weights to the quantized base model
model = PeftModel.from_pretrained(model, MODEL)

generation_config = model.generation_config
generation_config.max_new_tokens = 500  # MODIFY as needed
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

device = "cuda:0"

prompt = """
<user>: Convert this text to reading level 6: {TEXT}
<assistant>:
""".strip()

# Fill the placeholder with the passage to simplify
text = "..."  # your medical text here
prompt = prompt.format(TEXT=text)

encoding = tokenizer(prompt, return_tensors="pt").to(device)
with torch.inference_mode():
    outputs = model.generate(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        generation_config=generation_config
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
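
Note that even in 4-bit precision the Falcon-40B base weights occupy roughly 20 GB, so a large-memory GPU such as an A100 is advisable.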

Use this prompt template, filling the `{TEXT}` placeholder with the passage to simplify:

```python
prompt = """
<user>: Convert this text to reading level 6: {TEXT}
<assistant>:
""".strip()
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was trained on a comprehensive dataset of medical texts, including patient handouts and educational materials, processed to ensure readability compliance with NIH and AMA guidelines.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing

Medical texts were preprocessed using readability assessments such as SMOG, Flesch-Kincaid, and Gunning Fog to ensure the dataset's appropriateness for training the simplification model.
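
For reference, these readability scores can be reproduced with the `textstat` package. This is a minimal sketch assuming `textstat` (the card does not name the tooling actually used):

```python
# pip install textstat
import textstat

passage = (
    "Hypertension is a chronic elevation of arterial blood pressure that "
    "increases the risk of cardiovascular disease."
)

# Each metric approximates the U.S. school grade level needed to read the text
print("SMOG index:          ", textstat.smog_index(passage))
print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(passage))
print("Gunning Fog index:   ", textstat.gunning_fog(passage))
```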


#### Training Hyperparameters

- **Training regime:** fp16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **Optimizer:** AdamW
- **Learning rate:** 5e-5
- **Batch size:** 32
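
These settings map onto Hugging Face's `TrainingArguments` roughly as follows. This is a hedged sketch: the card does not publish the training script, so the use of the `Trainer` API, the `output_dir`, and the checkpointing strategy are assumptions.

```python
from transformers import TrainingArguments

# Assumed mapping of the reported hyperparameters onto TrainingArguments;
# the actual training script is not published with this card.
training_args = TrainingArguments(
    output_dir="wernicke-ai3-checkpoints",  # hypothetical path
    fp16=True,                              # fp16 mixed precision
    optim="adamw_torch",                    # AdamW
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    num_train_epochs=10,                    # per "Speeds, Sizes, Times" below
    save_strategy="epoch",                  # checkpoints at regular intervals
)
```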

#### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

Training was conducted over 10 epochs, with checkpoints saved at regular intervals to monitor progress and performance.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

The testing data comprised patient-centered materials not included in the training set, evaluated for readability and comprehension improvement.

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

Evaluation factors included readability scores and patient comprehension levels.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

Metrics included SMOG, Flesch-Kincaid, and Gunning Fog scores, along with patient comprehension assessment through usability testing.
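
A minimal sketch of how such scores could be compared before and after simplification, again assuming `textstat` (the helper below is ours, not part of the published evaluation):

```python
import textstat

def grade_level_delta(original: str, simplified: str) -> dict:
    """Change in each readability grade level after simplification (negative = easier)."""
    metrics = {
        "smog": textstat.smog_index,
        "flesch_kincaid": textstat.flesch_kincaid_grade,
        "gunning_fog": textstat.gunning_fog,
    }
    return {name: fn(simplified) - fn(original) for name, fn in metrics.items()}
```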

### Results

The model demonstrated significant improvement in readability scores and patient comprehension compared to existing AI technologies.

#### Summary

The AI-driven tool effectively simplified medical texts to a 6th-grade reading level, enhancing understanding and engagement among patients with low health literacy.

## Model Examination

<!-- Relevant interpretability work for the model goes here -->

The model's outputs were reviewed by healthcare professionals to ensure accuracy and completeness.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** GPU (NVIDIA A100)
- **Hours used:** 120 hours
- **Cloud Provider:** AWS
- **Compute Region:** US West (Utah)
- **Carbon Emitted:** 500 kg CO2eq

## Technical Specifications [optional]

### Model Architecture and Objective

The model is based on the Falcon-40B causal (decoder-only) transformer, fine-tuned with a PEFT adapter for text simplification.

### Compute Infrastructure

#### Hardware

Training was conducted on NVIDIA A100 GPUs.

#### Software

The model was developed on Google Colab using Python and Hugging Face's Transformers library.

## Glossary

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- **Health Literacy:** The ability to obtain, process, and understand basic health information to make appropriate health decisions.
- **Readability Assessments:** Tools used to evaluate the reading level of a text, such as SMOG, Flesch-Kincaid, and Gunning Fog.

## More Information

For further details and inquiries, please contact the model author.

## Model Card Authors

Clark Parry

## Model Card Contact

Visit [website] for business inquiries.
Contact the author for model inquiries.