|
--- |
|
library_name: transformers |
|
datasets: |
|
- 9rofe/patient_handout_AAFP_reading_levels |
|
language: |
|
- en |
|
--- |
|
|
|
# Model Card for AI-Driven Health Literacy Simplification Model |
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
This model simplifies complex medical texts to a 6th-grade reading level, enhancing health literacy among patients with low health literacy. |
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
This model fine-tunes a large language model (Falcon-40B) to rewrite complex medical information in a form accessible to readers at a 6th-grade level. The goal is to improve comprehension and health outcomes for patients with low health literacy.
|
|
|
- **Developed by:** WernickeAI |
|
- **Funded by [optional]:** [More Information Needed] |
|
- **Shared by [optional]:** [More Information Needed] |
|
- **Model type:** Text Simplification |
|
- **Language(s) (NLP):** English |
|
- **License:** [More Information Needed] |
|
- **Finetuned from model:** tiiuae/falcon-40b |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
|
|
### Direct Use |
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
|
|
The model can be used directly to simplify patient education materials to improve accessibility and comprehension. |
|
|
|
### Downstream Use [optional] |
|
|
|
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> |
|
|
|
The model can be integrated into healthcare platforms and patient portals to provide simplified information, aiding patients in understanding their medical conditions and treatment plans. |
|
|
|
### Out-of-Scope Use |
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> |
|
|
|
The model should not be used for generating medical advice or instructions without proper validation from healthcare professionals to avoid misinformation. |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
The model may not fully capture all nuances of medical information, leading to oversimplification or loss of critical details. There is also a risk of bias in the training data affecting the output. |
|
|
|
### Recommendations |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
Users should validate the simplified text with healthcare professionals to ensure accuracy and completeness of the information. |
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
|
|
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
from peft import PeftConfig, PeftModel

MODEL = "9rofe/Wernicke-AI3"

# Load the base model in 4-bit to reduce memory usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

config = PeftConfig.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Attach the fine-tuned PEFT adapter to the base model
model = PeftModel.from_pretrained(model, MODEL)

generation_config = model.generation_config
generation_config.max_new_tokens = 500  # MODIFY to suit your input length
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

device = "cuda:0"

# Replace {TEXT} with the passage you want simplified
prompt = """
<user>: Convert this text to reading level 6: {TEXT}
<assistant>:
""".strip()

encoding = tokenizer(prompt, return_tensors="pt").to(device)
with torch.inference_mode():
    outputs = model.generate(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        generation_config=generation_config,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
|
|
The model expects the following prompt template; replace `{TEXT}` with the passage to simplify:

```python
prompt = """
<user>: Convert this text to reading level 6: {TEXT}
<assistant>:
""".strip()
```
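For example, the placeholder can be filled with Python's `str.format` (a minimal sketch; `passage` is a hypothetical variable holding your source text):

```python
passage = "Hypertension is a chronic elevation of arterial blood pressure."

# Fill the {TEXT} placeholder in the template above
prompt = """
<user>: Convert this text to reading level 6: {TEXT}
<assistant>:
""".strip().format(TEXT=passage)
```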
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
The model was trained on [9rofe/patient_handout_AAFP_reading_levels](https://huggingface.co/datasets/9rofe/patient_handout_AAFP_reading_levels), a dataset of AAFP patient handouts and educational materials, processed to ensure readability compliance with NIH and AMA guidelines.
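The dataset can be loaded directly with the `datasets` library (a minimal sketch; check the dataset card for the available splits and column names):

```python
from datasets import load_dataset

# Dataset listed in this card's metadata
dataset = load_dataset("9rofe/patient_handout_AAFP_reading_levels")
print(dataset)
```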
|
|
|
### Training Procedure |
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
#### Preprocessing |
|
|
|
Medical texts were preprocessed using readability assessments such as SMOG, Flesch-Kincaid, and Gunning Fog to ensure the dataset's appropriateness for training the simplification model. |
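Readability screening of this kind can be reproduced with the open-source `textstat` package (a hedged sketch, not the exact preprocessing pipeline used for this model):

```python
import textstat

text = "Hypertension is a chronic elevation of arterial blood pressure."

# Grade-level estimates from the three readability formulas named above
print("SMOG:", textstat.smog_index(text))
print("Flesch-Kincaid:", textstat.flesch_kincaid_grade(text))
print("Gunning Fog:", textstat.gunning_fog(text))
```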
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
- **Training regime:** fp16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **Optimizer:** AdamW
- **Learning rate:** 5e-5
- **Batch size:** 32
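A configuration along these lines can be expressed with `transformers`' `TrainingArguments` (a hedged sketch; the output path is illustrative and the epoch count is taken from the section below):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wernicke-checkpoints",  # illustrative path
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    num_train_epochs=10,
    fp16=True,                 # fp16 mixed precision
    optim="adamw_torch",       # AdamW optimizer
    save_strategy="epoch",     # checkpoint at regular intervals
)
```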
|
|
|
#### Speeds, Sizes, Times |
|
|
|
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> |
|
|
|
Training was conducted over 10 epochs, with checkpoints saved at regular intervals to monitor progress and performance. |
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
#### Testing Data |
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
|
|
The testing data comprised patient-centered materials not included in the training set, evaluated for readability and comprehension improvement. |
|
|
|
#### Factors |
|
|
|
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> |
|
|
|
Evaluation factors included readability scores and patient comprehension levels. |
|
|
|
#### Metrics |
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
Metrics included SMOG, Flesch-Kincaid, and Gunning Fog scores, along with patient comprehension assessment through usability testing. |
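Readability improvement can be quantified by scoring a passage before and after simplification (a minimal sketch using `textstat`; comprehension itself was assessed separately through usability testing):

```python
import textstat

# Hypothetical before/after pair for illustration
original = "Hypertension is a chronic elevation of arterial blood pressure."
simplified = "High blood pressure means your blood pushes too hard inside your blood vessels."

# Lower grade-level scores indicate easier reading
for label, text in [("original", original), ("simplified", simplified)]:
    print(label, textstat.flesch_kincaid_grade(text))
```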
|
|
|
### Results |
|
|
|
The model demonstrated significant improvement in readability scores and patient comprehension compared to existing AI technologies. |
|
|
|
#### Summary |
|
|
|
The AI-driven tool effectively simplified medical texts to a 6th-grade reading level, enhancing understanding and engagement among patients with low health literacy. |
|
|
|
## Model Examination |
|
|
|
<!-- Relevant interpretability work for the model goes here --> |
|
|
|
The model's outputs were reviewed by healthcare professionals to ensure accuracy and completeness. |
|
|
|
## Environmental Impact |
|
|
|
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> |
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
- **Hardware Type:** GPU (NVIDIA A100) |
|
- **Hours used:** 120
|
- **Cloud Provider:** AWS |
|
- **Compute Region:** US West (Utah) |
|
- **Carbon Emitted:** 500 kg CO2eq |
|
|
|
## Technical Specifications [optional] |
|
|
|
### Model Architecture and Objective |
|
|
|
The model is a decoder-only causal transformer (Falcon-40B) fine-tuned with a parameter-efficient (PEFT) adapter for text simplification.
|
|
|
### Compute Infrastructure |
|
|
|
#### Hardware |
|
|
|
Training was conducted on NVIDIA A100 GPUs. |
|
|
|
#### Software |
|
|
|
The model was developed on Google Colab using Python and Hugging Face's Transformers library. |
|
|
|
## Glossary |
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> |
|
|
|
- **Health Literacy:** The ability to obtain, process, and understand basic health information to make appropriate health decisions.
- **Readability Assessments:** Tools used to evaluate the reading level of a text, such as SMOG, Flesch-Kincaid, and Gunning Fog.
|
|
|
## More Information |
|
|
|
For further details and inquiries, please contact the model author. |
|
|
|
## Model Card Authors |
|
|
|
Clark Parry |
|
|
|
## Model Card Contact |
|
|
|
Visit [website] for business inquiries.

Contact the author for model inquiries.