Update README.md
---
library_name: transformers
license: apache-2.0
---

# Model Card for Model ID

paligemma-3b-mix-448-med_30k-ct-brain is based on Google's lightweight PaliGemma vision-language model (VLM), fine-tuned for brain CT image captioning, visual question answering, text reading, and object detection.

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** mychen76@gmail.com
- **Funded by:** N/A
- **Shared by:** mychen76@gmail.com
- **Model type:** Visual Language Model
- **License:** Apache 2.0
- **Finetuned from model [optional]:** google/paligemma-3b-mix-448

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** TBD
- **Paper [optional]:** TBD
- **Demo [optional]:** TBD

## How to Use

paligemma-3b-mix-448-med_30k-ct-brain is a single-turn vision-language model. It is not meant for conversational use, and it works best on the brain CT image captioning use case.

Input: an image and a text string, such as a prompt to caption the image, or a question.
Output: generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords.
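
As with other PaliGemma mix checkpoints, the text input is a short task prefix. Which prefixes this fine-tune actually responds to depends on its training data; the list below shows the standard PaliGemma prefixes and should be treated as assumptions to verify:

```
prompts = [
    "caption",                         # image captioning (used in the snippet below)
    "answer en is a lesion visible?",  # visual question answering
    "ocr",                             # read text in the image
    "detect lesion",                   # object detection, returns <loc> tokens
]
```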

### Use in Transformers

The following snippets are based on the reference examples for google/paligemma-3b-mix-224. The model in this repo may have been trained for other tasks, so make sure you use inputs appropriate for the task at hand.

***Running the model in bfloat16 (GPU if available, otherwise CPU)***
```
from PIL import Image
import requests
import torch
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dtype = torch.bfloat16

## input
# NOTE: this URL points at the dataset viewer page; replace it with a direct
# link to an image file, since Image.open() needs raw image bytes.
url = "https://huggingface.co/datasets/mychen76/medtrinity_brain_30k_hf/viewer/default/train?row=4&image-viewer=image-62-2B87111BBD996B48DB4C86B0244653FF84B3B8A9"
input_image = Image.open(requests.get(url, stream=True).raw)

## load model
FINETUNED_MODEL_ID = "mychen76/paligemma-3b-mix-448-med_30k-ct-brain"

processor = AutoProcessor.from_pretrained(FINETUNED_MODEL_ID)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    FINETUNED_MODEL_ID,
    torch_dtype=dtype,
    device_map=device,
).eval()
```
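
To load in the default precision (float32) on CPU instead, a minimal variant of the loading call:

```
# Default precision (float32) on CPU: omit torch_dtype and pin the device.
device = torch.device("cpu")  # keep inputs on the same device as the model
model = PaliGemmaForConditionalGeneration.from_pretrained(
    FINETUNED_MODEL_ID,
    device_map="cpu",
).eval()
```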

Run inference:
```
# Ask the model to caption the image
def run_inference(input_text, input_image, model, processor, max_tokens=1024):
    inputs = processor(text=input_text, images=input_image,
                       padding="longest", do_convert_rgb=True,
                       return_tensors="pt").to(device)
    inputs = inputs.to(dtype=model.dtype)

    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_tokens,
                                num_beams=3, do_sample=False)

    pred_text = processor.decode(output[0], skip_special_tokens=True,
                                 clean_up_tokenization_spaces=False)
    return pred_text


input_text = "caption"
pred_text = run_inference(input_text, input_image, model, processor)
print(pred_text)
```

Result:
```
The image is a CT scan of the brain, showing various brain structures without the presence of medical devices. The region of interest, located centrally and in the middle of the image, occupies approximately 3.0% of the area and appears to have an abnormal texture or density compared to the surrounding brain tissue, which may indicate a pathological condition. This abnormal area could be related to the surrounding brain structures, potentially affecting them or being affected by a shared pathological process, such as a hemorrhage or a mass effect.
```
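
The same helper can be reused with other task prefixes. A sketch of visual question answering; the prompt wording here is a hypothetical example, not a prompt taken from this repo's training set:

```
# Hypothetical VQA prompt; adjust to the tasks this fine-tune was trained on.
vqa_text = "answer en is there an abnormal region in this scan?"
print(run_inference(vqa_text, input_image, model, processor))
```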

### Direct Use

## Bias, Risks, and Limitations

Most limitations inherited from the underlying Gemma model still apply:

- VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
- Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
- VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations.

### Recommendations

### Training Data

Fine-tuned using the dataset mychen76/medtrinity_brain_30k_hf, a subset of UCSC-VLAA/MedTrinity-25M.

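For reference, a minimal sketch of loading the training data with 🤗 Datasets; the split name and column layout are assumptions, so check the dataset card for the actual schema:

```
from datasets import load_dataset

# Assumed: a "train" split with image and caption columns.
ds = load_dataset("mychen76/medtrinity_brain_30k_hf", split="train")
print(ds)        # features and row count
sample = ds[0]   # one record; field names depend on the dataset schema
```
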
[More Information Needed]

### Model Architecture and Objective

PaliGemma is the composition of a Transformer decoder and a Vision Transformer image encoder, with a total of 3 billion parameters. The text decoder is initialized from Gemma-2B. The image encoder is initialized from SigLIP-So400m/14. PaliGemma is trained following the PaLI-3 recipes.
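
This composition is visible on the loaded checkpoint itself; a small sketch reusing `model` from the snippet above (attribute names follow the transformers PaliGemma config):

```
# The config exposes the two sub-models described above.
print(model.config.vision_config.model_type)       # SigLIP vision encoder
print(model.config.text_config.model_type)         # Gemma text decoder
print(sum(p.numel() for p in model.parameters()))  # roughly 3 billion parameters
```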

### Compute Infrastructure

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

Dataset: UCSC-VLAA/MedTrinity-25M

**BibTeX:**

[More Information Needed]