---
license: cc-by-nc-sa-4.0
---
### Introduction

This model is based on the research described in the paper "Enhancing Cervical Cancer Cytology Screening via Artificial Intelligence Innovation". The paper shows how advanced AI techniques can significantly improve the accuracy and efficiency of cervical cancer screening, offering a more scalable and cost-effective alternative to traditional methods. Specifically, this model classifies tile images from LBC cytology specimens at low magnification (x10) into normal and abnormal categories, a departure from the high-magnification or single-cell approaches commonly used in cytology.
### Model Description

- **Paper**: https://www.nature.com/articles/s41598-024-70670-6
- **Repository**: https://github.com/kuri54/GynAIe
- **License**: CC-BY-NC-SA-4.0
### Training Details

- **Total Images**: 8000
- **Normal Images**: 4000
- **Abnormal Images**: 4000
  - LSIL: 1000
  - HSIL: 1000
  - SCC: 1000
  - ADC: 1000
- **Magnification Level**: x10
### Usage

This model is not intended to be used in isolation. To take full advantage of its capabilities and the techniques developed, please refer to the accompanying code in our [GitHub repository](https://github.com/kuri54/GynAIe), which explains how to use the model effectively in your applications.

For full documentation, example scripts, and more details, visit the GitHub repository.
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the model and processor, moving the model to the GPU if available
model = CLIPModel.from_pretrained("kuri54/GynAIe-B16-8k").to(device)
processor = CLIPProcessor.from_pretrained("kuri54/GynAIe-B16-8k")

image = Image.open("path/to/image")

# normal or abnormal
labels = ["normal", "anomaly"]
text = ["a image of a normal", "a image of a anomaly"]

inputs = processor(text=text, images=image, return_tensors="pt", padding=True).to(device)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1).cpu().detach().numpy()

predicted_class_idx = probs.argmax(-1).item()
print("Class:", labels[predicted_class_idx])
print("Score:", probs)
```
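
Because the model scores individual x10 tiles, screening a whole specimen yields one `[normal, anomaly]` probability pair per tile. The sketch below shows one simple way to aggregate tile-level scores into a specimen-level flag; the `aggregate_tiles` helper and its thresholds are a hypothetical illustration, not part of the published GynAIe pipeline:

```python
import numpy as np

def aggregate_tiles(tile_probs, threshold=0.5, min_abnormal_tiles=1):
    """Hypothetical aggregation: flag a specimen as abnormal when at least
    `min_abnormal_tiles` tiles have an anomaly score above `threshold`.

    tile_probs: array-like of shape (n_tiles, 2) holding the
    [normal, anomaly] softmax scores produced per tile.
    """
    probs = np.asarray(tile_probs)
    abnormal_tiles = probs[:, 1] > threshold  # boolean mask over tiles
    return bool(abnormal_tiles.sum() >= min_abnormal_tiles)

# Three tiles, one clearly anomalous -> specimen is flagged
tile_probs = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
print(aggregate_tiles(tile_probs))  # True
```

Raising `min_abnormal_tiles` trades sensitivity for specificity; the appropriate operating point should be chosen against your own validation data.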