Training in progress, step 500
- .gitattributes +1 -0
- README.md +35 -14
- model.safetensors +1 -1
- preprocessor_config.json +25 -0
- runs/May16_06-43-43_d6750c3beba1/events.out.tfevents.1715841825.d6750c3beba1.596.1 +3 -0
- tokenizer.json +3 -0
- training_args.bin +1 -1
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,33 +1,54 @@
 ---
 license: cc-by-nc-sa-4.0
-base_model: microsoft/layoutxlm-base
 tags:
 - generated_from_trainer
+base_model: microsoft/layoutxlm-base
 datasets:
-
+- nielsr/XFUN
+inference: false
 model-index:
 - name: layoutxlm-finetuned-xfund-fr
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # layoutxlm-finetuned-xfund-fr
 
-This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the
+This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the [XFUND](https://github.com/doc-analysis/XFUND) dataset (French split).
+
+## Model usage
+
+Note that this model requires Tesseract with the French language package in order to perform inference. You can install it using `sudo apt-get install tesseract-ocr-fra`.
+
+Here's how to use this model:
+
+```python
+from transformers import AutoProcessor, AutoModelForTokenClassification
+import torch
+from PIL import Image
+
+processor = AutoProcessor.from_pretrained("nielsr/layoutxlm-finetuned-xfund-fr")
+model = AutoModelForTokenClassification.from_pretrained("nielsr/layoutxlm-finetuned-xfund-fr")
+
+# assuming you have a French document, turned into an image
+image = Image.open("...").convert("RGB")
+
+# prepare for the model
+encoding = processor(image, padding="max_length", max_length=512, truncation=True, return_tensors="pt")
+
+with torch.no_grad():
+    outputs = model(**encoding)
+    logits = outputs.logits
+
+predictions = logits.argmax(-1)
+```
 
 ## Intended uses & limitations
 
-
+This model can be used for NER on French scanned documents. It can recognize 4 categories: "question", "answer", "header" and "other".
 
 ## Training and evaluation data
 
-
+This checkpoint used the French portion of the multilingual [XFUND](https://github.com/doc-analysis/XFUND) dataset.
 
 ## Training procedure
 
@@ -49,7 +70,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.
-- Pytorch
-- Datasets 2.
-- Tokenizers 0.
+- Transformers 4.22.1
+- Pytorch 1.10.0+cu111
+- Datasets 2.4.0
+- Tokenizers 0.12.1
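The README's usage snippet stops at raw class indices from `logits.argmax(-1)`. A minimal sketch of turning those indices into the human-readable labels, assuming a hypothetical `id2label` mapping for the four XFUND categories; the real mapping ships with the checkpoint as `model.config.id2label`:

```python
# Hypothetical id2label for illustration only; in practice read
# model.config.id2label from the checkpoint loaded above.
id2label = {0: "O", 1: "B-QUESTION", 2: "B-ANSWER", 3: "B-HEADER",
            4: "I-QUESTION", 5: "I-ANSWER", 6: "I-HEADER"}

# e.g. the argmax output for four tokens
predictions = [1, 4, 0, 2]

# map each predicted class index to its label name
labels = [id2label[p] for p in predictions]
print(labels)  # ['B-QUESTION', 'I-QUESTION', 'O', 'B-ANSWER']
```

Tokens belonging to padding positions (label `-100` during training) are typically filtered out before this step using the processor's attention mask.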
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ca1e560127891811eea0e71b11da7c15d4e379fd965a1ed32fe9d9adce66a35b
 size 1476349244
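The `model.safetensors` entry above is a Git LFS pointer file, not the weights themselves: the `oid` is simply the SHA-256 of the stored blob. A small sketch of how such a pointer is derived, shown on a toy payload rather than the real 1.4 GB file:

```python
import hashlib

# Toy stand-in for the actual file content tracked by Git LFS.
payload = b"example file content"

# The LFS "oid" is the SHA-256 digest of the raw content.
oid = hashlib.sha256(payload).hexdigest()

# Assemble a pointer in the same three-line format seen in the diff.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{oid}\n"
    f"size {len(payload)}\n"
)
print(pointer)
```

This is why the diff shows only one changed line per checkpoint update: the size stayed identical while the content (and therefore the digest) changed.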
preprocessor_config.json
ADDED
@@ -0,0 +1,25 @@
+{
+  "_valid_processor_keys": [
+    "images",
+    "do_resize",
+    "size",
+    "resample",
+    "apply_ocr",
+    "ocr_lang",
+    "tesseract_config",
+    "return_tensors",
+    "data_format",
+    "input_data_format"
+  ],
+  "apply_ocr": true,
+  "do_resize": true,
+  "image_processor_type": "LayoutLMv2FeatureExtractor",
+  "ocr_lang": "fra",
+  "processor_class": "LayoutXLMProcessor",
+  "resample": 2,
+  "size": {
+    "height": 224,
+    "width": 224
+  },
+  "tesseract_config": ""
+}
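The newly added `preprocessor_config.json` is what ties the model card's Tesseract requirement to the processor: `apply_ocr: true` makes the feature extractor run OCR itself, and `ocr_lang: "fra"` selects the French language pack. A minimal sketch checking those fields on a subset of the config above:

```python
import json

# Subset of the committed preprocessor_config.json, inlined for illustration.
config_text = """
{
  "apply_ocr": true,
  "ocr_lang": "fra",
  "image_processor_type": "LayoutLMv2FeatureExtractor",
  "size": {"height": 224, "width": 224}
}
"""
config = json.loads(config_text)

assert config["apply_ocr"] is True   # processor OCRs the image itself
assert config["ocr_lang"] == "fra"   # requires tesseract-ocr-fra to be installed
print(config["size"])  # {'height': 224, 'width': 224}
```

Because OCR happens inside the processor, callers pass only the raw image; words and bounding boxes do not need to be supplied separately.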
runs/May16_06-43-43_d6750c3beba1/events.out.tfevents.1715841825.d6750c3beba1.596.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b17df32fefdde1cb563c2dc77942a96d3d399a42f87eb40dc4de482cb172548a
+size 7332
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ffb37461c391f096759f4a9bbbc329da0f36952f88bab061fcf84940c022e98
+size 17082999
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:46935ac7195f9c6f196a629b7ffd5f74e49c4ffb2e53e7829be94e22fa9a07c3
 size 4984