Update README.md
README.md
@@ -38,7 +38,7 @@ inference:
 - summarizing text via arXiv models will typically make the summary sound so needlessly complicated you might as well have read the original text in that time anyway.
 - this model is one attempt to help with that
 - this model has been trained for 7 epochs total and is closer to finished.
-  - Will continue to improve based on any result findings/feedback.
+  - Will continue to improve (slowly, now that it has been trained on the dataset for 70k steps) based on any result findings/feedback.
 - the starting checkpoint was `google/bigbird-pegasus-large-bigpatent`

 # example usage
@@ -52,12 +52,12 @@ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
 from transformers import pipeline

 _model = AutoModelForSeq2SeqLM.from_pretrained(
-    "pszemraj/bigbird-pegasus-large-booksum
+    "pszemraj/bigbird-pegasus-large-K-booksum",
     low_cpu_mem_usage=True,
 )

 _tokenizer = AutoTokenizer.from_pretrained(
-    "pszemraj/bigbird-pegasus-large-booksum
+    "pszemraj/bigbird-pegasus-large-K-booksum",
 )
