Generation is great, but begins to miss spaces between words
Generation with this model is very good: it stays on topic, responds well, and follows directions. However, in longer generations, the output starts to lose spaces between words:
First example
Prompt: Here is an excerpt of a short story about an immortal cat.
Generated text:
It's called "The Immortality Potion".
"Once upona time, there was amysterious old man who lived in the woods near town with his belovedcat named Tomasina (Tomy forshort). One day he came across some magical ingredients that would granthim eternal life if mixed correctly and consumed by him or another creaturewithin one hour - it just so happened to be morning when this all occurred!He quickly made uphis mind on what needed todo: He gathered together thesame mystic items which were hidden aroundthe forest then set off towards home whereTomy awaitied patiently at their usual spot underneath alarge oak tree nearby."
Second example
Generated text:
Sure thing - let’ s start atthe beginning;My name iss Aristotleand Imfrom Ancient Greece.From birth,my parentsrecognizedsome specialqualitiesinme —an insatiable curiositytowardslearningnewthins—so they enrolledmeto attendthenextbestschoolinthearea ;Platos AcademyofAcademicExcellencethatwaslocatedinthes
Sometimes the generation is correct and words have spaces between them, even for long generations. Other times, there are no spaces to be found at all.
I'm assuming I'm missing something in how I've set this up. I'm currently using AutoTokenizer and AutoModelForCausalLM with the files from this repo. These have worked properly on llama models and some other alpaca-lora-merged models, so I'm unsure what the issue is. So far, only this model seems to have issues with the spaces, but its generation output is far better than any of the others.
Any ideas as to what could cause this?
I figured it out myself - it appears to be an issue with the generation params, which were good for other models but ended up working poorly with this one. I got very good results and no missing spaces using:
temperature: 0.2,
top_p: 0.75,
top_k: 40,
num_beams: 1,
no_repeat_ngram_size: 3,
repetition_penalty: 1.2,
encoder_repetition_penalty: 1.0,
typical_p: 1.0,
length_penalty: 1.2,
do_sample: true,
Can you provide some code examples that use this model to do generation?
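A minimal sketch of plugging the parameters above into `model.generate()` with `transformers`; note the model path and prompt here are placeholders (not from this thread), so point them at the files from this repo:

```python
# Sampling settings from the post above, as keyword arguments for
# transformers' model.generate(). do_sample=True is required for
# temperature / top_p / top_k to take effect.
GENERATION_KWARGS = {
    "temperature": 0.2,
    "top_p": 0.75,
    "top_k": 40,
    "num_beams": 1,
    "no_repeat_ngram_size": 3,
    "repetition_penalty": 1.2,
    "encoder_repetition_penalty": 1.0,
    "typical_p": 1.0,
    "length_penalty": 1.2,
    "do_sample": True,
}


def generate(prompt: str, model_id: str = "path/to/this-repo") -> str:
    """Generate a completion for `prompt`. `model_id` is a placeholder;
    replace it with the local path or repo id of this model."""
    # Imported lazily so the settings above can be inspected without
    # pulling in torch/transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=256, **GENERATION_KWARGS
    )
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `generate("Here is an excerpt of a short story about an immortal cat.")` reproduces the setup described in the thread with the fixed params.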