Variable answers are predicted for the same prompt
#79 opened by sjainlucky
I am trying to extract information from a document, but whenever I run the same prompt on Llama 3.1 multiple times, I get different results each time. How can this be handled to reduce the variability in the model's answers?
I have set temperature to 0 and top_p to 0.99, but the answers are still unpredictable.
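For context, here is roughly how I am calling the model — a minimal sketch assuming the Hugging Face transformers API; the model ID, prompt, and seed are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(42)  # fix the RNG state for any remaining stochastic ops

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Extract the invoice number from: ...", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=False,    # greedy decoding; temperature/top_p are ignored in this mode
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even with greedy decoding and a fixed seed, I still see run-to-run differences in the output.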