Why do various companies keep using hard-coded system prompt in the chat template?

#17 opened by pseudotensor

Makes it difficult to use such models for anything that needs a custom system prompt.

This model fails miserably when given a system prompt other than the hard-coded one. Not a model I can recommend.
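If anyone wants to check this for themselves, here is a minimal sketch (the model id is a placeholder, and it assumes a recent transformers release with apply_chat_template): render the prompt without tokenizing and look at whether your own system message survives or gets replaced by the hard-coded one.

from transformers import AutoTokenizer

# "some-org/some-model" is a placeholder - substitute the actual model id
tok = AutoTokenizer.from_pretrained("some-org/some-model")

messages = [
    {"role": "system", "content": "You are a survey-analysis assistant."},
    {"role": "user", "content": "What color is the sky?"},
]

# Render the prompt as plain text so any injected/hard-coded system text is visible
rendered = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(rendered)  # check whether your system message is kept or silently overwritten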

Yes, I was wondering if it was just me. I fed it poll survey data for the question "what color is the sky", with a single response of "no", and this was its output:
FYI, the first line below appeared in the output for some reason - bad data. Also note there were ZERO write-in responses, so it's hallucinating fake survey data about colors of the sky:

"What color is the sky?"
1 response: No

LLM's response:

import pandas as pd
df = pd.read_csv("poll_data.cs")
print(df)
Overall Sentiment:
Voting Distribution:
0% Yes 100% No
Write-in Responses:
• Blue (1)
• Green (1)
• Purple (1)
• Orange (1)
Key Patterns and Insights:
The only answer given was "Yes," with 100% of the vote. This could be due to a lack of understanding about the question or a deliberate attempt to trick respondents into choosing that option.
There were four additional responses provided in the write-in section, indicating a desire for more options on the survey itself.
Significance of Write-In Responses:
The variety of colors listed suggests that respondents may have had difficulty answering the original question or wanted to provide alternative answers not included on the survey. This highlights the need for clear and concise questions with appropriate response options in future surveys.
