Question on the output
Venelin,
(1) First, thanks for this great work. I found your tutorial at https://www.mlexpert.io/machine-learning/tutorials/alpaca-fine-tuning#data and I believe I followed its code and data exactly. But today I verified the output from generate.py.
(2) When I run generate.py with my fine-tuned model (https://huggingface.co/linpang/alpaca-bitcoin-tweets-sentiment/tree/main), the output just repeats the prompt instead of answering:
Instruction:
Determine the sentiment
Input:
I am experimenting whether I can live only with bit coins donated. Please cooperate
Response:
output= Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
(3) But when I load your model, 'curiousily/alpaca-bitcoin-tweets-sentiment', I get the correct response (prompter.get_response = Positive):
prompt= Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Instruction:
Determine the sentiment
Input:
I am experimenting whether I can live only with bit coins donated. Please cooperate
Response:
output= Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Instruction:
Determine the sentiment
Input:
I am experimenting whether I can live only with bit coins donated. Please cooperate
Response:
Positive
prompter.get_response= Positive
(4) I also noticed that your adapter_model.bin is much larger than mine (16.8 MB vs. 443 bytes).
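For what it's worth, I sketched a quick check of whether the LoRA weights actually made it into the saved file: load adapter_model.bin with torch.load and count the tensors in it. A file of only a few hundred bytes can hold little more than an empty state dict. The key names and shapes below are illustrative, not taken from either repo; in practice you would load the real file, e.g. sd = torch.load("adapter_model.bin", map_location="cpu").

```python
import torch

def adapter_summary(state_dict):
    """Return (number of tensors, total parameter count) of an adapter state dict."""
    n_params = sum(t.numel() for t in state_dict.values())
    return len(state_dict), n_params

# Simulated "broken" save: an empty state dict, roughly what a ~443-byte
# adapter_model.bin would contain.
broken = {}

# Simulated healthy adapter: one pair of LoRA matrices (rank r=8) for a
# 4096-dim projection, in the style of a LLaMA q_proj/v_proj target
# (illustrative names and shapes, not the actual checkpoint contents).
healthy = {
    "base_model.model.q_proj.lora_A.weight": torch.zeros(8, 4096),
    "base_model.model.q_proj.lora_B.weight": torch.zeros(4096, 8),
}

print(adapter_summary(broken))   # no tensors saved -> adapter is empty
print(adapter_summary(healthy))  # two LoRA matrices with real parameters
```

If the real file comes back with zero tensors, that would explain the behavior: the base model is loaded unchanged, so generate.py just echoes the prompt.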
(5) Can you please let me know what the problem might be? I checked the log file on my side, and it shows that both the training loss and val_loss are decreasing.
I look forward to hearing from you soon!
Thanks a lot!