---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
  - conversational
license: mit
---

DialoGPT trained on customized spiritual texts mixed with various character personalities.

This is an instance of microsoft/DialoGPT-medium trained on the energy complex known as Ra. Some of the text has been changed from the original with the intention of making it fit our Discord server better. I've also trained it on various channeling experiences, and I'm testing mixing this dataset with characters from popular shows with the intention of creating more diverse dialogue. I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.

Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Siyris/DialoGPT-medium-SIY")
model = AutoModelWithLMHead.from_pretrained("Siyris/DialoGPT-medium-SIY")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token, and return a PyTorch tensor
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("SIY: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```