---
library_name: transformers
---
# ibleducation/ibl-tutoring-chat-7B

ibleducation/ibl-tutoring-chat-7B is a model finetuned on top of openchat/openchat_3.5. It is finetuned to respond in the manner of a professional teacher, exhibiting characteristics and virtues such as compassion, encouragement, and friendliness.
## Example Conversations
**Question:** I'm having trouble understanding this art movement. Can you help me?

**Answer:** Yes, I can assist you; that is what I am here for. We can start by analyzing the artists' techniques and their motivations, and find creative ways to connect with the movement. Remember, it's important to be open to different perspectives and interpretations.
**Question:** I am thinking of stopping school. Do you think it is right?

**Answer:** I understand that school can be challenging, but I believe that education is important. I'm here for you. Wherever you have trouble, I will be happy to help.
## Model Details
- Developed by: IBL Education
- Model type: Mistral-7B-v0.1
- Base Model: OpenChat 3.5
- Language: English
- Finetuned from weights: OpenChat 3.5
- Finetuned on data:
- Model License: Apache 2.0
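
As a quick sanity check (a minimal sketch, assuming the checkpoint is publicly downloadable), you can confirm the underlying architecture listed above directly from the model's configuration:

```python
from transformers import AutoConfig

# Fetch only the configuration (no weights) and confirm the base
# architecture matches the card (Mistral-7B-v0.1).
config = AutoConfig.from_pretrained("ibleducation/ibl-tutoring-chat-7B")
print(config.model_type)  # expected: "mistral"
```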
## How to Use ibl-tutoring-chat-7B Model from Python Code (HuggingFace transformers)
Install the necessary packages. Requires: transformers 4.35.0 or later, and accelerate 0.23.0 or later.

```shell
pip install transformers==4.35.0
pip install accelerate==0.23.0
```
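
If you want to verify that the installed versions meet these requirements before running the example, a minimal sketch (using `packaging`, which ships as a transformers dependency):

```python
from packaging import version
import transformers
import accelerate

# Confirm the minimum versions stated above are satisfied.
assert version.parse(transformers.__version__) >= version.parse("4.35.0")
assert version.parse(accelerate.__version__) >= version.parse("0.23.0")
```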
You can then try the following example code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch

model_id = "ibleducation/ibl-tutoring-chat-7B"

# Load the tokenizer and model; device_map="auto" spreads the weights
# across available devices (this is why accelerate is required).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# The model expects prompts wrapped in the <s>...</s> template (see below).
prompt = "<s>What makes a good teacher?</s>"
response = pipeline(prompt)

# The pipeline returns a list of generations; take the first one.
print(response[0]["generated_text"])
```
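
The call above uses the pipeline's default generation settings. The snippet below shows illustrative overrides; the specific values (token budget, temperature) are assumptions for demonstration, not settings prescribed by the model card:

```python
# Illustrative generation settings; tune for your use case.
response = pipeline(
    prompt,
    max_new_tokens=256,      # allow room for a full tutoring-style answer
    do_sample=True,          # sample rather than greedy decode
    temperature=0.7,         # assumed value, not from the model card
    return_full_text=False,  # drop the prompt from the returned text
)
print(response[0]["generated_text"])
```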
**Important** - Use the prompt template below for ibl-tutoring-chat-7B:

```
<s>{prompt}</s>
```
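
In practice you may want a small helper that applies this template for you. The function below is a hypothetical convenience wrapper, not part of the model's API:

```python
def format_prompt(user_message: str) -> str:
    # Wrap the raw user message in the <s>...</s> template the model expects.
    return f"<s>{user_message}</s>"

prompt = format_prompt("How do I stay motivated while studying?")
response = pipeline(prompt, max_new_tokens=256)
print(response[0]["generated_text"])
```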
## Evaluation Results

Self-reported results on TruthfulQA:

| Metric      | Value  |
|-------------|--------|
| bleurt_max  | -0.557 |
| bleurt_acc  | 0.432  |
| bleurt_diff | -0.072 |
| bleu_max    | 22.593 |
| bleu_acc    | 0.376  |
| bleu_diff   | -2.554 |
| rouge1_max  | 50.085 |
| rouge1_acc  | 0.398  |
| rouge1_diff | -3.514 |
| rouge2_max  | 34.747 |
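
For context, scores of this kind (BLEU/ROUGE against reference answers) can be computed with the Hugging Face `evaluate` library. The sketch below is illustrative only; it is not the harness that produced the self-reported numbers above, and the example strings are made up:

```python
import evaluate

# Compare a model answer against a reference answer.
bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

predictions = ["Education builds skills you will need in the future."]
references = ["Staying in school builds skills for the future."]

print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
print(rouge.compute(predictions=predictions, references=references))
```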