---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

ibleducation/ibl-tutoring-neural-chat-7B

ibleducation/ibl-tutoring-neural-chat-7B is finetuned from Intel/neural-chat-7b-v1-1. It is tuned to respond the way a professional teacher would, exhibiting virtues such as compassion, encouragement, and friendliness.

Benchmarks

Task           Version  Metric    Value     Stderr
hellaswag      0        acc       0.5355    ± 0.0050
                        acc_norm  0.6977    ± 0.0046
truthfulqa_mc  1        mc1       0.2876    ± 0.0158
                        mc2       0.4555    ± 0.0158

Model Details

How to Use the ibl-tutoring-neural-chat-7B Model from Python Code (HuggingFace transformers)

Install the necessary packages

Requires transformers 4.31.0 and accelerate 0.23.0 or later.

pip install transformers==4.31.0
pip install accelerate==0.23.0
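
To confirm that the installed versions match the requirements above, you can check them from Python (a quick sanity check, not part of the original instructions):

import transformers
import accelerate

# Both packages expose their version strings; compare against the
# minimum versions listed above.
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)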

You can then try the following example code:

from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers

model_id = "ibleducation/ibl-tutoring-neural-chat-7B"

# Load the tokenizer and the model; device_map="auto" places the weights
# on the available GPU(s) or CPU via accelerate.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

# Build a text-generation pipeline around the loaded model and tokenizer.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Wrap the question in the model's prompt template (see below).
prompt = "<s>What makes a good teacher?</s>"

# The pipeline returns a list with one dict per generated sequence.
response = pipeline(prompt)
print(response[0]["generated_text"])
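
The pipeline forwards generation keyword arguments to the underlying generate call, so decoding can be tuned if needed. A minimal sketch; the parameter values below are illustrative, not settings recommended by the model authors:

response = pipeline(
    prompt,
    max_new_tokens=256,      # limit the length of the reply
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,         # illustrative value, not an official recommendation
    return_full_text=False,  # return only the newly generated text
)
print(response[0]["generated_text"])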

Important - Use the prompt template below for ibl-tutoring-neural-chat-7B:

<s>{prompt}</s>
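
For arbitrary questions, wrapping the text programmatically keeps the template consistent. A small illustration; format_prompt is a hypothetical helper, not part of the model repository:

def format_prompt(question: str) -> str:
    # Wrap the raw question in the <s>...</s> template expected by the model.
    return f"<s>{question}</s>"

prompt = format_prompt("How can I stay motivated while studying?")
# -> "<s>How can I stay motivated while studying?</s>"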