
# Bhagavad Gita Chat with LLaMA 2 7b

## Description

This project leverages the powerful LLaMA 2 7b language model to create an interactive chat experience based on the Bhagavad Gita. It aims to provide insightful conversations and interpretations of this ancient scripture, making it more accessible and engaging for users worldwide.

## Prompt

Start the conversation by saying "Hello Krishna, " followed by your question.

## Try it on a free Colab GPU using 4-bit quantization!

We will load the model in lower-bit precision, which allows it to run on a free Colab GPU. Make sure a GPU is enabled under the runtime settings, then install the required libraries:

```shell
!pip install transformers accelerate bitsandbytes
```
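Before loading the model, you may want to confirm that the Colab runtime actually has a GPU attached (this check is a suggestion, not part of the original instructions; `torch` is installed alongside `transformers`):

```python
import torch

# Check whether a CUDA GPU is visible; without one, 4-bit loading
# via bitsandbytes will not work.
has_gpu = torch.cuda.is_available()
print("GPU available:", has_gpu)
```

If this prints `False`, switch the runtime type to GPU under Runtime settings before continuing.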

Then use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Suru/Bhagvad-Gita-LLM"  # model from the Hugging Face Hub

# Load the model in 4-bit precision so it fits on a free Colab GPU
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# Wrap the question in the LLaMA 2 instruction format
prompt = "Hello Krishna, how can I live a fearless life?"
prompt_format = f"<s>[INST] {prompt} [/INST]"

model_inputs = tokenizer(prompt_format, return_tensors="pt").to("cuda:0")
output = model.generate(**model_inputs, max_new_tokens=1000)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

You can change the question in the prompt to ask whatever you like. Check out the article here:
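If you plan to ask several questions, the prompt construction above can be wrapped in a small helper (the `build_prompt` name is our own, not part of the model card):

```python
def build_prompt(question: str) -> str:
    """Wrap a question in the LLaMA 2 instruction format used above,
    adding the "Hello Krishna," greeting if it is missing."""
    if not question.strip().lower().startswith("hello krishna"):
        question = f"Hello Krishna, {question}"
    return f"<s>[INST] {question} [/INST]"

print(build_prompt("what is the nature of duty?"))
# -> <s>[INST] Hello Krishna, what is the nature of duty? [/INST]
```

The returned string can be passed straight to the tokenizer in place of `prompt_format` in the snippet above.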

## Features

  • Interactive Chat Interface: Engage in conversations about the Bhagavad Gita’s teachings, characters, and philosophical concepts.
  • AI-Powered Insights: Utilize the LLaMA 2 7b model for deep and meaningful interpretations of the text.
  • Custom Queries: Ask specific questions about verses, chapters, or themes in the Bhagavad Gita.

## Dataset source

  • Repo: Suru/gita_alpaca_format

License: apache-2.0
