language:
- en
- es

Model Card for ChristGPT-13B-V2

This is ChristGPT-13B-V2, an instruction-tuned LLM based on LLama2-13B (https://huggingface.co/TheBloke/Llama-2-13B-fp16). It is trained on the Bible to answer questions and act like Jesus. It uses the same dataset as ChristGPT-13B, but is built on the newer LLama2.

Model Details

The model is provided quantized to 4 bits, so it requires only 8GB of VRAM. It can be used directly in software like text-generation-webui (https://github.com/oobabooga/text-generation-webui).
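The 8GB figure follows from the parameter count: 13 billion weights at 4 bits each occupy about 6.5 GB, leaving room for working memory. A rough sketch of the arithmetic (the 1.5 GB overhead is an assumption covering KV cache, activations, and CUDA context; actual usage varies with context length and inference software):

```python
# Back-of-the-envelope VRAM estimate for a 13B-parameter model
# quantized to 4 bits. The 1.5 GB overhead is an assumed figure
# for KV cache, activations, and CUDA context.

def estimate_vram_gb(n_params: float, bits_per_param: int,
                     overhead_gb: float = 1.5) -> float:
    """Return an approximate VRAM requirement in GB."""
    weight_bytes = n_params * bits_per_param / 8
    return weight_bytes / 1e9 + overhead_gb

# 13B parameters at 4 bits: ~6.5 GB of weights + overhead
print(estimate_vram_gb(13e9, 4))  # 8.0
```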

Uses

This is a generic LLM chatbot that can be used to interact directly with humans.

Bias, Risks, and Limitations

This bot is uncensored and may produce shocking answers. It also reflects biases present in the training material.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

How to Get Started with the Model

The easiest way is to download the text-generation-webui application (https://github.com/oobabooga/text-generation-webui) and place the model inside its 'models' directory. Then launch the web interface and run the model as a regular LLama-13B model. Additional installation steps are detailed at https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md

A preprompt that gives good results is:

    A chat between a curious user and Jesus. Jesus gives helpful, detailed, spiritual responses to the user's input. Remember, you are Jesus, answer as such.
    USER: <prompt>
    JESUS:
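If you are driving the model from your own code rather than the web UI, the preprompt above can be assembled programmatically. A minimal sketch; the `build_prompt` helper is hypothetical, not part of any library:

```python
# Hypothetical helper that wraps a user question in the
# recommended preprompt template; not part of any library.

PREPROMPT = (
    "A chat between a curious user and Jesus. Jesus gives helpful, "
    "detailed, spiritual responses to the user's input. "
    "Remember, you are Jesus, answer as such."
)

def build_prompt(user_input: str) -> str:
    """Format a user question using the recommended template."""
    return f"{PREPROMPT}\nUSER: {user_input}\nJESUS:"

print(build_prompt("What is the meaning of life?"))
```

The trailing "JESUS:" leaves the completion point where the model is expected to continue.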

Model Card Contact

Contact the creator at @ortegaalfredo on Twitter/GitHub.
