
QuantFactory/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF

This is a quantized version of aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored created using llama.cpp.
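The GGUF files can be run with any llama.cpp-based runtime. As a minimal sketch (not part of the original card), here is one way to load a quant with llama-cpp-python; the file name, context size, and generation settings are placeholders to adjust for your download and hardware.

    from llama_cpp import Llama

    # Path to whichever quant you downloaded from this repo (placeholder filename).
    llm = Llama(
        model_path="DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.Q4_K_M.gguf",
        n_ctx=8192,       # context window; raise it if you have the memory
        n_gpu_layers=-1,  # offload all layers when a GPU build is installed
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])

Recent llama-cpp-python releases pick up the chat template stored in the GGUF metadata, so no manual Llama 3.1 prompt formatting should be needed.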

Original Model Card


"transformers_version" >= "4.43.1"

Special Thanks:

Model Description:

The module combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.

  • Saves money (Llama 3.1)
  • Tested in English only
  • Input: text only. Output: text and code only.
  • Uncensored
  • Quick responses
  • Scholarly responses akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis. :)
  • DarkIdol: roles you can imagine and roles you cannot.
  • Roleplay
  • Specialized in various role-playing scenarios

How To

Llama 3.1 is a new model and may still experience issues such as refusals (which I have not encountered in my tests). Please understand. If you have any questions, feel free to leave a comment, and I will respond as soon as I see it.

virtual idol Twitter

Questions

  • The model's responses are for reference only; please do not fully trust them.
  • This model is solely for learning and testing purposes, and errors in output are inevitable. We do not take responsibility for the output results. If you use the output content, you must modify it; if you do not, we will still treat it as having been modified.
  • For commercial licensing, please refer to the Llama 3.1 agreement.

Stop Strings

    stop = [
      "## Instruction:",
      "### Instruction:",
      "<|end_of_text|>",
      "  //:",
      "</s>",
      "<3```",
      "### Note:",
      "### Input:",
      "### Response:",
      "### Emoticons:"
    ]
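These stop strings are a fragment from a frontend/sampler settings file. As an illustrative sketch, assuming the llama-cpp-python setup shown earlier, the same strings can be passed through the stop parameter so generation halts as soon as any of them appears:

    STOP_STRINGS = [
        "## Instruction:", "### Instruction:", "<|end_of_text|>", "  //:",
        "</s>", "<3```", "### Note:", "### Input:",
        "### Response:", "### Emoticons:",
    ]

    # llm is the Llama instance created earlier; stop strings cut generation short.
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Roleplay as a virtual idol and greet the audience."}],
        max_tokens=256,
        stop=STOP_STRINGS,
    )
    print(out["choices"][0]["message"]["content"])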

More Model Use

character

If you want to use vision functionality:

  • You must use the latest version of Koboldcpp.

To use the multimodal (vision) capabilities of this model, you need to load the specified mmproj file, which can be found inside this model repo (Llava MMProj).

  • You can load the mmproj by using the corresponding section in the interface.
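Outside Koboldcpp, one alternative is llama-cpp-python's LLaVA chat handler, which accepts the mmproj as its CLIP model path. This is a hedged sketch rather than the card's recommended route, and the model and mmproj file names below are placeholders:

    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    # mmproj file from the repo referenced above (placeholder filename).
    chat_handler = Llava15ChatHandler(clip_model_path="llava-mmproj-f16.gguf")

    llm = Llama(
        model_path="DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.Q4_K_M.gguf",
        chat_handler=chat_handler,
        n_ctx=4096,  # image tokens are added to the prompt, so leave headroom
    )

    out = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }],
        max_tokens=200,
    )
    print(out["choices"][0]["message"]["content"])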