
This is a 2-bit quantization of @migtissera's Tess-M-34b-v1.4, produced with QuIP# (https://cornell-relaxml.github.io/quip-sharp/) using a Hessian context length of 8k.

"Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-M-v1.4 was trained on the Yi-34B-200K base."

Perplexity on the dev set, as reported by QuIP#, was slightly below 7, compared to slightly below 6 for the original model. Inference with the model is somewhat slow, but given the long context length it should be one of the best-performing few-shot models for consumer and data-science GPUs, especially when the prompts are long and the answers relatively short.
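As a rough illustration of why long prompts with short answers suit this setup, you can budget tokens for few-shot prompting. The helper below is a sketch; all token counts are assumptions for illustration, not measurements.

```python
def max_shots(context_len: int, tokens_per_example: int,
              answer_budget: int, instruction_tokens: int = 200) -> int:
    """Estimate how many few-shot examples fit in a fixed context window.

    Reserves room for the instruction and the (short) answer, then
    divides the remaining budget by the per-example token cost.
    """
    usable = context_len - answer_budget - instruction_tokens
    return max(usable // tokens_per_example, 0)

# With the 200K-token window of the Yi-34B-200K base, assumed
# ~400-token examples, and a 100-token answer budget:
print(max_shots(200_000, 400, 100))  # → 499
```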

Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER: 
ASSISTANT:
```
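Assembled in code, the format above looks like this. A minimal sketch; the helper name and the example system/user strings are illustrative, not part of the model's API:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a Tess-style prompt from the SYSTEM/USER/ASSISTANT template."""
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

prompt = build_prompt("You are a helpful assistant.",
                      "Summarize QuIP# in one sentence.")
print(prompt)
```

The trailing `ASSISTANT:` with no newline after it leaves the model to continue with its answer.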


I was able to use this model in the widely known textgen-webui. For installation I suggest the following steps:

  1. Put the current quip# library folder into the repositories folder of textgen-webui.
  2. Install the requirements of quip#.
  3. Compile and install the quiptools CUDA library:

```
pip install fast-hadamard-transform glog==0.3.1 primefac==2.0.12
cd repositories/quip-sharp/quiptools
python setup.py install --force
```

  4. Reinstall the requirements of textgen-webui.
  5. Load the model with the quip# integration of textgen-webui.

You can also use this repo's library in your own scripts. Within the quip# folder, after installing the library, run:

```
python interactive_gen.py --hf_path path_to_the_2bitmodel --max_length 500
```

License

The Yi series models are fully open for academic research and free for commercial use with permission via application. All usage must adhere to the Model License Agreement 2.0. To apply for the official commercial license, please contact us (yi@01.ai).
