DeciLM-7B-instruct GGUF checkpoints

This repository includes Deci/DeciLM-7B-instruct checkpoints in the GGUF format.
DeciLM demonstrates strong performance on commodity CPUs when utilizing the llama.cpp codebase.
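After downloading a checkpoint, you can sanity-check that the file really is in GGUF format: every GGUF file begins with the four ASCII bytes `GGUF`. A minimal sketch (the helper names `looks_like_gguf` and `check_file` are illustrative, not part of any library):

```python
def looks_like_gguf(header: bytes) -> bool:
    """Return True if the leading bytes carry the GGUF magic ('GGUF' in ASCII)."""
    return header[:4] == b"GGUF"

def check_file(path: str) -> bool:
    """Read the first four bytes of a checkpoint and verify the magic."""
    with open(path, "rb") as f:
        return looks_like_gguf(f.read(4))
```

A truncated or mislabeled download will fail this check immediately, before you spend time loading it into llama.cpp.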

1. Clone and build llama.cpp (1 minute)

git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j

2. Download the GGUF checkpoint

  • Navigate to the 'Files' section
  • Click on 'decilm-7b-uniform-gqa-q8_0.gguf'
  • Click on the 'Download' button
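The manual clicks above can also be scripted. A minimal sketch that builds the file's direct-download URL from the standard Hugging Face Hub `resolve` pattern (the `hub_file_url` helper is illustrative; repo id and filename are taken from this card):

```python
def hub_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file hosted on the Hugging Face Hub."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hub_file_url("Deci/DeciLM-7B-instruct-GGUF", "decilm-7b-uniform-gqa-q8_0.gguf")
```

Alternatively, `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` fetches the file and caches it locally.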

3. Generate outputs

  • Use the chat template below and feed a prompt to DeciLM-7B-instruct. This example uses the INT8-quantized (q8_0) GGUF checkpoint.

    ./main -m ~/Downloads/decilm-7b-uniform-gqa-q8_0.gguf -p """
    ### System:
    You are an AI assistant that follows instructions exceptionally well. Be as helpful as possible.
    ### User:
    How do I make the most delicious pancakes the world has ever tasted?
    ### Assistant:
    """
    
  • Output:

    ### System:
    You are an AI assistant that follows instructions exceptionally well. Be as helpful as possible.
    ### User:
    How do I make the most delicious pancakes the world has ever tasted?
    ### Assistant:
     To make the most delicious pancakes (pancakes) the world has ever tasted, here are some steps you can follow:
    
    
    Ingreat ingredients:
    - 1 cup all-purpose flour (maida flour)- 1/4 cup sugar (granhana or raw white)
    - 2 egg yolks
    - 1/2 cup milk-2 tabls melted butter (or oil)-2 tsp vanila
    - 2 tsps baking powder
    - salt (a little)
    
    Mixing:
    1. Combine the flour, sugar and salt in a large mixing bowl. Add egg yolks, butter or oil, and vanilla to the mixture, stirring well.
    2. Pour in milk gradually while blending (with mixer).
    3. Beat with the batter till smooth, then add baking powder.
    
    Cooking:
    4. Heat a nonstick pan with little butter or oil, pour 1/4 cup of batter (or 5 inch) for each pancake. Cook over medium heat, until bubble rises on top surface. Flip and cook the other side till golden.  [end of text]
    
    llama_print_timings:        load time =     325.09 ms
    llama_print_timings:      sample time =      19.99 ms /   249 runs   (    0.08 ms per token, 12454.98 tokens per second)
    llama_print_timings: prompt eval time =     239.66 ms /    52 tokens (    4.61 ms per token,   216.97 tokens per second)
    llama_print_timings:        eval time =   11227.83 ms /   248 runs   (   45.27 ms per token,    22.09 tokens per second)
    llama_print_timings:       total time =   11517.22 ms
    ggml_metal_free: deallocating
    Log end
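The chat template used above can be assembled programmatically instead of typed by hand, which helps avoid formatting mistakes when scripting many prompts. A minimal sketch (the `format_prompt` helper is illustrative, not part of llama.cpp or any library):

```python
def format_prompt(system: str, user: str) -> str:
    """Assemble the '### System / ### User / ### Assistant' chat template
    expected by DeciLM-7B-instruct, as shown in the example above."""
    return (
        "### System:\n"
        f"{system}\n"
        "### User:\n"
        f"{user}\n"
        "### Assistant:\n"
    )
```

The resulting string can then be passed to `./main` via `-p`, for example through `subprocess.run` or by writing it to a file and using `-f`.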
    
Format: GGUF
Model size: 7.11B params
Architecture: llama
Quantizations: 8-bit, 16-bit, 32-bit
