
tog/TinyLlama-1.1B-alpaca-chat-v1.5-GGUF

Quantized GGUF model files for TinyLlama-1.1B-alpaca-chat-v1.5 from tog

Original Model Card:

This Model

This is a chat model fine-tuned on top of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T. The dataset used for fine-tuning is tatsu-lab/stanford_alpaca.

The model expects the standard Alpaca prompt template:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
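The template above can be wrapped in a small helper so prompts are formatted consistently. This is a minimal sketch; the `build_prompt` function name is illustrative and not part of the model's API.

```python
def build_prompt(instruction: str) -> str:
    """Format an instruction into the Alpaca prompt template shown above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(build_prompt("What is a large language model? Be concise."))
```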

You can use it with the transformers library:

from transformers import AutoTokenizer
import transformers
import torch

model = "tog/TinyLlama-1.1B-alpaca-chat-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto")

sequences = pipeline(
    '### Instruction:\nWhat is a large language model? Be concise.\n\n### Response:\n',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200)

for seq in sequences:
    print(f"{seq['generated_text']}")

You should get something along these lines:

Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Result: ### Instruction:
What is a large language model? Be concise.

### Response:
A large language model is a type of natural language understanding model that can learn to accurately recognize and interpret text data by understanding the context of words. Languages used for text understanding are typically trained on a corpus of text data.
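Since this repository ships quantized GGUF files, they can also be run on CPU with llama-cpp-python instead of transformers. The sketch below assumes a hypothetical local filename for one of the quantized files; substitute the actual file you downloaded from this repository.

```python
import os

# Assumption: the 4-bit quantized file from this repo, downloaded locally.
# Replace with the real filename of the GGUF file you fetched.
MODEL_PATH = "tinyllama-1.1b-alpaca-chat-v1.5.q4_k_m.gguf"

# Same Alpaca prompt format the transformers example uses.
PROMPT = (
    "### Instruction:\n"
    "What is a large language model? Be concise.\n\n"
    "### Response:\n"
)

if os.path.exists(MODEL_PATH):
    # llama-cpp-python loads GGUF files directly, no transformers needed.
    from llama_cpp import Llama

    llm = Llama(model_path=MODEL_PATH, n_ctx=2048)
    out = llm(PROMPT, max_tokens=200, stop=["### Instruction:"])
    print(out["choices"][0]["text"])
```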
Model size: 1.1B params
Architecture: llama
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
