
TinyLlama 1.5T checkpoint fine-tuned to answer questions.

f"{'prompt'}\n{'completion'}\n<END>"

No special formatting: just the question, then a newline to begin the answer.
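As a concrete illustration, here is a minimal sketch of how a single example is assembled under this format (the question and answer strings below are placeholders, not taken from the training data):

# Sketch: assembling one example in the card's prompt/completion format.
# `question` and `answer` are placeholder strings for illustration.
question = "What is a large language model?"
answer = "A large language model is a neural network trained on large amounts of text."
example = f"{question}\n{answer}\n<END>"
print(example)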

from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model directly
tokenizer = AutoTokenizer.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")
model = AutoModelForCausalLM.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")

# Run the text-generation pipeline with the loaded model
prompt = "What is a large language model?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)
result = pipe(f"<s>{prompt}")
print(result[0]['generated_text'])

The result will contain the answer, ending with <END> on a new line.
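If you only want the answer text, the output can be trimmed at that marker. A minimal sketch (the trimming logic is illustrative, not from the original card):

# Trim the generated text at the <END> marker.
text = result[0]['generated_text']
answer = text.split("<END>")[0].strip()
print(answer)  # note: the echoed prompt is still at the start of the string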

Model size: 1.1B params (Safetensors)
Tensor types: F32 · FP16
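Since both F32 and FP16 tensor types are listed, the weights can also be loaded in half precision to reduce memory use. A minimal sketch using the standard torch_dtype argument; this usage is an assumption, not an instruction from the original card:

import torch
from transformers import AutoModelForCausalLM

# Load the weights in half precision (FP16) to roughly halve memory use.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    "Corianas/tiny-llama-miniguanaco-1.5T",
    torch_dtype=torch.float16,
)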

Model tree for Corianas/tiny-llama-miniguanaco-1.5T
Quantizations: 1 model