---
license: apache-2.0
---

[Image: An eagle soaring above a transformer robot]

Hugging Face EagleX 1.7T Model - via the HF Transformers Library

! Important Note !

The following is the HF transformers implementation of the EagleX 7B 1.7T model, meant to be used with the Hugging Face transformers library.

For the full model weights on their own, for use with other RWKV libraries, refer to here

This is not an instruct-tuned model! (soon...)
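The example below assumes torch and transformers are already installed; a minimal, unpinned setup sketch (the exact version requirements are an assumption, not something this repo specifies):

pip install torch transformers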

Running on GPU via HF transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Build the prompt in the chat/instruction format the model expects
def generate_prompt(instruction, input=""):
    # Normalize line endings and collapse double newlines in the user-supplied text
    instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
    input = input.strip().replace('\r\n','\n').replace('\n\n','\n')
    if input:
        return f"""Instruction: {instruction}

Input: {input}

Response:"""
    else:
        return f"""User: hi

Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.

User: {instruction}

Assistant:"""


# Load the model in fp16 on the first CUDA device, plus the matching tokenizer
model = AutoModelForCausalLM.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True, torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True)

text = "Tell me a fun fact"
prompt = generate_prompt(text)

inputs = tokenizer(prompt, return_tensors="pt").to(0)
# Sample up to 128 new tokens; top_p=0.3 keeps sampling conservative, top_k=0 disables top-k filtering
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))

Output:

User: hi

Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.

User: Tell me a fun fact

Assistant: Did you know that the human brain has 100 billion neurons?
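
Running on CPU via HF transformers

The recipe above assumes a CUDA GPU. Below is a minimal CPU-only sketch, reusing the generate_prompt helper from above; this is standard transformers usage rather than an officially tested path, and fp32 inference on CPU will be slow for a 7B model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same checkpoint, loaded in float32 on the CPU (no .to(0) calls)
model = AutoModelForCausalLM.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True)

inputs = tokenizer(generate_prompt("Tell me a fun fact"), return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))

To print tokens as they are generated instead of waiting for the full completion, transformers ships a TextStreamer helper (assuming a transformers version recent enough to include it):

from transformers import TextStreamer

# Streams decoded tokens to stdout as they are sampled, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0, streamer=streamer)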