|
--- |
|
license: apache-2.0 |
|
--- |
|
|
|
![An eagle soaring above a transformer robot](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10cf7fd1-6c72-4a99-84c2-794fb7bc52b3_2432x1664.png) |
|
|
|
### Huggingface EagleX 1.7T Model - via HF Transformers Library |
|
|
|
> **! Important Note !** |
|
> |
|
> The following is the Hugging Face transformers implementation of the EagleX 7B 1.7T model. **This is meant to be used with the Hugging Face transformers library.**
|
> |
|
> For the standalone model weights, for use with other RWKV libraries, see [here](https://huggingface.co/recursal/EagleX_1-7T)
|
> |
|
> This is not an instruct-tuned model! (coming soon...)
|
> |
|
> See the following for full details on this experimental model: [https://substack.recursal.ai/p/eaglex-17t-soaring-past-llama-7b](https://substack.recursal.ai/p/eaglex-17t-soaring-past-llama-7b)
|
> |
|
- [Our cloud platform - for hosting, fine-tuning, and running inference on RWKV models](https://recursal.ai)
|
- [HF Demo](https://huggingface.co/spaces/recursal/EagleX-7B-1.7T-Gradio-Demo) |
|
- [Our wiki](https://wiki.rwkv.com) |
|
- [.pth model weights](https://huggingface.co/recursal/EagleX_1-7T)
|
|
|
#### Running on GPU via HF transformers |
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_prompt(instruction, input=""):
    # Normalize line endings and collapse blank lines inside the prompt pieces.
    instruction = instruction.strip().replace('\r\n', '\n').replace('\n\n', '\n')
    input = input.strip().replace('\r\n', '\n').replace('\n\n', '\n')
    if input:
        # Instruction-with-input format.
        return f"""Instruction: {instruction}

Input: {input}

Response:"""
    else:
        # Chat-style format with a priming exchange.
        return f"""User: hi

Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.

User: {instruction}

Assistant:"""

# Load the model in float16 and move it to the first GPU.
model = AutoModelForCausalLM.from_pretrained(
    "recursal/EagleX_1-7T_HF", trust_remote_code=True, torch_dtype=torch.float16
).to(0)
tokenizer = AutoTokenizer.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True)

text = "Tell me a fun fact"
prompt = generate_prompt(text)

# Tokenize the prompt and generate up to 128 new tokens with nucleus sampling.
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=128,
    do_sample=True,
    temperature=1.0,
    top_p=0.3,
    top_k=0,
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
|
|
|
Output:
|
|
|
```shell |
|
User: hi |
|
|
|
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it. |
|
|
|
User: Tell me a fun fact |
|
|
|
Assistant: Did you know that the human brain has 100 billion neurons? |
|
``` |
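#### Running on CPU

If no GPU is available, the same snippet should also run on CPU; a minimal sketch, assuming you load in float32 (float16 matmuls are poorly supported on CPU), drop the `.to(0)` calls, and reuse `generate_prompt` from the snippet above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# CPU variant: load in float32 and keep all tensors on the default device.
model = AutoModelForCausalLM.from_pretrained(
    "recursal/EagleX_1-7T_HF", trust_remote_code=True, torch_dtype=torch.float32
)
tokenizer = AutoTokenizer.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True)

# generate_prompt is the same helper defined in the GPU example above.
inputs = tokenizer(generate_prompt("Tell me a fun fact"), return_tensors="pt")
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=128,
    do_sample=True,
    temperature=1.0,
    top_p=0.3,
    top_k=0,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Expect CPU generation to be noticeably slower than the GPU path.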
|
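For interactive use, transformers' `TextStreamer` can print tokens as they are generated instead of waiting for the full completion. A sketch, reusing `model`, `tokenizer`, and `generate_prompt` from the GPU example above:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they arrive; skip the echoed prompt
# and any special tokens in the decoded output.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer(generate_prompt("Tell me a fun fact"), return_tensors="pt").to(0)
model.generate(
    inputs["input_ids"],
    max_new_tokens=128,
    do_sample=True,
    temperature=1.0,
    top_p=0.3,
    top_k=0,
    streamer=streamer,
)
```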
|