---
license: apache-2.0
---
|
# NEO
|
|
|
[🤗 Neo-Models](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [🤗 Neo-Datasets](https://huggingface.co/collections/m-a-p/neo-datasets-66395dc55cbebc0a7767bbd5) | [GitHub](https://github.com/multimodal-art-projection/MAP-NEO)
|
|
|
Neo is a fully open-source large language model: the code, all model weights, the datasets used for training, and the training details are publicly released.
|
|
|
## Model
|
|
|
| Model | Description | Download |
|---|---|---|
| neo_7b | Base model of neo_7b | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b) |
| neo_7b_intermediate | Intermediate checkpoints from regular pre-training; the model was trained on a total of 3.7T tokens in this phase | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_intermediate) |
| neo_7b_decay | Intermediate checkpoints from the decay phase; the model was trained on a total of 720B tokens in this phase | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_decay) |
| neo_scalinglaw_980M | Checkpoints from the scaling-law experiments | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_980M) |
| neo_scalinglaw_460M | Checkpoints from the scaling-law experiments | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_460M) |
| neo_scalinglaw_250M | Checkpoints from the scaling-law experiments | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_250M) |
| neo_2b_general | 2B model trained on general-domain data | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_2b_general) |
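The intermediate and decay repositories hold many checkpoints rather than a single set of weights. On the Hugging Face Hub, individual checkpoints of a repo are typically exposed as branches or tags, which you can select through the `revision` argument of `from_pretrained`. The snippet below is a minimal sketch of that pattern; the revision name is a placeholder, so check the repository's branch list for the actual checkpoint names.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "m-a-p/neo_7b_intermediate"
# Hypothetical revision name -- consult the repo's branch/tag list for real ones.
revision = "<checkpoint-branch-name>"

tokenizer = AutoTokenizer.from_pretrained(
    repo_id, revision=revision, use_fast=False, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    revision=revision,   # pick one intermediate checkpoint
    device_map="auto",
    torch_dtype="auto",
).eval()
```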
|
|
|
### Usage
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-hf-model-path-with-tokenizer>'

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

input_text = "A long, long time ago,"

# Tokenize the prompt directly; this is a base model, so no chat template is applied.
input_ids = tokenizer(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(**input_ids, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```
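For longer completions you may prefer sampling over greedy decoding, and printing tokens as they are generated. The sketch below uses the `TextStreamer` utility built into `transformers`; the sampling parameters are illustrative defaults, not values recommended by the NEO authors.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, omitting the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)

output_ids = model.generate(
    **input_ids,
    max_new_tokens=128,
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.8,     # illustrative values, not tuned for NEO
    top_p=0.95,
    streamer=streamer,
)
```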
|
|
|
|
|
|