---
license: mit
---
This is the 15M-parameter Llama 2 architecture model trained on the TinyStories dataset.
The weights were converted from
[karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas).
See the [llama2.c](https://github.com/karpathy/llama2.c) project for more details.
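The 15M figure can be sanity-checked from the `stories15M` configuration published in the llama2.c repo. A minimal sketch, assuming that config (dim=288, 6 layers, hidden_dim=768, vocab size 32000, tied input/output embeddings) — verify these values against the checkpoint you actually download:

```python
# Rough parameter count for the stories15M Llama 2 config.
# Hyperparameters below are taken from the llama2.c repo and are
# assumptions here -- check them against your checkpoint.
dim, n_layers, hidden_dim, vocab_size = 288, 6, 768, 32000

tok_embeddings = vocab_size * dim    # token embedding table
attn = 4 * dim * dim                 # wq, wk, wv, wo per layer
ffn = 3 * dim * hidden_dim           # w1, w2, w3 (SwiGLU) per layer
norms = 2 * dim                      # attention + FFN RMSNorm per layer
per_layer = attn + ffn + norms

# The output head shares weights with the embedding table, so it adds
# nothing; the trailing `dim` term is the final RMSNorm.
total = tok_embeddings + n_layers * per_layer + dim
print(f"{total:,}")  # 15,191,712
```

At ~15.2M parameters this matches the "15M" name; most of the budget sits in the 32000×288 embedding table.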