---
license: mit
datasets:
- roneneldan/TinyStories
language:
- en
---

This is a series of Llama 2 architecture models trained on the TinyStories dataset, intended for use with the [llama2.c](https://github.com/karpathy/llama2.c) project by Andrej Karpathy.

The models were trained for 3 epochs on a single V100 32 GB GPU, which achieves an inference speed of ~72 tokens/sec.
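For reference, a minimal sketch of the TinyStories training recipe, following the llama2.c repository's README (the commands come from that project and may change upstream; model-size options are left at their defaults here):

```sh
# inside a clone of https://github.com/karpathy/llama2.c
python tinystories.py download      # fetch the TinyStories dataset
python tinystories.py pretokenize   # pre-tokenize it with the Llama 2 tokenizer
python train.py                     # train with the default model config
```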

On CPU, inference reaches **~161.8 tok/s** on a 12th Gen Intel(R) Core(TM) i9-12900HK.

Learn more about running inference in pure C in the [llama2.c](https://github.com/karpathy/llama2.c) repository.
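As a minimal sketch of that workflow (the checkpoint filename `stories15M.bin` is an assumption; substitute whichever `.bin` file you download from this repository):

```sh
git clone https://github.com/karpathy/llama2.c
cd llama2.c
make run                # build the C inference program (run.c)
./run stories15M.bin    # assumed checkpoint name; use your downloaded .bin
```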