# Pretrained base-d20

This model is trained with the nanochat recipe by Andrej Karpathy. It was trained with a depth of 20 on roughly 11.2 billion tokens (see the training stats below) and corresponds to this tokenizer, which will be combined into this repo.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
import torch

model_dir = "nanochat-students/base-d20"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The model and tokenizer ship custom code, hence trust_remote_code=True.
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True)
model = model.to(device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

# Encode the prompt, prepending the BOS token as during pretraining.
prompt = "The capital of Belgium is "
input_ids = tokenizer.encode(prompt, prepend=tokenizer.get_bos_token_id())
ids = torch.tensor([input_ids], dtype=torch.long, device=device)

# Greedy decoding: append the argmax token one step at a time.
max_new_tokens = 50
with torch.inference_mode():
    for _ in range(max_new_tokens):
        outputs = model(input_ids=ids)
        logits = outputs["logits"] if isinstance(outputs, dict) else outputs.logits
        next_token = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
        ids = torch.cat([ids, next_token], dim=1)

decoded = tokenizer.decode(ids[0].tolist())
print(decoded)
```
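Greedy argmax decoding tends to loop on a base model. As a minimal sketch (not part of the official nanochat recipe), the generation loop above can be swapped for temperature plus top-k sampling; the `temperature` and `top_k` values below are illustrative choices, not settings from this repo:

```python
# Sketch: temperature + top-k sampling instead of greedy argmax.
temperature, top_k = 0.8, 50  # illustrative values, not from this repo
with torch.inference_mode():
    for _ in range(max_new_tokens):
        outputs = model(input_ids=ids)
        logits = outputs["logits"] if isinstance(outputs, dict) else outputs.logits
        logits = logits[:, -1, :] / temperature
        # Keep only the top_k most likely tokens, then sample from them.
        topk_logits, topk_idx = torch.topk(logits, top_k, dim=-1)
        probs = torch.softmax(topk_logits, dim=-1)
        next_token = topk_idx.gather(-1, torch.multinomial(probs, num_samples=1))
        ids = torch.cat([ids, next_token], dim=1)
```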
## Base model evaluation
timestamp: 2025-10-14 16:16:53
- Model: base_model (step 21400)
- CORE metric: 0.1963
- hellaswag_zeroshot: 0.2634
- jeopardy: 0.0959
- bigbench_qa_wikidata: 0.4993
- arc_easy: 0.5269
- arc_challenge: 0.1251
- copa: 0.4400
- commonsense_qa: 0.0653
- piqa: 0.3743
- openbook_qa: 0.1440
- lambada_openai: 0.3683
- hellaswag: 0.2630
- winograd: 0.2674
- winogrande: 0.0923
- bigbench_dyck_languages: 0.1050
- agi_eval_lsat_ar: 0.0326
- bigbench_cs_algorithms: 0.3674
- bigbench_operators: 0.1524
- bigbench_repeat_copy_logic: 0.0000
- squad: 0.2222
- coqa: 0.1957
- boolq: -0.4615
- bigbench_language_identification: 0.1801
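The CORE metric (as used in DCLM) is the plain average of the 22 per-task scores above, where each task's raw accuracy is first centered against its random-guessing baseline, so 0 means roughly chance level; that is why boolq can come out negative. A quick sanity check reproduces the headline number:

```python
# CORE is the mean of the 22 centered task scores listed above.
scores = [
    0.2634, 0.0959, 0.4993, 0.5269, 0.1251, 0.4400, 0.0653, 0.3743,
    0.1440, 0.3683, 0.2630, 0.2674, 0.0923, 0.1050, 0.0326, 0.3674,
    0.1524, 0.0000, 0.2222, 0.1957, -0.4615, 0.1801,
]
print(f"CORE = {sum(scores) / len(scores):.4f}")  # CORE = 0.1963
```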
## Base model loss
timestamp: 2025-10-14 16:11:41
- train bpb: 0.8147
- val bpb: 0.8121
- sample 0: <|bos|>The capital of France is Paris. It is the largest city in France and the capital of the country.
- sample 1: <|bos|>The chemical symbol of gold is Au and the atomic number is 79. Gold is a soft, malleable,
- sample 2: <|bos|>If yesterday was Friday, then tomorrow will be Saturday. If today is Monday, then tomorrow will be Tuesday. If today is
- sample 3: <|bos|>The opposite of hot is cold. The opposite of hot is cold. The opposite of hot is cold.
- sample 4: <|bos|>The planets of the solar system are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune,
- sample 5: <|bos|>My favorite color is blue. I love the color blue because it is a color that is so versatile
- sample 6: <|bos|>If 5x + 3 = 13, then x is a factor of 5. If 5x + 3 =
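Bits per byte (bpb) normalizes the cross-entropy loss by the number of raw text bytes rather than tokens, which makes the number comparable across tokenizers. A minimal sketch of the conversion, assuming a mean per-token loss in nats and the token/byte counts of the evaluation set (the function name and the example numbers are illustrative, not values from this run):

```python
import math

def bits_per_byte(mean_loss_nats: float, total_tokens: int, total_bytes: int) -> float:
    """Convert a mean per-token cross-entropy (in nats) to bits per byte."""
    total_bits = mean_loss_nats * total_tokens / math.log(2)  # nats -> bits
    return total_bits / total_bytes

# Illustrative only: ~4.7 bytes/token at ~2.65 nats/token gives ~0.81 bpb.
print(f"{bits_per_byte(2.65, 1_000, 4_700):.4f}")
```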
## Base model training
timestamp: 2025-10-14 14:28:31
- run: dummy
- depth: 20
- max_seq_len: 2048
- num_iterations: -1
- target_flops: -1.0000
- target_param_data_ratio: 20
- device_batch_size: 32
- total_batch_size: 524,288
- embedding_lr: 0.2000
- unembedding_lr: 0.0040
- weight_decay: 0.0000
- matrix_lr: 0.0200
- grad_clip: 1.0000
- eval_every: 250
- eval_tokens: 10,485,760
- core_metric_every: 2000
- core_metric_max_per_task: 500
- sample_every: 2000
- model_tag:
- Number of parameters: 560,988,160
- Number of FLOPs per token: 3.491758e+09
- Calculated number of iterations: 21,400
- Number of training tokens: 11,219,763,200
- Tokens : Params ratio: 20.0000
- DDP world size: 8
- warmup_ratio: 0.0000
- warmdown_ratio: 0.2000
- final_lr_frac: 0.0000
- Minimum validation bpb: 0.8120
- Final validation bpb: 0.8120
- CORE metric estimate: 0.2059
- MFU %: 48.36%
- Total training flops: 3.917670e+19
- Total training time: 172.18m
- Peak memory usage: 75422.02MiB
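Several of the numbers above can be cross-checked against one another: with a tokens-to-params ratio of 20, the token budget is 20 × 560,988,160, the iteration count is that budget divided by the 524,288-token batch, and total training FLOPs are the per-token FLOPs times the token budget. The per-GPU peak-throughput figure at the end is only an inference from the reported MFU and assumes H100-class bf16 hardware, which the card does not state:

```python
params = 560_988_160
flops_per_token = 3.491758e9
batch_tokens = 524_288
ratio = 20

tokens = ratio * params                 # 11,219,763,200 training tokens
iterations = tokens // batch_tokens     # 21,400 steps
total_flops = flops_per_token * tokens  # ~3.92e19 FLOPs

# Implied per-GPU peak from 172.18 min on 8 GPUs at 48.36% MFU:
# ~9.8e14 FLOP/s, consistent with H100 bf16 (an assumption, not stated above).
seconds = 172.18 * 60
per_gpu_peak = total_flops / seconds / 8 / 0.4836
print(tokens, iterations, f"{total_flops:.3e}", f"{per_gpu_peak:.2e}")
```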
## Training Logs

Logs are available on the trackio space here.