
Model Card for Llama3-8B-1.58-Linear-10B-tokens

Llama3-8B-1.58 Models

The Llama3-8B-1.58 models are large language models fine-tuned on the BitNet 1.58b architecture, starting from the base model Llama-3-8B-Instruct.

For a deeper dive into the methods and results, check out our blog post.

Model Details

Model Sources

How to Get Started with the Model

You can easily load and test our model in Transformers. Just follow the code below:

Start by installing the transformers version that includes the configuration needed to load BitNet models:

pip install git+https://github.com/huggingface/transformers.git@refs/pull/33410/head

Then load the model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 1.58-bit model and the matching Llama-3 tokenizer
model = AutoModelForCausalLM.from_pretrained("HF1BitLLM/Llama3-8B-1.58-Linear-10B-tokens", device_map="cuda", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

input_text = "Daniel went back to the the the garden. Mary travelled to the kitchen. Sandra journeyed to the kitchen. Sandra went to the hallway. John went to the bedroom. Mary went back to the garden. Where is Mary?\nAnswer:"

# Tokenize the prompt, generate greedily, and decode the completion
input_ids = tokenizer.encode(input_text, return_tensors="pt").cuda()
output = model.generate(input_ids, max_new_tokens=10, do_sample=False)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

Training Details

Training Data

The model was trained on a subset of FineWeb-edu.
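
For reference, here is a minimal sketch of how a FineWeb-edu subset can be streamed with the datasets library. The HuggingFaceFW/fineweb-edu repository id and the sample-10BT configuration are assumptions based on the public dataset, not details stated in this card:

from datasets import load_dataset

# Stream a sample of FineWeb-edu without downloading the full dataset.
# The dataset id and config name below are assumptions; this card only
# states that "a subset of FineWeb-edu" was used.
fineweb_edu = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",
    split="train",
    streaming=True,
)

for example in fineweb_edu.take(1):
    print(example["text"][:200])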

Training Process

  1. Starting Point

    • Initialized from Llama3 8B weights
  2. Training Duration

    • Fine-tuned for 5,000 steps
  3. Dataset

    • FineWeb-edu dataset
  4. Batch Size

    • 2 million tokens per step
    • Total tokens: 5,000 steps * 2 million tokens = 10 billion tokens
  5. Lambda Scheduler

    • Used a linear lambda scheduler for warmup quantization
    • Lambda value: min(training_step/1000, 1)
    • This gradually introduced quantization over the first 1,000 steps (a minimal scheduler sketch follows this list)
  6. Learning Rate

    • Base learning rate: 1e-4
  7. Performance

    • Achieved impressive results considering the limited training data
    • Outperformed some models trained on much larger datasets (e.g., BitNet 7B trained on 100B tokens)
  8. Evaluation

    • Regular evaluations using various metrics
    • Metrics included perplexity, MMLU scores, and other standard benchmarks
  9. Quantization

    • 1.58-bit (ternary) quantization for weights
    • Activations quantized to 8-bit precision (a quantization sketch follows the summary paragraph below)
  10. Key Findings

    • Warmup quantization (sigmoid or linear lambda scheduler) proved crucial for performance
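
To make the warmup-quantization schedule concrete, the snippet below is a minimal sketch of the linear lambda scheduler described in point 5. The function name and the interpolation shown in the comment are illustrative assumptions, not the actual training code:

def quantization_lambda(training_step: int, warmup_steps: int = 1000) -> float:
    # Linear warmup: lambda ramps from 0 to 1 over the first `warmup_steps`
    # steps, then stays at 1 (full quantization) for the rest of training.
    return min(training_step / warmup_steps, 1.0)

# One common way to apply lambda (an assumption, not confirmed by this card):
#   w_used = w_fp + lam * (quantize(w_fp) - w_fp)
for step in (0, 250, 500, 1000, 5000):
    print(step, quantization_lambda(step))  # 0.0 0.25 0.5 1.0 1.0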

These 10B token training runs showed that it's possible to effectively fine-tune pre-trained models to 1.58-bit precision, achieving strong performance with relatively limited additional training data.
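
For point 9, here is a minimal sketch of BitNet b1.58-style quantization: absmean ternary quantization for the weights and per-token absmax 8-bit quantization for the activations. It is a simplified reference, assuming the standard BitNet b1.58 formulation, not the kernel used by this model:

import torch

def quantize_weights_ternary(w: torch.Tensor, eps: float = 1e-5):
    # Absmean scaling, then round-and-clip to {-1, 0, +1} (~1.58 bits per weight).
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale

def quantize_activations_int8(x: torch.Tensor, eps: float = 1e-5):
    # Per-token absmax scaling into the signed 8-bit range [-127, 127].
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=eps)
    x_q = (x * scale).round().clamp(-127, 127)
    return x_q, scale

# Example: quantize a random weight matrix and a batch of activations.
w_q, w_scale = quantize_weights_ternary(torch.randn(4096, 4096))
x_q, x_scale = quantize_activations_int8(torch.randn(2, 4096))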

Evaluation

The models are evaluated on the nanotron checkpoints using LightEval:

[Figure: evaluation results]

Citation

@misc{mekkouri2024extreme,
      title={1.58-Bit LLM: A New Era of Extreme Quantization},
      author={Mohamed Mekkouri and Marc Sun and Leandro von Werra and Thomas Wolf},
      year={2024},
}