LLama-3-8B-Tele-it Model Card

Model Summary

The language model LLama-3-8B-Tele-it is an instruct version of LLama-3-8B-Tele, which is based on Meta LLama-3-8B and specialized in telecommunications. It was fine-tuned to follow instructions using Supervised Fine-tuning (SFT) with a combination of the Alpaca and Open-instruct datasets.

Context Length

The context length of the model is 8192 tokens.

Usage

LLama-3-8B-Tele-it has been fine-tuned using pairs of instructions and responses from the Alpaca and Open-instruct datasets, separated by the "\n" delimiter. Below is an example of how to query the model using this format:

Prompt: Explain to me Shannon capacity.\n

Model: Shannon capacity is a measure of the maximum amount of information that can be transmitted reliably over a noisy communication channel. It is named after the mathematician Claude Shannon, who developed the concept in the 1940s. The capacity of a channel is determined by the amount of noise that can be tolerated and the bandwidth of the channel.

The capacity of a channel is calculated using the formula:

C = B * log2(1 + SNR)

where C is the channel capacity, B is the bandwidth of the channel, and SNR is the signal-to-noise ratio.
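
As a quick sanity check of the formula above (this note is not part of the model's output), the capacity is straightforward to compute; the helper below is purely illustrative:

import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # C = B * log2(1 + SNR), in bits per second
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 1 MHz channel at 30 dB SNR (linear SNR = 1000)
print(shannon_capacity(1e6, 1000))  # ~9.97e6 bits per second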

Sample Code

Below are a few code snippets to help you get started running the model. First, make sure to pip install transformers, then copy the snippet that matches your hardware and adapt it to your use case.

Running the model on a CPU

from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("AliMaatouk/LLama-3-8B-Tele-it", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/LLama-3-8B-Tele-it")

# The "\n" delimiter separates the instruction from the expected response
prompt = "Explain to me Shannon capacity.\n"
input_ids = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=100)

# Strip the prompt tokens so only the newly generated text is decoded
generated_tokens = outputs[0, len(input_ids['input_ids'][0]):]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(response)
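
By default, generate performs greedy decoding; if you prefer sampled outputs, the standard sampling arguments of Transformers' generate apply (the parameter values below are illustrative, not recommendations from the model authors):

outputs = model.generate(**input_ids, max_new_tokens=100, do_sample=True, temperature=0.7, top_p=0.9)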

Running the model on a single / multi GPU

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" places the model on the available GPU(s) automatically
model = AutoModelForCausalLM.from_pretrained("AliMaatouk/LLama-3-8B-Tele-it", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/LLama-3-8B-Tele-it")

prompt = "Explain to me Shannon capacity.\n"
# Move the tokenized inputs to the same device as the model
input_ids = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=100)

# Strip the prompt tokens so only the newly generated text is decoded
generated_tokens = outputs[0, len(input_ids['input_ids'][0]):]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(response)
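
To reduce GPU memory usage, the model can also be loaded with 4-bit quantization via bitsandbytes (make sure to pip install bitsandbytes first). Below is a minimal sketch, assuming a CUDA-capable GPU; the generation code itself is unchanged:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the weights to 4-bit at load time to cut GPU memory roughly 4x
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "AliMaatouk/LLama-3-8B-Tele-it",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/LLama-3-8B-Tele-it")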

Citation

You can find the paper with all details about the model at https://arxiv.org/abs/2409.05314. Please cite it as follows:

@misc{maatouk2024telellmsseriesspecializedlarge,
      title={Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications}, 
      author={Ali Maatouk and Kenny Chirino Ampudia and Rex Ying and Leandros Tassiulas},
      year={2024},
      eprint={2409.05314},
      archivePrefix={arXiv},
      primaryClass={cs.IT},
      url={https://arxiv.org/abs/2409.05314}, 
}