---
license: llama3
library_name: peft
language:
- en
tags:
- trl
- sft
- unsloth
- generated_from_trainer
- dna
base_model: gradientai/Llama-3-8B-Instruct-262k
model-index:
- name: llama3-biotokenpretrain-kaniwa
  results: []
---

# llama3-biotokenpretrain-kaniwa

This is a LoRA adapter. The base model is the longer-context Llama-3-8B-Instruct developed by Gradient and Crusoe: `gradientai/Llama-3-8B-Instruct-262k`.

The tokenizer has added "biotokens" ∎A, ∎C, ∎G, and ∎T.

The dataset was 0.5% of BYU's 2019 kaniwa (*Chenopodium pallidicaule*) genome, from https://genomevolution.org/coge/GenomeInfo.pl?gid=53872

The adapter was finetuned for 3 hours on an L4 GPU. The data was split into ~7,000-nucleotide snippets formatted as Alpaca-like messages.

Training Notebook: https://colab.research.google.com/drive/1FKA3p_jnfRHYd-hqJdYmKn8MQpxec0t5?usp=sharing

Sample message:

```
Write information about the nucleotide sequence.

### Sequence:
∎G∎C∎C∎T∎A∎T∎A∎G∎T∎G∎T∎G∎T∎A∎G...

### Annotation:
Information about location in the kaniwa chromosome: >lcl|Cp5
```

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 280

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1

### Genome Citation

Mangelson H, et al. The genome of *Chenopodium pallidicaule*: an emerging Andean super grain. Appl. Plant Sci. 2019;7:e11300. doi: 10.1002/aps3.11300
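
## Example usage (sketch)

A minimal inference sketch, assuming the extended tokenizer (with the ∎A/∎C/∎G/∎T biotokens) is saved alongside the adapter and that `ADAPTER_ID` is replaced with this adapter's actual Hub repository id; exact loading details may differ from the training setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "gradientai/Llama-3-8B-Instruct-262k"
ADAPTER_ID = "llama3-biotokenpretrain-kaniwa"  # placeholder: replace with the full Hub repo id

# Load the extended tokenizer (assumed to ship with the adapter), the
# long-context base model, and then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model.resize_token_embeddings(len(tokenizer))  # assumes embeddings were resized for the added biotokens
model = PeftModel.from_pretrained(model, ADAPTER_ID)

# Convert a plain nucleotide string into biotokens and build the
# Alpaca-like prompt format shown above.
sequence = "GCCTATAGTGTGTAG"
biotokens = "".join("∎" + base for base in sequence)
prompt = (
    "Write information about the nucleotide sequence.\n\n"
    f"### Sequence:\n{biotokens}\n\n"
    "### Annotation:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```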