---
license: mit
datasets:
- rhyliieee/notes-completion-set
base_model:
- aaditya/Llama3-OpenBioLLM-8B
pipeline_tag: text-generation
library_name: transformers
---
Fine-tuned the pretrained base model with LoRA. To merge the adapter correctly, add the special tokens to the tokenizer, resize the base model's token embeddings, then load the PEFT model on top of the resized base model and merge:

"""
# add special tokens to the tokenizer and base model before merging peft with base
open_tokenizer.add_special_tokens({
    "additional_special_tokens": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"]
})
base_model.resize_token_embeddings(len(open_tokenizer))

# reload the peft model with resized token embedding of base model
peft_model = PeftModel.from_pretrained(base_model, "rhyliieee/LLaMA3-8Bit-Lora-Med-v2",)

# perform merging
merged_peft_base_with_special_tokens = peft_model.merge_and_unload()
"""