---
tags:
- merge
- mergekit
- nvidia/Llama3-ChatQA-1.5-8B
- refuelai/Llama-3-Refueled
models:
- nvidia/Llama3-ChatQA-1.5-8B
- refuelai/Llama-3-Refueled
license: apache-2.0
language:
- en
---

![](https://raw.githubusercontent.com/saucam/models/main/powerbot.png)

# PowerBot-8B

PowerBot-8B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):

* [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)
* [refuelai/Llama-3-Refueled](https://huggingface.co/refuelai/Llama-3-Refueled)

## 🧩 Configuration

```yaml
name: PowerBot-8B
tokenizer_source: union
embed_slerp: true
models:
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.5
      weight: 0.6
  # No parameters necessary for base model
  - model: refuelai/Llama-3-Refueled
    parameters:
      density: 0.5
      weight: 0.4
merge_method: dare_ties
base_model: nvidia/Llama3-ChatQA-1.5-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
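To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's command-line entry point. A minimal sketch, assuming a notebook environment as in the usage example below; `config.yaml` and the output directory name are placeholders:

```python
# Run the merge from the YAML config above (saved here as config.yaml)
!pip install -qU mergekit
!mergekit-yaml config.yaml ./PowerBot-8B
```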
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "saucam/PowerBot-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt from the chat template, then generate with a text-generation pipeline
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

Sample output (the generation is cut off at `max_new_tokens=256`):

```
Loading checkpoint shards: 100%|███████████████████████████████████████████████████| 2/2 [00:07<00:00,  3.75s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
<|im_start|>user
What is a large language model?<|im_end|>
<|im_start|>assistant
A large language model (LLM) is a deep neural network that is trained to predict the next word in a sequence of text. LLMs are typically trained on large amounts of text data and can be used for a variety of tasks such as language translation, text completion, and question answering. They are often used to generate human-like text and are becoming increasingly popular in natural language processing applications. The LLM uses a transformer architecture, which consists of multiple layers of neural networks that are trained to process and understand the relationships between words in a sentence. The transformer architecture is designed to handle long sequences of text and is capable of capturing the context of a word within a sentence. This allows the LLM to generate coherent and grammatically correct text that is similar to human writing. LLMs are typically trained on a large corpus of text data and can be fine-tuned for specific tasks by retraining on smaller datasets that are relevant to the task at hand. This allows the LLM to adapt to the specific requirements of a particular application and improve its performance. The LLM can be used to generate text in a variety of formats, including natural language, code, and even mathematical expressions. It can also be used to translate text from one language to another, generate summaries of
```

The merged model also retains ChatQA's ability to answer questions grounded in a whole document passed as context:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "saucam/PowerBot-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
]

document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""

def get_formatted_input(messages, context):
    # ChatQA prompt format: system instruction, then the document, then the conversation
    system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
    instruction = "Please give a full and complete answer for the question."

    for item in messages:
        if item['role'] == "user":
            ## only apply this instruction for the first user turn
            item['content'] = instruction + " " + item['content']
            break

    conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
    formatted_input = system + "\n\n" + context + "\n\n" + conversation

    return formatted_input

formatted_input = get_formatted_input(messages, document)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)

# Stop on either the EOS token or Llama 3's end-of-turn token
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)

response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

```
Downloading shards: 100%|█████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 12.71it/s]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████| 2/2 [00:08<00:00,  4.05s/it]
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
The percentage change of the net income from Q4 FY23 to Q4 FY24 is 769%. This is calculated by taking the difference between the two net incomes ($12,285 million and $1,414 million) and dividing it by the net income from Q4 FY23 ($1,414 million), then multiplying by 100 to get the percentage change. So, the formula is ((12,285 - 1,414) / 1,414) * 100 = 769%.
```
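The 769% figure in the answer checks out against the table. As a quick sanity check of the model's arithmetic, the same value can be computed directly from the net income figures quoted in the document:

```python
# Verify the model's arithmetic against the source table ($ in millions)
q4_fy24_net_income = 12_285
q4_fy23_net_income = 1_414

pct_change = (q4_fy24_net_income - q4_fy23_net_income) / q4_fy23_net_income * 100
print(f"{pct_change:.0f}%")  # -> 769%
```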
A sample run on a classification task shows that the data-labelling capability from Refueled also survives the merge:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saucam/PowerBot-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Is this comment toxic or non-toxic: RefuelLLM is the new way to label text data!"}]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")

outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

```
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████| 2/2 [00:07<00:00,  3.89s/it]
No chat template is defined for this tokenizer - using a default chat template that implements the ChatML format (without BOS/EOS tokens!). If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
<|im_start|>user
Is this comment toxic or non-toxic: RefuelLLM is the new way to label text data!<|im_end|>
<|im_start|>assistant
This comment is non-toxic.
<|im_end|><|end_of_text|>
```
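The warnings in the log above (no chat template, no attention mask or pad token id) come from the tokenizer falling back to transformers' default ChatML template. They are harmless here, but can be silenced by tokenizing the prompt explicitly and passing the mask and pad token to `generate`. A minimal sketch under the same setup; the output should be unchanged:

```python
# Build the prompt text first, then tokenize it to get an explicit attention mask
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
enc = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    input_ids=enc.input_ids,
    attention_mask=enc.attention_mask,
    pad_token_id=tokenizer.eos_token_id,  # set explicitly instead of relying on the fallback
    max_new_tokens=20,
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][enc.input_ids.shape[-1]:], skip_special_tokens=True))
```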