
Model Card for sajaw/llama-2-7b-RandomGPT-5K-ar

This model is a LoRA adapter produced by fine-tuning the Llama-2-7b-hf model. It is an experimental model.

To run it, you need to:

1. Agree to Meta's license in order to download the Llama-2-13b-chat-hf model from https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
2. Clone this repository.
3. Clone the Alpaca-LoRA repository from https://github.com/tloen/alpaca-lora
4. Run:

python generate.py --load_8bit --base_model 'PATH_TO_YOUR_LOCAL_LLAMA_2_7B_CHAT_HF' --lora_weights 'PATH_TO_YOUR_LOCAL_FILE_OF_THIS_MODEL'

You must agree to Meta's Llama 2 license to use this model.
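
If you prefer not to use the Alpaca-LoRA script, the adapter can also be attached to a base checkpoint with the peft library. The sketch below is illustrative only; it assumes meta-llama/Llama-2-7b-chat-hf as the base model (the card refers to both the 7B and 13B chat checkpoints) and that transformers and peft are installed.

# Illustrative sketch: load a base Llama 2 chat model and attach this LoRA adapter.
# The base checkpoint name is an assumption, not confirmed by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"        # assumed base checkpoint
adapter_id = "sajaw/llama-2-7b-RandomGPT-5K-ar"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights
model.eval()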

Model Details

Model Description

  • Developed by: Saja Nakhleh

  • Model type: Question answering model

  • Language(s) (NLP): Arabic

  • Finetuned from model [optional]: llama-2-7b

Model Sources [optional]

  • Paper [optional]: Not Yet

Bias, Risks, and Limitations

This model performs well with heterogeneous data.

Recommendations

None

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoConfig, AutoModel, AutoTokenizer

config = AutoConfig.from_pretrained("sajaw/llama-2-7b-RandomGPT-5K-ar")
model = AutoModel.from_pretrained("sajaw/llama-2-7b-RandomGPT-5K-ar")
tokenizer = AutoTokenizer.from_pretrained("sajaw/llama-2-7b-RandomGPT-5K-ar")
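
Note that AutoModel loads only the backbone without a language-modelling head, so it cannot generate answers by itself. Assuming model and tokenizer were instead created with the peft-based sketch in the run instructions above, a hedged generation example could look like the following; the prompt wording is illustrative, not the exact training template.

# Assumes `model` and `tokenizer` come from the peft-based loading sketch above;
# the prompt format below is an assumption, not the documented training template.
import torch

question = "ما هي عاصمة الأردن؟"
context = "عمّان هي عاصمة المملكة الأردنية الهاشمية."
prompt = f"أجب عن السؤال التالي بالاعتماد على النص.\nالنص: {context}\nالسؤال: {question}\nالإجابة:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens (skip the prompt).
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)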

Training Details

Training Data

  • sajaw/Arasquad3_llama2_version2
  • sajaw/GQA_llama2_version

Preprocessing [optional]

Context, questions, and answers are concatenated with the instructions into a single "message" record, as in the illustrative sketch below.
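
The exact template is not specified in this card; the following is a purely illustrative way such a record might be assembled (field order and wording are assumptions).

# Illustrative only: one possible way to concatenate instruction, context,
# question, and answer into a single "message" record for fine-tuning.
def build_record(instruction: str, context: str, question: str, answer: str) -> str:
    return (
        f"{instruction}\n"
        f"النص: {context}\n"
        f"السؤال: {question}\n"
        f"الإجابة: {answer}"
    )

record = build_record(
    instruction="أجب عن السؤال التالي بالاعتماد على النص.",
    context="عمّان هي عاصمة المملكة الأردنية الهاشمية.",
    question="ما هي عاصمة الأردن؟",
    answer="عمّان",
)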

Training Hyperparameters

  • Training regime: NA

Speeds, Sizes, Times [optional]

NA

Evaluation

Testing Data, Factors & Metrics

Testing Data

We used 250 samples from AraSquad as ground-truth test samples for the model.

Factors

NA

Metrics

F1-score, precision, recall
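
The card does not state how these metrics are computed; a common convention for SQuAD-style extractive QA is token-level overlap between the predicted and gold answers. The sketch below follows that assumption.

# Assumption: SQuAD-style token-level precision/recall/F1; the card does not
# confirm that the reported scores were obtained this way.
from collections import Counter

def token_scores(prediction: str, reference: str):
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1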

Results

  • F1-score: 0.6818
  • Precision: 0.6564
  • Recall: 0.7226

Summary

Model Examination [optional]

NA

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: 2× NVIDIA T4 GPUs (Kaggle)
  • Hours used: 9
  • Cloud Provider: Kaggle
  • Compute Region: NA
  • Carbon Emitted: NA

Technical Specifications [optional]

Model Architecture and Objective

NA

Compute Infrastructure

NA

Hardware

NA

Software

NA

Citation [optional]

BibTeX:

APA:

Glossary [optional]

NA

More Information [optional]

NA

Model Card Authors [optional]

Saja Nakhleh

Model Card Contact

swnakhleh21@cit.just.edu.jo
