
# DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs

## Model Information

This model is a fine-tuned semantic-parsing LLM agent for question answering over knowledge graphs (KGQA). We fine-tune Llama-2-13B on our curated reasoning trajectories: https://huggingface.co/datasets/UKPLab/dara.
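
As a minimal sketch, the training trajectories can be inspected with the `datasets` library; the split name below is an assumption, so check the dataset card for the actual configuration:

```python
from datasets import load_dataset

# Load the curated reasoning trajectories used for fine-tuning.
# The split name "train" is an assumption; see the dataset card.
trajectories = load_dataset("UKPLab/dara", split="train")
print(trajectories[0])
```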

## Model Usage

```python
import torch
from transformers import AutoModelForCausalLM

# Load the fine-tuned DARA agent in half precision across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "UKPLab/dara-llama-2-13b",
    torch_dtype=torch.float16,
    device_map="auto",
    cache_dir="cache",
)
```

For more information, please check the repository: https://github.com/UKPLab/acl2024-DARA
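
A minimal generation sketch follows; the prompt is a hypothetical placeholder and the decoding settings are illustrative, not the authors' evaluation setup (see the repository for the actual agent loop):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UKPLab/dara-llama-2-13b")

# Hypothetical prompt; DARA expects task-specific formatting in practice.
prompt = "Question: Where was the author of The Old Man and the Sea born?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```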

## Hyperparameters

- Learning rate: 2e-5
- Batch size: 4
- Training epochs: 10
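
As an illustration only, the reported values map onto Hugging Face `TrainingArguments` as below; the actual fine-tuning script lives in the linked repository and may differ (optimizer, scheduler, sequence packing, etc.):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dara-llama-2-13b",   # hypothetical output path
    learning_rate=2e-5,              # reported learning rate
    per_device_train_batch_size=4,   # reported batch size
    num_train_epochs=10,             # reported training epochs
)
```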
