ADBL2-Mistral-7B
ADBL2-Mistral-7B is a fine-tuned version of Mistral-7B-v0.1 trained to perform relation-based argument mining. Given two arguments x and y, we use this model in synergy with LMQL to predict whether y attacks or supports x (an LMQL sketch is given after the example below).
Fine-tuning
We fine-tuned Mistral-7B-v0.1 using the PEFT method QLoRA on argument pairs from the online debate platform Kialo.
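For illustration, a minimal QLoRA setup with transformers, bitsandbytes, and peft could look like the sketch below; the hyperparameters, target modules, and dataset handling are assumptions, not the exact training configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "mistralai/Mistral-7B-v0.1"

# Load the frozen base model in 4-bit NF4, as QLoRA does.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)

# Attach trainable low-rank adapters on top of the quantised weights.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# The training data would be Kialo argument pairs rendered in the prompt
# format described below; training can then be run with a standard
# causal-language-modelling trainer (e.g. trl's SFTTrainer).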
Prompt format
This model has been trained to complete this prompt format:
<s>[INST]
Argument 1 : /*Argument 1*/
Argument 2 : /*Argument 2*/
[/INST]
Relation :
The model completes the prompt with the relation, attack or support:
Relation : attack/support
</s>
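As a convenience, a small helper that renders two arguments into this template could look like the sketch below; the function name build_prompt is ours, not part of the model's code.

def build_prompt(argument_1: str, argument_2: str) -> str:
    # Render two arguments into the prompt template above.
    # Note: depending on the tokenizer settings, the <s> BOS token may be
    # added automatically at encoding time, in which case it can be dropped
    # from the string.
    return (
        "<s>[INST]\n"
        f"Argument 1 : {argument_1}\n"
        f"Argument 2 : {argument_2}\n"
        "[/INST]\n"
        "Relation :"
    )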
Example :
Given two arguments, where argument 2 attacks argument 1:
- Argument 1 : using machines is advantageous
- Argument 2 : the usage of machines is harmful for health of humans
The prompt used to retrieve the relation between the second argument and the first should be:
<s>[INST]
Argument 1 : using machines is advantageous
Argument 2 : the usage of machines is harmful for health of humans
[/INST]
Relation :
Our model should complete this prompt as follows:
<s>[INST]
Argument 1 : using machines is advantageous
Argument 2 : the usage of machines is harmful for health of humans
[/INST]
Relation : attack
</s>
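To reproduce this with LMQL, as mentioned above, the completion can be constrained to exactly one of the two labels. A minimal sketch of such a query is shown below; the model identifier local:ADBL2/ADBL2-Mistral-7B is a placeholder for wherever the fine-tuned weights are stored, and the exact call signatures may differ between LMQL versions.

import lmql

# Placeholder model id; LMQL's "local:" backend loads the weights via transformers.
@lmql.query(model=lmql.model("local:ADBL2/ADBL2-Mistral-7B"))
def predict_relation(argument_1, argument_2):
    '''lmql
    "<s>[INST]\n"
    "Argument 1 : {argument_1}\n"
    "Argument 2 : {argument_2}\n"
    "[/INST]\n"
    "Relation : [RELATION]" where RELATION in ["attack", "support"]
    return RELATION
    '''

relation = predict_relation(
    "using machines is advantageous",
    "the usage of machines is harmful for health of humans",
)
print(relation)  # expected: attack

Constraining the decoded variable to the two labels avoids any free-form generation and makes the output directly usable as a classification result.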