RespubLit Review Report Evaluator (RespubLitRRE) – LoRA Adapter
RespubLitRRE is a specialised LoRA adapter for Mistral-7B-Instruct-v0.3, designed to evaluate the quality of peer review reports in research funding and academic contexts.
The model scores and analyses review reports along several dimensions:
- Technical Rigor (TR)
- Constructive Feedback (CF)
- Overall Quality (OQ)
- Relevance to Abstract (RA)
- Bias Signals (e.g. sentiment bias, alignment mismatch between scores and text)
Important: The model evaluates the review text itself, not the underlying project or proposal.
Model Architecture & Base
This repository only contains a LoRA adapter. It must be used together with the base model:
- Base model: `mistralai/Mistral-7B-Instruct-v0.3`
- LoRA adapter: `emreozelemre/RespubLitRRE-LoRA`
- Library: PEFT + Hugging Face Transformers
- Task: Causal LM / text generation (`text-generation` pipeline)
The adapter does not contain any grant applications, abstracts, proposals, or review records.
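Because the adapter must be paired with its base model, loading typically goes through PEFT. The sketch below follows the standard `PeftModel.from_pretrained` pattern and is an assumption, not the exact code from the RespubLit repository (see the GitHub app for the authoritative version); the heavy imports are deferred so the snippet stays importable without a GPU.

```python
def load_evaluator():
    """Sketch: attach the RespubLitRRE LoRA adapter to its Mistral base model."""
    # Heavy dependencies are imported lazily; loading downloads ~14 GB of weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-Instruct-v0.3",
        torch_dtype=torch.float16,
        device_map="auto",
    )
    # Attach the LoRA weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, "emreozelemre/RespubLitRRE-LoRA")
    # The tokenizer files ship with the adapter repo, so load them from there.
    tokenizer = AutoTokenizer.from_pretrained("emreozelemre/RespubLitRRE-LoRA")
    return model, tokenizer
```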
Usage & Examples
For full Python usage (loading the base model + LoRA, quantization setup, and the evaluation prompt format), please refer to the Gradio app implementation in the GitHub repository:
GitHub (full pipeline & app code): https://github.com/emreozelemre/RespubLitRRE
The Gradio app shows how to:
- Load `mistralai/Mistral-7B-Instruct-v0.3`
- Attach this LoRA adapter (`emreozelemre/RespubLitRRE-LoRA`)
- Build an end-to-end evaluation workflow around review reports
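Given a loaded model and tokenizer, inference runs through the standard `text-generation` pipeline. The prompt wrapper below is a placeholder assumption in Mistral's `[INST]` style; the real evaluation prompt format is defined in the GitHub app code.

```python
def evaluate_review(model, tokenizer, review_text: str) -> str:
    """Sketch: run one review report through the adapted model."""
    from transformers import pipeline  # deferred heavy import

    # Placeholder prompt; the authoritative format lives in the GitHub repo.
    prompt = (
        "[INST] Evaluate the quality of the following peer review report. "
        "Score TR, CF, OQ and RA and justify each score.\n\n"
        f"{review_text} [/INST]"
    )
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
    out = pipe(prompt, max_new_tokens=512, do_sample=False)
    return out[0]["generated_text"]
```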
Intended Use & Audience
RespubLitRRE is meant to support:
- Research funders and grant agencies
- Research integrity offices and ombudspersons
- Policy analysts and evaluation specialists
- Meta-review workflows and audit exercises
- Academics studying peer review quality and bias
Typical use cases include:
- Flagging superficial or underdeveloped reviews
- Detecting overconfident but weakly justified critiques
- Spotting score–text inconsistencies (e.g. high scores / negative text)
- Identifying missing reasoning and vague justifications
- Highlighting lack of actionable feedback for applicants
- Surfacing potential sentiment bias patterns (tonality vs content)
This model is not intended to automatically decide funding outcomes or replace human judgment.
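As a rough illustration of the score–text inconsistency use case above, a downstream workflow could pre-screen reviews with a simple heuristic before handing them to the model. The cue list and thresholds here are purely illustrative and are not part of the model or the RespubLit pipeline.

```python
import re

# Illustrative negative-tone cue words; a real workflow would use a richer lexicon.
NEGATIVE_CUES = {"weak", "unclear", "insufficient", "missing", "superficial", "flawed"}

def flag_score_text_mismatch(score: float, review_text: str,
                             high_score: float = 4.0,
                             cue_threshold: int = 2) -> bool:
    """Flag reviews whose numeric score is high but whose text reads negative."""
    words = re.findall(r"[a-z]+", review_text.lower())
    negative_hits = sum(1 for w in words if w in NEGATIVE_CUES)
    return score >= high_score and negative_hits >= cue_threshold
```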
Repository Contents
This repo contains the minimal LoRA adapter package:
- `adapter_config.json`
- `adapter_model.safetensors`
- `tokenizer.json`
- `tokenizer.model`
- `tokenizer_config.json`
- `special_tokens_map.json`
- `chat_template.jinja`
- `README.md` (this file)
No training data, grant applications, review texts, or private metadata are included.
Full Pipeline & Apps
- Full pipeline & Gradio app code: https://github.com/emreozelemre/RespubLitRRE
The pipeline includes:
- Proposal + review alignment
- Bias-report integration
- Visualization of scores and justification
- Additional consistency checks
Training & Methodology (High-Level)
- Base model: Mistral-7B-Instruct-v0.3
- Fine-tuning strategy: PEFT LoRA on top of the base model
- Objective: teach the model to:
  - Analyse review-report text in depth
  - Produce structured evaluations (TR, CF, OQ, RA, bias signals)
  - Generate clear justifications for the assigned scores
The LoRA adapter was trained as part of a broader RespubLit evaluation and bias-analysis pipeline for grant peer review.
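If the model emits its structured evaluation as labelled lines (e.g. `TR: 4`), downstream code can pull the per-dimension scores out of the free text. That output format is an assumption made for illustration; the authoritative prompt and output format are in the GitHub app code.

```python
import re

DIMENSIONS = ("TR", "CF", "OQ", "RA")

def parse_scores(output: str) -> dict:
    """Extract per-dimension scores like 'TR: 4' from the model's output text."""
    scores = {}
    for dim in DIMENSIONS:
        m = re.search(rf"\b{dim}\s*[:=]\s*(\d+(?:\.\d+)?)", output)
        if m:
            scores[dim] = float(m.group(1))
    return scores
```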
Ethical Use & Limitations
By using this model, you agree not to:
- Automate exclusion of reviewers or blacklist individuals
- Directly manipulate funding outcomes using model output alone
- Use it for surveillance or profiling of reviewer identities
- Deploy it in harmful, deceptive, or discriminatory applications
- Violate any constraints of the OpenRAIL-M non-commercial licence
Limitations & disclaimers:
- The model can be wrong, overconfident, or sensitive to prompt phrasing.
- Outputs should be used as input to human judgment, not as final decisions.
- It is optimised for peer review in research/academic contexts, not for generic product reviews or unrelated domains.
Licence – Non-Commercial OpenRAIL-M
This LoRA adapter is released under a Non-Commercial OpenRAIL-M-style licence.
Allowed:
- Research use
- Personal use
- Academic evaluation
- Non-commercial experimentation
- Redistribution of the adapter under the same licence
Not allowed without explicit permission:
- Commercial use
- Integration into paid services
- Consultancy or platform use involving this model
- Use inside revenue-generating grant-evaluation tools
- Any monetised derivative work
To obtain a commercial licence, contact:
- Website: https://www.respublit.org
- Email: emreozel@respublit.org
Citation
If you use this adapter in academic work, please cite:
Özel, E. (2025). RespubLit Review Report Evaluator (RespubLitRRE) – LoRA Adapter. Hugging Face. https://huggingface.co/emreozelemre/RespubLitRRE-LoRA
Acknowledgements
- Base model: Mistral-7B-Instruct-v0.3 (Mistral AI)
- Libraries: Hugging Face Transformers, PEFT
- Developed as part of the RespubLit evaluation and bias-analysis pipeline for grant peer review and research funding.
Support this project
If the RespubLit Review Report Evaluator is useful for your work, you can buy me a coffee: