BhashaBench-Legal (BBL): Benchmarking AI on Indian Legal Knowledge
Overview
BhashaBench-Legal (BBL) is a comprehensive benchmark designed to rigorously evaluate AI models on Indian legal knowledge. Tailored to India's complex legal framework, constitutional structure, and diverse jurisprudential traditions, BBL draws on official judicial service exams, bar examinations, and legal education assessments to test models' ability to provide accurate, contextually relevant, and legally sound answers within the Indian legal system.
Key Features
- Languages: English and Hindi (with plans for more Indic languages)
- Exams: 50+ unique legal government and institutional exams across India
- Domains: 20+ legal and allied disciplines, spanning over 200 specialized topics
- Questions: 24,365 rigorously validated, exam-based questions
- Difficulty Levels: Easy (8,200), Medium (12,150), Hard (4,015)
- Question Types: Multiple Choice, Assertion-Reasoning, Match the Column, Rearrange the Sequence, Fill in the Blanks
- Focus: Practical, context-rich, jurisdiction-specific legal knowledge essential for Indian legal practice
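As a rough illustration of how an exam item of the types listed above might look as a record, here is a minimal sketch. The field names (`question`, `options`, `answer`, and so on) are assumptions for illustration only, not the dataset's actual schema:

```python
# A hypothetical BBL-style record; field names are illustrative only
# and may not match the dataset's actual schema.
example = {
    "question": "Which Article of the Indian Constitution abolishes untouchability?",
    "options": {"A": "Article 14", "B": "Article 17", "C": "Article 19", "D": "Article 21"},
    "answer": "B",
    "question_type": "Multiple Choice",
    "language": "English",
    "difficulty": "Easy",
}

def is_valid(record):
    """Basic sanity check: the answer key must be one of the options."""
    return record["answer"] in record["options"]
```

Inspect a loaded example (as shown in the Usage section) to see the real field names before writing evaluation code.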
Dataset Statistics
Metric | Count |
---|---|
Total Questions | 24,365 |
English Questions | 17,047 |
Hindi Questions | 7,318 |
Subject Domains | 20+ |
Government Exams Covered | 50+ |
Dataset Structure
Test Set
The test set constitutes the BhashaBench-Legal (BBL) benchmark itself: 24,365 exam-based questions across two languages (English and Hindi).
Support for more Indic languages will be added in upcoming versions.
Subjects spanning BBL
Subject Domain | Count |
---|---|
Civil Litigation & Procedure | 7126 |
Constitutional & Administrative Law | 3609 |
Criminal Law & Justice | 2769 |
Corporate & Commercial Law | 2700 |
General Academic Subjects | 1756 |
Legal Theory & Jurisprudence | 1421 |
Family & Personal Law | 991 |
International & Comparative Law | 962 |
Legal Skills & Communication | 816 |
Real Estate & Property Law | 629 |
Environmental & Energy Law | 430 |
Interdisciplinary Studies | 363 |
Tax & Revenue Law | 231 |
Employment & Labour Law | 175 |
Technology & Cyber Law | 123 |
Intellectual Property Law | 91 |
Consumer & Competition Law | 75 |
Media & Entertainment Law | 54 |
Healthcare & Medical Law | 25 |
Human Rights & Social Justice | 19 |
Usage
This is a gated dataset. Once your access request is accepted, set your Hugging Face token:
export HF_TOKEN=YOUR_TOKEN_HERE
To load the BBL dataset for a given language:
from datasets import load_dataset

language = 'Hindi'  # or 'English'
# Use the 'test' split for evaluation
split = 'test'
language_data = load_dataset("bharatgenai/BhashaBench-Legal", data_dir=language, split=split, token=True)
print(language_data[0])
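Once the data is loaded and a model has produced predictions, per-group accuracy (for example, by difficulty level) can be computed with a small helper. The field names `answer` and `difficulty` below are assumptions; check `language_data.features` for the dataset's actual schema:

```python
from collections import defaultdict

def accuracy_by_key(records, predictions, key="difficulty"):
    """Compute per-group accuracy.

    `records` are dataset rows; `predictions` maps a row index to the
    model's chosen option key. The field names 'answer' and the default
    grouping key 'difficulty' are assumptions about the schema.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for i, row in enumerate(records):
        group = row.get(key, "unknown")
        total[group] += 1
        if predictions.get(i) == row["answer"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Example with mock rows (real rows would come from language_data):
rows = [
    {"answer": "B", "difficulty": "Easy"},
    {"answer": "C", "difficulty": "Hard"},
]
preds = {0: "B", 1: "A"}
print(accuracy_by_key(rows, preds))  # {'Easy': 1.0, 'Hard': 0.0}
```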
Evaluation Results Summary
25+ models evaluated, including GPT-4o, Claude-3.5, and various specialized legal LLMs.
Top accuracy:
- English: 75%+ for the best models
- Hindi: 65–70%, showing the need for stronger Indic-language legal reasoning
Strong domains:
- Constitutional Law, Legal Theory, Corporate Law (~80% accuracy)
Weak domains:
- Technology & Cyber Law, Environmental Law, Healthcare Law (<55%)
Challenges:
- Complex procedural questions and jurisdiction-specific cases remain challenging across models
For detailed results and analysis, please refer to our blog.
Citation
Please cite our benchmark if used in your work:
@misc{bhashabench-legal-2025,
title = {BhashaBench-Legal: Benchmarking AI on Indian Legal Knowledge},
author = {BharatGen Research Team},
year = {2025},
howpublished = {\url{https://huggingface.co/datasets/bharatgenai/bhashabench-legal}},
note = {Accessed: YYYY-MM-DD}
}
License
This dataset is released under the CC BY 4.0 license.
Contact
For any questions or feedback, please contact:
- Vijay Devane (vijay.devane@tihiitb.org)
- Mohd. Nauman (mohd.nauman@tihiitb.org)
- Bhargav Patel (bhargav.patel@tihiitb.org)
- Kundeshwar Pundalik (kundeshwar.pundalik@tihiitb.org)