✨ Jurisprudence, release v2024.09.12 🏛️

Jurisprudence is an open-source project that automates the collection and distribution of French legal decisions. It leverages the Judilibre API provided by the Cour de Cassation to:

  • Fetch rulings from major French courts (Cour de Cassation, Cour d'Appel, Tribunal Judiciaire)
  • Process and convert the data into easily accessible formats
  • Publish and version updated datasets on Hugging Face every few days

It aims to democratize access to legal information, enabling researchers, legal professionals and the public to easily access and analyze French court decisions. Whether you're conducting legal research, developing AI models, or simply interested in French jurisprudence, this project might provide a valuable, open resource for exploring the French legal landscape.

📊 Exported Data

| Jurisdiction | Jurisprudences | Oldest | Latest | Tokens | JSONL (gzipped) | Parquet |
|---|---|---|---|---|---|---|
| Total | 0 | 9999-12-31 | 1-01-01 | 0 | 0.00 B | 0.00 B |

Latest update date: 2024-09-12

# Tokens are computed with GPT-4's tiktoken tokenizer on the `text` column.

🤗 Hugging Face Dataset

The up-to-date jurisprudences dataset is available at: https://huggingface.co/datasets/antoinejeannot/jurisprudence in JSONL (gzipped) and parquet formats.

This allows you to easily fetch, query, process and index all jurisprudences in the blink of an eye!

Usage Examples

HuggingFace Datasets

# pip install datasets
from datasets import load_dataset

dataset = load_dataset("antoinejeannot/jurisprudence")
dataset.shape
# {'tribunal_judiciaire': (58986, 33),
#  'cour_d_appel': (378392, 33),
#  'cour_de_cassation': (534258, 33)}

# alternatively, you can load each jurisdiction separately
cour_d_appel = load_dataset("antoinejeannot/jurisprudence", "cour_d_appel")
tribunal_judiciaire = load_dataset("antoinejeannot/jurisprudence", "tribunal_judiciaire")
cour_de_cassation = load_dataset("antoinejeannot/jurisprudence", "cour_de_cassation")

Leveraging datasets allows you to easily ingest data into PyTorch, TensorFlow, JAX, etc.

BYOL: Bring Your Own Lib

For analysis, common libraries such as Polars, pandas, or DuckDB also work out of the box:

url = "https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/cour_de_cassation.parquet"  # or tribunal_judiciaire.parquet, cour_d_appel.parquet

# pip install polars
import polars as pl
df = pl.scan_parquet(url)  # lazy; call .collect() to materialize

# pip install pandas
import pandas as pd
df = pd.read_parquet(url)

# pip install duckdb
import duckdb
table = duckdb.read_parquet(url)

🪪 Citing & Authors

If you use this code in your research, please use the following BibTeX entry:

@misc{antoinejeannot2024,
  author = {Jeannot Antoine and {Cour de Cassation}},
  title = {Jurisprudence},
  year = {2024},
  howpublished = {\url{https://github.com/antoinejeannot/jurisprudence}},
  note = {Data source: API Judilibre, \url{https://www.data.gouv.fr/en/datasets/api-judilibre/}}
}

This project relies on the Judilibre API provided by the Cour de Cassation, which is made available under the Open License 2.0 (Licence Ouverte 2.0).

It scans the API every 3 days at midnight UTC and exports the data to Hugging Face in various formats, with no transformation other than format conversion.
