---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
tags:
- legal
- patent
size_categories:
- 1K<n<10K
pretty_name: HUPD-DCG
---

# Dataset Card for HUPD-DCG (Description-based Claim Generation)
This dataset is used for generating patent claims based on the patent description.
## Dataset Information
- Repository: https://github.com/scylj1/LLM4DPCG
- Paper: https://arxiv.org/abs/2406.19465
## Dataset Structure
The dataset is distributed as two zip archives, one for training data and one for test data. The training and test folders contain 8,244 and 1,311 patent files, respectively. Each file is a JSON document with detailed information about a patent application; we use the claims and the full description parts.
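As a quick orientation, the snippet below unzips the test archive and reads one patent file. The archive name and the JSON keys (`full_description`, `claims`) are assumptions based on the description above and may need adjusting to match the actual files.

```python
import json
import zipfile
from pathlib import Path

# Minimal sketch (assumed archive and key names): unzip the test archive
# and read the claims and full description of one patent application.
with zipfile.ZipFile("test.zip") as zf:   # assumed archive name
    zf.extractall("test")

sample_path = next(Path("test").rglob("*.json"))
with open(sample_path) as f:
    patent = json.load(f)

description = patent["full_description"]  # model input (assumed key)
claims = patent["claims"]                 # generation target (assumed key)
print(sample_path.name, len(description), len(claims))
```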
## Dataset Creation
### Source Data
The Harvard USPTO Patent Dataset (HUPD) is a recently released large-scale, multi-purpose patent dataset containing more than 4.5 million patent documents with 34 data fields, including the patent description, abstract, and claims. HUPD-DCG is constructed by filtering a subset of target documents from this larger dataset.
### Data Collection and Processing
First, we selected all patent documents filed in 2017. We removed pending and rejected applications and kept only granted documents to form a high-quality dataset for claim generation. Given the context-length limits of some LLMs, such as Llama-3, we kept only documents whose descriptions are shorter than 8,000 tokens. In practical settings, models are trained on existing patent documents and then applied to new applications. To simulate this scenario, we ordered the documents by filing date and used the last 1,311 documents for testing, which is about 14% of the whole dataset.
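For illustration, here is a minimal sketch of the filtering and chronological split described above. The field names (`decision`, `filing_date`, `full_description`), the `ACCEPTED` label, and the choice of the Llama-3 tokenizer are assumptions for illustration, not the authors' exact pipeline.

```python
from transformers import AutoTokenizer

# Illustrative sketch of the filtering and chronological split described above.
# Field names, labels, and the tokenizer choice are assumptions, not the authors' exact code.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def build_split(patents_2017, test_size=1311, max_tokens=8000):
    """Keep granted 2017 filings with short descriptions, then split by filing date."""
    kept = [
        p for p in patents_2017
        if p.get("decision") == "ACCEPTED"                             # granted only (assumed label)
        and len(tokenizer.encode(p["full_description"])) < max_tokens  # description < 8,000 tokens
    ]
    kept.sort(key=lambda p: p["filing_date"])    # chronological order
    return kept[:-test_size], kept[-test_size:]  # last 1,311 documents form the test set
```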
## Citation
```bibtex
@article{jiang2024can,
  title={Can Large Language Models Generate High-quality Patent Claims?},
  author={Jiang, Lekang and Zhang, Caiqi and Scherz, Pascal A and Goetz, Stephan},
  journal={arXiv preprint arXiv:2406.19465},
  year={2024}
}
```