---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
tags:
- legal
- patent
size_categories:
- 1K<n<10K
pretty_name: HUPD-DCG
---
# Dataset Card for HUPD-DCG (Description-based Claim Generation)
<!-- Provide a quick summary of the dataset. -->
This dataset is for generating patent claims from the corresponding patent description.
## Dataset Information
<!-- Provide a longer summary of what this dataset is. -->
- **Repository:** https://github.com/scylj1/LLM4DPCG
- **Paper:** https://arxiv.org/abs/2406.19465
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset consists of two zip archives, one for training data and one for test data. The training and test folders contain 8,244 and 1,311 patent files respectively. Each file is a JSON document that contains detailed information about a patent application; we use the claims and the full description. A minimal loading sketch follows.
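The snippet below is a minimal sketch of how one file might be read after unzipping. The folder name `train` and the field names `claims` and `full_description` are assumptions (the field names follow HUPD's published schema); verify them against an actual file before relying on this.

```python
import json
from pathlib import Path

train_dir = Path("train")  # assumed folder name after unzipping the training archive

# Read one patent file and pull out the two fields HUPD-DCG uses.
for path in sorted(train_dir.glob("*.json"))[:1]:
    with path.open(encoding="utf-8") as f:
        patent = json.load(f)
    description = patent["full_description"]  # model input (assumed field name)
    claims = patent["claims"]                 # generation target (assumed field name)
    print(path.name, len(description), len(claims))
```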
## Dataset Creation
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
[The Harvard USPTO Patent Dataset (HUPD)](https://huggingface.co/datasets/HUPD/hupd/) is a recently collected, large-scale, multi-purpose patent dataset containing more than 4.5 million patent documents, each with 34 data fields (including the description, abstract, and claims). HUPD-DCG was constructed by filtering a subset of target documents from this larger dataset.
### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
First, we selected all patent documents filed in 2017. We removed pending and rejected applications and kept only granted patents, in order to formulate a high-quality dataset for claim generation. Considering the context length of some LLMs, such as Llama-3, we kept only documents whose description is shorter than 8,000 tokens. In practical settings, models are trained on existing patent documents and then applied to new applications. To simulate this realistic scenario, we ordered the documents by filing date and used the last 1,311 documents (about 14% of the whole dataset) for testing. The sketch below illustrates these selection criteria.
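As a rough illustration of the filtering and splitting steps described above: the field names (`decision`, `filing_date`, `full_description`), the decision value `"ACCEPTED"`, and the tokenizer choice are assumptions for the sketch, not details confirmed by the paper.

```python
from transformers import AutoTokenizer

# Stand-in tokenizer for measuring description length; the paper does not
# specify the exact tokenizer used for the 8,000-token cutoff.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

patents: list[dict] = []  # load parsed HUPD JSON records here (assumed schema)

def keep(patent: dict) -> bool:
    granted = patent["decision"] == "ACCEPTED"           # drop pending/rejected
    filed_2017 = patent["filing_date"].startswith("2017")
    n_tokens = len(tokenizer.encode(patent["full_description"]))
    return granted and filed_2017 and n_tokens < 8000    # fits the context window

# Order by filing date and hold out the most recent 1,311 documents
# (about 14% of the dataset) as the test split.
selected = sorted((p for p in patents if keep(p)), key=lambda p: p["filing_date"])
train, test = selected[:-1311], selected[-1311:]
```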
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```
@article{jiang2024can,
  title={Can Large Language Models Generate High-quality Patent Claims?},
  author={Jiang, Lekang and Zhang, Caiqi and Scherz, Pascal A and Goetz, Stephan},
  journal={arXiv preprint arXiv:2406.19465},
  year={2024}
}
```