---
license: openrail
language:
- fr
tags:
- french
- philosophy
- quebec
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
## Dataset Description
### Dataset Summary
This dataset contains all French-language philosophy published on erudit.org. It was generated using a bs4 (BeautifulSoup) web parser that you can find in this repo: https://github.com/MFGiguere/french-philosophy-generator.
### Supported Tasks and Leaderboards
This dataset could be useful for a (non-exhaustive) set of tasks such as: detecting whether a text is philosophical, generating philosophical sentences, generating an abstract from an article, etc.
### Languages
The dataset includes all journals whose main language is French, but it may contain non-French sentences from quotations or special editions.
## Dataset Structure
### Data Instances
Each row of the dataset is a single sentence, and each column is a piece of the text's metadata.
### Data Fields
The data is structured as follows, which makes it possible to combine sentences into paragraphs, sections, or whole texts.
```
features = {
    "Journal": str,       # Name of the journal where the text was published.
    "Author": str,        # Required to be able to generate texts by author.
    "Year": str,          # Helps form a sense of chronology on a large scale.
    "Title": str,         # Useful for smaller datasets; can be inferred with enough files.
    "section_rank": int,  # 0 for the abstract; sections start at 1.
    "par_rank": int,      # 0 for the abstract; paragraphs start at 1.
    "sent_rank": int,     # Position of the sentence within its paragraph.
    "text": str           # A single sentence.
}
```
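As a minimal sketch of how the rank fields can be used, the snippet below reconstructs paragraphs from sentence-level rows by grouping on title, section, and paragraph ranks and joining sentences in order. The example rows are hypothetical; the column names are taken from the features dict above, and loading the real dataset (e.g. with pandas or the `datasets` library) is assumed.

```python
import pandas as pd

# Hypothetical sentence-level rows shaped like the dataset's features.
rows = pd.DataFrame([
    {"Title": "A", "section_rank": 1, "par_rank": 1, "sent_rank": 1, "text": "First sentence."},
    {"Title": "A", "section_rank": 1, "par_rank": 1, "sent_rank": 2, "text": "Second sentence."},
    {"Title": "A", "section_rank": 1, "par_rank": 2, "sent_rank": 1, "text": "New paragraph."},
])

# Sort sentences within each paragraph, then join them back into paragraph text.
paragraphs = (
    rows.sort_values("sent_rank")
        .groupby(["Title", "section_rank", "par_rank"])["text"]
        .agg(" ".join)
        .reset_index()
)
print(paragraphs["text"].tolist())
```

The same grouping, dropped down to `["Title", "section_rank"]` or `["Title"]`, yields sections or whole texts.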
## Additional Information
### Known limitations
Parsing was done in two phases: the first pass ran on a farm with a poor Wi-Fi connection, so some texts may have been partially or entirely skipped. A second pass was therefore run to append the missing texts to the dataset.
There were also inconsistencies that the parser tried to capture, but some inconsistencies remain, and no manual validation of the data was performed afterward.
### Contributions
This dataset exists thanks to the Deepmay 2023 bootcamp instructors, who gave us a solid introduction to language models, and a friend at the bootcamp who suggested that I host this dataset publicly here!