---
license: mit
language:
- tt
tags:
- tt
- tatar
- books
pretty_name: Collection of books in Tatar language in Cyrillic script
---
# Tatar Books Collection (Cyrillic) 📚

This dataset, hosted by [Yasalma](https://huggingface.co/neurotatarlar), contains a curated collection of Tatar books in `.txt` format. The texts are in Cyrillic script and are intended to support linguistic research, language modeling, and other natural language processing (NLP) applications for the Tatar language.

## Dataset Details

- **Language**: Tatar (Cyrillic script)
- **Format**: `.txt` files
- **License**: MIT
  
### Structure

Each text file corresponds to a single Tatar book and is stored as plain text, making it straightforward to parse and tokenize for NLP tasks.
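
If you want to inspect the raw files directly, `huggingface_hub` can download a local copy of the repository. This is a minimal sketch; the exact file layout and the UTF-8 encoding are assumptions to verify against the repository contents.

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Download a local copy of the dataset repository. The layout is an
# assumption: the .txt files may sit at the top level or in a subfolder.
local_dir = snapshot_download(
    repo_id="neurotatarlar/tt-books-cyrillic",
    repo_type="dataset",
)

# Each .txt file is one book; read it as plain text (UTF-8 assumed).
for path in sorted(Path(local_dir).rglob("*.txt")):
    text = path.read_text(encoding="utf-8")
    print(path.name, len(text), "characters")
```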

## Potential Use Cases

- **Language Modeling**: Train language models for Tatar in Cyrillic script (a minimal chunking sketch follows this list).
- **Machine Translation and Transliteration**: Use as source material for Cyrillic-to-Latin transliteration or for translation tasks involving Tatar.
- **Linguistic Research**: Analyze linguistic structures, grammar, and vocabulary in Tatar.
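
As a minimal illustration of preparing the books for language modeling, the sketch below splits a text into fixed-length word chunks that could serve as training examples. The chunking function and the chunk size are illustrative choices, not part of the dataset.

```python
from typing import Iterator


def chunk_text(text: str, chunk_size: int = 512) -> Iterator[str]:
    """Split a book into fixed-size word chunks for LM training examples.

    The default chunk size of 512 words is an arbitrary choice for illustration.
    """
    words = text.split()
    for start in range(0, len(words), chunk_size):
        yield " ".join(words[start : start + chunk_size])


# Example with a short Tatar sentence (taken from the sample below):
sample = "Юл газабы - гүр газабы, дисәләр дә, җәфа күрмичә генә, Мәккә каласына килеп төштек."
for chunk in chunk_text(sample, chunk_size=8):
    print(chunk)
```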

## Examples

Below is a short snippet from a sample text in the dataset:

```
Юл газабы - гүр газабы, дисәләр дә, җәфа күрмичә генә, Мәккә каласына килеп төштек. Кунакханәләрнең шәп дигәненә урнаштырдылар. Ашханәсе этаж саен, ашау-эчү бушка. Гыйбадәтеңне генә калдырма! Шулай итеп, без Согуд Гарәбстаны короленең кунаклары статусындагы хаҗилар булдык.
```

## Usage

To load the dataset using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("neurotatarlar/tt-books-cyrillic")
```
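
The split and column names produced by the loader are not documented here, so the follow-up below treats the usual `train` split with a single `text` column as an assumption and shows how to check it:

```python
from datasets import load_dataset

dataset = load_dataset("neurotatarlar/tt-books-cyrillic")

# The split and column names below are assumptions; inspect `dataset`
# and `column_names` to confirm the actual layout.
train = dataset["train"]
print(train.column_names)
print(train[0]["text"][:200])  # first 200 characters of the first record
```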

## Contributions and Acknowledgements

This dataset is maintained by the Yasalma team. Contributions, feedback, and suggestions that help improve and expand the dataset are welcome.