---
language:
- tr
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 292607887
    num_examples: 81708
  - name: validation
    num_bytes: 32771771
    num_examples: 9079
  download_size: 170747517
  dataset_size: 325379658
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

`habanoz/news-tr-90k` is a Turkish news dataset divided into training and validation splits.

The `Title`, `Summary`, and `Text` columns are merged into a single text document using the following format:

```text
# {x['Title']}

## Özet

{x['Summary']}

## İçerik

{x['Text']}
```
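The merge step above can be sketched as a small Python function; the row is assumed to be a plain dict exposing the `Title`, `Summary`, and `Text` fields shown in the template (the loading code is omitted here):

```python
def merge_row(x: dict) -> str:
    """Merge a news row's columns into one Markdown-style document,
    following the Title / Özet (Summary) / İçerik (Content) template."""
    return (
        f"# {x['Title']}\n"
        "\n"
        "## Özet\n"
        "\n"
        f"{x['Summary']}\n"
        "\n"
        "## İçerik\n"
        "\n"
        f"{x['Text']}"
    )


# With the Hugging Face `datasets` library, the same function could be
# applied to every row as: ds.map(lambda x: {"text": merge_row(x)})
```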

A tokenizer with an 8K token vocabulary is trained on this dataset. According to this tokenizer, the dataset contains 62,748,074 (~62.7M) training tokens and 7,015,414 (~7M) validation tokens.
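The card does not state which tokenizer algorithm was used, so the sketch below assumes a byte-pair-encoding (BPE) tokenizer built with the Hugging Face `tokenizers` library; the tiny in-memory corpus stands in for the merged `text` documents:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Illustrative only: the algorithm (BPE) and special tokens are assumptions,
# not details taken from the dataset card.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=8000, special_tokens=["[UNK]"])

# In practice the iterator would yield every merged document in the
# train split; a tiny in-memory corpus stands in for it here.
corpus = ["# Başlık\n\n## Özet\n\nÖzet metni", "## İçerik\n\nHaber gövdesi"]
tokenizer.train_from_iterator(corpus, trainer)

# Token counts like the ones quoted above are obtained by encoding each
# document and summing the encoded lengths.
total_tokens = sum(len(tokenizer.encode(doc).ids) for doc in corpus)
```

On the full dataset, running the same count over the train and validation splits would reproduce the 62.7M / 7M figures quoted above.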