---
language:
  - tr
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 292607887
      num_examples: 81708
    - name: validation
      num_bytes: 32771771
      num_examples: 9079
  download_size: 170747517
  dataset_size: 325379658
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---
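For reference, the splits declared above can be loaded with the Hugging Face `datasets` library. This is a minimal sketch; the repository id `habanoz/haber-90k-gpt-text` is assumed from the page name and is not stated in the card body.

```python
# Minimal sketch: load the dataset and inspect the splits declared in the card.
# The repository id "habanoz/haber-90k-gpt-text" is an assumption.
from datasets import load_dataset

ds = load_dataset("habanoz/haber-90k-gpt-text")

print(ds["train"].num_rows)          # expected: 81708
print(ds["validation"].num_rows)     # expected: 9079
print(ds["train"][0]["text"][:200])  # start of one merged document
```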

The habanoz/news-tr-90k dataset is divided into training and validation splits.

The columns of each example are merged into a single text document using the following format ("Özet" means "Summary" and "İçerik" means "Content" in Turkish):

```
# {x['Title']}

## Özet

{x['Summary']}

## İçerik

{x['Text']}
```
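The preprocessing script itself is not included in this card; below is a minimal sketch of the merging step with `datasets.map`, assuming the source columns are named `Title`, `Summary`, and `Text` as in the template above.

```python
# Sketch of the column-merging step, assuming habanoz/news-tr-90k
# exposes "Title", "Summary", and "Text" columns.
from datasets import load_dataset

TEMPLATE = "# {title}\n\n## Özet\n\n{summary}\n\n## İçerik\n\n{text}"

def merge_columns(x):
    # Collapse the three source columns into the single "text" document.
    return {"text": TEMPLATE.format(title=x["Title"], summary=x["Summary"], text=x["Text"])}

source = load_dataset("habanoz/news-tr-90k")
merged = source.map(merge_columns, remove_columns=["Title", "Summary", "Text"])
```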

A tokenizer with an 8K-token vocabulary was trained on this dataset. Measured with that tokenizer, the dataset contains 62,748,074 (62.7M) training tokens and 7,015,414 (7.0M) validation tokens.
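The card does not say which tokenizer implementation or algorithm was used. The sketch below shows one plausible setup with the Hugging Face `tokenizers` library: a byte-level BPE tokenizer with an assumed vocabulary size of 8192, plus a simple token count over each split.

```python
# Illustrative only: the tokenizer type and the exact vocabulary size (8192)
# are assumptions, not details given in the dataset card.
from datasets import load_dataset
from tokenizers import ByteLevelBPETokenizer

ds = load_dataset("habanoz/haber-90k-gpt-text")

tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    (row["text"] for row in ds["train"]),
    vocab_size=8192,
)

def count_tokens(split):
    # Sum token counts document by document with the freshly trained tokenizer.
    return sum(len(tokenizer.encode(row["text"]).ids) for row in split)

print("train tokens:", count_tokens(ds["train"]))
print("validation tokens:", count_tokens(ds["validation"]))
```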