---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: id
      dtype: string
    - name: content
      dtype: string
    - name: score
      dtype: int64
    - name: date_utc
      dtype: timestamp[ns]
    - name: title
      dtype: string
    - name: flair
      dtype: string
    - name: poster
      dtype: string
    - name: permalink
      dtype: string
    - name: nsfw
      dtype: bool
    - name: embedding
      sequence: float64
  splits:
    - name: train
      num_bytes: 180265339
      num_examples: 13118
  download_size: 133176404
  dataset_size: 180265339
tags:
  - not-for-all-audiences
---

# Dataset Card for "reddit-bestofredditorupdates-processed"

[More Information Needed]

--- Generated Part of README Below ---

## Dataset Overview

This dataset is based on derek-thomas/dataset-creator-reddit-bestofredditorupdates and adds nomic-ai/nomic-embed-text-v1 embeddings computed from the content field.

The goal is to provide an automatic, free semantic/neural search tool for any subreddit.
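Since each row stores a float64 embedding vector, a semantic search over the dataset can be sketched as a cosine-similarity ranking. This is a minimal illustration, not the actual tooling used here; the query vector and toy embeddings below are hypothetical stand-ins for real nomic-embed-text-v1 outputs.

```python
import numpy as np

def semantic_search(query_vec, embeddings, top_k=3):
    """Rank rows by cosine similarity between a query embedding and the
    dataset's `embedding` column (a list of float64 vectors)."""
    q = np.asarray(query_vec, dtype=np.float64)
    m = np.asarray(embeddings, dtype=np.float64)
    # Cosine similarity: dot product divided by the product of L2 norms.
    sims = m @ q / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)[:top_k]  # indices of the most similar rows
    return [int(i) for i in order], sims[order].tolist()

# Toy example: three fake 4-dimensional embeddings.
emb = [[1, 0, 0, 0], [0, 1, 0, 0], [0.9, 0.1, 0, 0]]
idx, scores = semantic_search([1, 0, 0, 0], emb, top_k=2)
```

In practice the query vector would come from embedding the user's search text with the same nomic-embed-text-v1 model used to build the `embedding` column.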

The last run was on 2024-11-03 05:00:00 UTC+0000 and added 12 new rows.

## Creation Details

Each repository-update webhook triggers the derek-thomas/processing-bestofredditorupdates Space, which calculates the embeddings and updates the Nomic Atlas visualization.
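The gating logic for such a webhook can be sketched as a small predicate: only content updates to the source dataset should kick off reprocessing. The payload shape below is a simplified assumption, not the exact Hugging Face webhook schema, and the function name is hypothetical.

```python
SOURCE_REPO = "derek-thomas/dataset-creator-reddit-bestofredditorupdates"

def should_process(payload: dict) -> bool:
    """Return True if a (simplified, hypothetical) webhook payload describes
    a content update to the source dataset repo; ignore everything else
    (e.g. discussion events or pushes to unrelated repos)."""
    event = payload.get("event", {})
    repo = payload.get("repo", {})
    return (
        event.get("action") == "update"
        and event.get("scope") == "repo.content"
        and repo.get("name") == SOURCE_REPO
    )
```

A handler in the processing Space would call a predicate like this before recomputing embeddings, so that unrelated webhook deliveries are cheap no-ops.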

## Update Frequency

The dataset is updated by a webhook trigger: each time derek-thomas/dataset-creator-reddit-bestofredditorupdates is updated, this dataset is updated as well.

## Opt-out

To opt out of this dataset, please make a request in the Community tab.