---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: score
    dtype: int64
  - name: date_utc
    dtype: timestamp[ns]
  - name: title
    dtype: string
  - name: flair
    dtype: string
  - name: poster
    dtype: string
  - name: permalink
    dtype: string
  - name: nsfw
    dtype: bool
  - name: embedding
    sequence: float64
  splits:
  - name: train
    num_bytes: 180265339
    num_examples: 13118
  download_size: 133176404
  dataset_size: 180265339
tags:
- not-for-all-audiences
---
# Dataset Card for "reddit-bestofredditorupdates-processed"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

--- Generated Part of README Below ---


## Dataset Overview
This dataset is based on [derek-thomas/dataset-creator-reddit-bestofredditorupdates](https://huggingface.co/datasets/derek-thomas/dataset-creator-reddit-bestofredditorupdates)
and adds [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1) embeddings computed from the
`content` field.

The goal is to provide an automatic, free semantic/neural search tool for any subreddit.

The last run was on 2024-11-03 05:00:00 UTC and added 12 new rows.
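As a quick illustration of that goal, here is a minimal sketch of nearest-neighbor search over the stored `embedding` column. The repo id passed to `load_dataset` is an assumption based on this card's title, and the `search_query:` prefix follows the usage notes for `nomic-embed-text-v1` (which also requires `trust_remote_code=True` with `sentence-transformers`):

```python
# Sketch: semantic search over the precomputed embeddings.
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Assumed repo id for this dataset; adjust if it differs.
ds = load_dataset("derek-thomas/reddit-bestofredditorupdates-processed", split="train")

# nomic-embed-text-v1 expects a task prefix on queries.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
query_vec = model.encode("search_query: stories about workplace revenge")

emb = np.asarray(ds["embedding"])                      # shape: (num_rows, dim)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True) # normalize rows
query_vec = query_vec / np.linalg.norm(query_vec)

scores = emb @ query_vec                               # cosine similarity
for i in np.argsort(-scores)[:5]:
    print(f"{scores[i]:.3f}  {ds[int(i)]['title']}")
```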

## Creation Details
This is done by triggering the [derek-thomas/processing-bestofredditorupdates](https://huggingface.co/spaces/derek-thomas/processing-bestofredditorupdates)
Space via a repository-update [webhook](https://huggingface.co/docs/hub/en/webhooks); the Space calculates the embeddings and updates the [Nomic Atlas](https://docs.nomic.ai)
visualization.
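The Space's code isn't reproduced here, but the embedding step it describes could look roughly like the sketch below. The `search_document:` prefix and batch size are assumptions from the `nomic-embed-text-v1` usage notes, and the target repo id is hypothetical:

```python
# Sketch: add nomic-embed-text-v1 embeddings for the `content` field.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

source = load_dataset(
    "derek-thomas/dataset-creator-reddit-bestofredditorupdates", split="train"
)
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

def embed(batch):
    # nomic-embed-text-v1 expects a task prefix on each input document.
    texts = [f"search_document: {t}" for t in batch["content"]]
    batch["embedding"] = model.encode(texts).tolist()
    return batch

processed = source.map(embed, batched=True, batch_size=32)
processed.push_to_hub("your-username/reddit-bestofredditorupdates-processed")  # assumed target
```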

## Update Frequency
The dataset is updated based on a [webhook](https://huggingface.co/docs/hub/en/webhooks) trigger, so each time [derek-thomas/dataset-creator-reddit-bestofredditorupdates](https://huggingface.co/datasets/derek-thomas/dataset-creator-reddit-bestofredditorupdates)
is updated, this dataset will be updated. 
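For reference, a receiver for such a webhook could be wired up along these lines. This is an illustrative sketch, not the actual Space code: the `X-Webhook-Secret` header is the one documented for Hub webhooks, while `run_processing` is a hypothetical helper standing in for the embedding and Atlas update:

```python
# Illustrative webhook receiver; the real processing happens in the linked Space.
import os
from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"]

def run_processing() -> None:
    """Hypothetical: embed new rows and refresh the Atlas map."""
    ...

@app.post("/webhook")
async def handle_webhook(request: Request, x_webhook_secret: str = Header(None)):
    # The Hub sends the secret configured for the webhook in this header.
    if x_webhook_secret != WEBHOOK_SECRET:
        raise HTTPException(status_code=401, detail="invalid secret")
    payload = await request.json()
    # Only react to updates of the source dataset repo.
    repo_name = payload.get("repo", {}).get("name")
    if repo_name == "derek-thomas/dataset-creator-reddit-bestofredditorupdates":
        run_processing()
    return {"status": "ok"}
```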

## Opt-out
To opt out of this dataset, please make a request in the Community tab.