Dataset schema: id (int64, 959M to 2.55B) · title (string, 3-133 chars) · body (string, 1-65.5k chars) · description (string, 5-65.6k chars) · state (string, 2 classes) · created_at / updated_at / closed_at (string, 20 chars) · user (string, 174 classes)
1,601,597,185
Turn /parquet-and-dataset-info into a config-level job
We should:
- compute the values for every config independently: `config--parquet-and-dataset-info`
- no need to compute things at the dataset level

See #735
closed
2023-02-27T17:12:18Z
2023-05-15T08:18:19Z
2023-05-15T08:18:19Z
severo
1,601,595,175
Turn /parquet into a config-level job
We should:
- compute the values for every config independently: `config--parquet`
- compute the dataset-level response `dataset--parquet` each time a `config--parquet` is computed, allowing partial responses: some config responses can be missing or erroneous (see the sketch after this record)
- return the appropriate response on /parquet, depending on the inputs (see https://github.com/huggingface/datasets-server/pull/834)

See #735.
closed
2023-02-27T17:11:15Z
2023-03-15T12:07:25Z
2023-03-15T11:03:49Z
severo
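A minimal sketch of the fan-in described in the record above, assuming a dict-like cache and a hypothetical cache-key scheme (`config--parquet,{dataset},{config}`); none of the names mirror the actual implementation. The dataset-level response is rebuilt from whatever config-level responses exist, tolerating missing or erroneous ones:

```python
from typing import Any

def compute_dataset_parquet(
    dataset: str, configs: list[str], cache: dict[str, Any]
) -> dict[str, Any]:
    """Aggregate config-level responses into a (possibly partial) dataset-level one."""
    parquet_files: list[dict[str, Any]] = []
    failed: dict[str, Any] = {}
    for config in configs:
        response = cache.get(f"config--parquet,{dataset},{config}")
        if response is None or "error" in response:
            failed[config] = response  # missing or erroneous: keep going
        else:
            parquet_files.extend(response["parquet_files"])
    return {"parquet_files": parquet_files, "failed": failed, "partial": bool(failed)}
```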
1,601,593,151
Turn /dataset-info into a config-level job
We should:
- compute the values for every config independently: `config--dataset-info`
- compute the dataset-level response `dataset--dataset-info` each time a `config--dataset-info` is computed (allowing partial responses: some config responses can be missing or erroneous)
- return the appropriate response on /dataset-info, depending on the inputs (see https://github.com/huggingface/datasets-server/pull/834)

See #735.
closed
2023-02-27T17:10:08Z
2023-03-24T10:46:57Z
2023-03-24T10:46:57Z
severo
1,601,470,902
Turn /sizes into a config-level job
We should:
- compute the values for every config independently: `config--sizes`
- compute the dataset-level response `dataset--sizes` each time a `config--sizes` is computed (allowing partial responses: some config responses can be missing or erroneous)
- return the appropriate response on /sizes, depending on the inputs (see https://github.com/huggingface/datasets-server/pull/834)

See #735.
closed
2023-02-27T16:01:43Z
2023-03-14T15:36:13Z
2023-03-14T15:36:13Z
severo
1,601,357,867
Ensure the dates stored in mongo (jobs, cache) are localized
See https://github.com/huggingface/datasets-server/pull/850#discussion_r1118518118.
closed
2023-02-27T14:59:58Z
2024-08-22T09:49:11Z
2024-08-22T09:49:11Z
severo
1,601,220,984
Avoid collisions in unicity_id field in the jobs collection
See https://github.com/huggingface/datasets-server/pull/837#pullrequestreview-1315579718
closed
2023-02-27T13:47:38Z
2023-04-27T07:22:18Z
2023-04-27T07:22:18Z
severo
1,601,219,803
Unit test the database migration scripts
See https://github.com/huggingface/datasets-server/pull/837#pullrequestreview-1315579718
closed
2023-02-27T13:46:52Z
2023-03-29T18:06:09Z
2023-03-29T18:06:09Z
severo
1,601,216,873
Fix the id field in the migration scripts
See https://github.com/huggingface/datasets-server/pull/825#discussion_r1115923999
closed
2023-02-27T13:44:58Z
2023-05-01T15:04:04Z
2023-05-01T15:04:04Z
severo
1,601,119,285
Dataset Viewer issue for UrukHan/t5-russian-summarization
### Link

https://huggingface.co/datasets/UrukHan/t5-russian-summarization

### Description

The dataset viewer is not working for dataset UrukHan/t5-russian-summarization. Error details:

```
Error code: ClientConnectionError
```
closed
2023-02-27T12:46:32Z
2023-02-27T12:53:00Z
2023-02-27T12:53:00Z
islombek751
1,601,092,930
Contribute to https://github.com/huggingface/huggingface.js?
https://github.com/huggingface/huggingface.js is a JS client for the Hub and inference. We could propose to add a client for the datasets-server.
closed
2023-02-27T12:27:43Z
2023-04-08T15:04:09Z
2023-04-08T15:04:09Z
severo
1,601,000,591
Dataset Viewer issue for bigscience/P3
### Link

https://huggingface.co/datasets/bigscience/P3

### Description

The dataset viewer is not working for dataset bigscience/P3. Error details:

```
Error code: ClientConnectionError
```
closed
2023-02-27T11:28:14Z
2023-03-01T12:30:34Z
2023-03-01T12:30:34Z
FangxuLiu
1,600,892,793
Dataset Viewer issue for openai/summarize_from_feedback
### Link

https://huggingface.co/datasets/openai/summarize_from_feedback

### Description

The dataset viewer is not working for dataset openai/summarize_from_feedback. Error details:

```
Error code: ClientConnectionError
```
closed
2023-02-27T10:23:18Z
2023-02-27T10:29:46Z
2023-02-27T10:29:45Z
lewtun
1,600,782,797
Turn `get_new_splits` into an abstract method
See https://github.com/huggingface/datasets-server/pull/839#issuecomment-1443264927. cc @AndreaFrancis
closed
2023-02-27T09:16:40Z
2023-05-10T16:05:12Z
2023-05-10T16:05:12Z
severo
1,600,717,827
Support all the characters in dataset, config and split
For example, a space is an allowed character in a config, while it's not supported in datasets-server (an encoding sketch follows after this record).

https://discuss.huggingface.co/t/problem-with-dataset-preview-with-audio-files/31475/3?u=severo

cc @polinaeterna
closed
2023-02-27T08:32:34Z
2023-06-26T07:34:40Z
2023-06-26T07:34:40Z
severo
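One possible approach, shown only as an illustration and not necessarily the fix that was shipped: percent-encode names so that characters such as spaces survive being embedded in URLs and storage paths. The function name is invented:

```python
from urllib.parse import quote, unquote

def encode_name(name: str) -> str:
    """Percent-encode a dataset/config/split name for use in URLs and paths."""
    return quote(name, safe="")

# Round-trip example with a config name containing a space.
encoded = encode_name("my config")   # 'my%20config'
assert unquote(encoded) == "my config"
```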
1,600,713,870
Store the parquet metadata in their own file?
See https://github.com/huggingface/datasets/issues/5380#issuecomment-1444281177

> From looking at Arrow's source, it seems Parquet stores metadata at the end, which means one needs to iterate over a Parquet file's data before accessing its metadata. We could mimic Dask to address this "limitation" and write metadata in a _metadata/_common_metadata file in to_parquet/push_to_hub, which we could then use to optimize reads (if present). Plus, it's handy that PyArrow can also parse these metadata files.

(A pyarrow sketch follows after this record.)
closed
2023-02-27T08:29:12Z
2023-05-01T15:04:07Z
2023-05-01T15:04:07Z
severo
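A sketch of the Dask-style sidecar the quoted comment describes, using pyarrow; the shard names are hypothetical. Each shard's footer metadata is collected into a single `_metadata` file that readers can consult without opening the data files:

```python
import pyarrow.parquet as pq

paths = ["part-0.parquet", "part-1.parquet"]  # hypothetical shard names

# Parquet keeps its metadata in the file footer; read just the footers.
metadatas = []
for path in paths:
    md = pq.read_metadata(path)
    md.set_file_path(path)  # record which file each row group belongs to
    metadatas.append(md)

# Merge all row-group metadata and write the Dask-style sidecar file.
combined = metadatas[0]
for md in metadatas[1:]:
    combined.append_row_groups(md)
combined.write_metadata_file("_metadata")
```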
1,599,353,609
POC: Adding mongo TTL index to Jobs
Will fix https://github.com/huggingface/datasets-server/issues/818

From the mongo doc on [TTL indexes](https://www.mongodb.com/docs/manual/core/index-ttl/):

> TTL indexes are special single-field indexes that MongoDB can use to automatically remove documents from a collection after a certain amount of time or at a specific clock time.

This adds an index on the Job collection so documents are removed after 7 days = 604800 seconds. Mongo also supports adding filters to this index, e.g. if we would like to remove only the SUCCEEDED jobs, we can do so with a [partial index](https://www.mongodb.com/docs/manual/core/index-partial/). Note that by default the mongod TTL monitor service runs every 60 seconds, though that configuration can be modified; that is, all jobs whose finished_at value is less than current_time - QUEUE_TTL_SECONDS will be removed every 60 seconds by default. Documents without a value in the "finished_at" field are not removed. (A sketch follows after this record.)
closed
2023-02-24T22:33:59Z
2023-02-27T18:54:09Z
2023-02-27T18:51:08Z
AndreaFrancis
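A minimal sketch of such a TTL index with pymongo; the database name, collection name, and status value are assumptions, and only the index options come from the record above:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumption: local instance
jobs = client["queue"]["jobs"]  # hypothetical database and collection names

# TTL index on finished_at: documents become eligible for removal once
# finished_at < now - expireAfterSeconds (here 7 days = 604800 seconds).
# Documents with no finished_at value are never removed.
jobs.create_index(
    "finished_at",
    expireAfterSeconds=604800,
    # Partial filter so only finished (e.g. SUCCEEDED) jobs are expired.
    partialFilterExpression={"status": "success"},
)
```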
1,599,176,975
Reorganize executor
Took @severo's comments from https://github.com/huggingface/datasets-server/pull/827 and https://github.com/huggingface/datasets-server/pull/824. In particular:
- I moved the queue-related stuff to queue.py
- I moved all the job-runner-related stuff to job_runner.py
- I shortened the env variable names
- I added Job.info() to avoid repeating the same error-prone dict creation over and over
- I added tests for each part that I moved
closed
2023-02-24T19:29:09Z
2023-02-28T09:51:44Z
2023-02-28T09:49:13Z
lhoestq
1,598,844,039
Serve openapi.json from the docs
Currently https://github.com/huggingface/datasets-server/blob/main/chart/static-files/openapi.json is a file in chart; every change in it should trigger a new version of the Chart. But, semantically, it does not belong to the chart and should be part of the ~~API service~~ docs.
closed
2023-02-24T15:24:29Z
2024-06-19T14:03:22Z
2024-06-19T14:03:22Z
severo
1,598,805,446
Setup argocd action
null
closed
2023-02-24T15:06:05Z
2023-02-24T15:35:24Z
2023-02-24T15:32:32Z
severo
1,598,639,547
CI is failing due to vulnerability in markdown-it-py 2.1.0
Vulnerabilities: https://github.com/huggingface/datasets-server/actions/runs/4262829184/jobs/7418801386

```
Found 2 known vulnerabilities in 1 package
Name           Version ID                  Fix Versions
-------------- ------- ------------------- ------------
markdown-it-py 2.1.0   GHSA-jrwr-5x3p-hvc3 2.2.0
markdown-it-py 2.1.0   GHSA-vrjv-mxr7-vjf8 2.2.0
```
closed
2023-02-24T13:32:13Z
2023-02-24T13:55:21Z
2023-02-24T13:55:21Z
albertvillanova
1,598,584,025
[once in hfh] improve /parquet-and-dataset-info job runner
Once https://github.com/huggingface/huggingface_hub/pull/1331 is released, upgrade huggingface_hub and rework how we create the refs/convert/parquet branch in https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/parquet_and_dataset_info.py#L805 to have an empty history.
closed
2023-02-24T12:58:52Z
2023-04-11T14:14:41Z
2023-04-11T14:14:41Z
severo
1,598,448,053
chore: 🤖 upgrade dependencies to fix vulnerability
replaces #838
closed
2023-02-24T11:23:22Z
2023-02-24T13:57:53Z
2023-02-24T13:55:20Z
severo
1,598,324,097
Some datasets on the hub have a broken refs/convert/parquet
See https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated/tree/refs%2Fconvert%2Fparquet

(Screenshot: "Capture d’écran 2023-02-24 à 11 11 35", showing the refs/convert/parquet tree.)

It seems like the branch has been created from `main`, but then no parquet files have been sent. It should be removed.
closed
2023-02-24T10:12:09Z
2023-04-13T15:04:18Z
2023-04-13T15:04:18Z
severo
1,598,233,383
Add pdb to avoid disruption when we update kubernetes
null
closed
2023-02-24T09:13:09Z
2023-02-24T10:23:54Z
2023-02-24T10:21:21Z
XciD
1,598,173,782
Dataset Viewer issue for bridgeconn/snow-mountain
### Link

https://huggingface.co/datasets/bridgeconn/snow-mountain

### Description

The dataset viewer is not working for dataset bridgeconn/snow-mountain. Error details:

```
Error code: ResponseNotReady
```
closed
2023-02-24T08:34:01Z
2023-02-28T08:28:54Z
2023-02-28T08:28:54Z
anjalyjayakrishnan
1,597,557,654
Dataset Viewer issue for artem9k/ai-text-detection-pile
### Link

https://huggingface.co/datasets/artem9k/ai-text-detection-pile

### Description

The dataset viewer is not working for dataset artem9k/ai-text-detection-pile. I am trying to load a jsonl file. Error details:

```
Error code: ResponseNotReady
```
closed
2023-02-23T21:33:10Z
2023-02-28T08:29:32Z
2023-02-28T08:29:31Z
sumo43
1,597,547,115
Dataset Viewer issue for birgermoell/synthetic_compassion
### Link

https://huggingface.co/datasets/birgermoell/synthetic_compassion

### Description

The dataset viewer is not working for dataset birgermoell/synthetic_compassion. Error details:

```
Error code: ClientConnectionError
```

I'm getting this error when creating my new dataset. I have a metadata.csv file in the following format and a data folder with mp3 files named 1.mp3, 2.mp3 and so on. The only thing I can think of is that the order of the files isn't 1, 2, 3 in the metadata.csv file, but I can't see why that should matter.

(Screenshot: "Screenshot 2023-02-23 at 22 21 26", showing the metadata.csv contents.)
closed
2023-02-23T21:22:32Z
2023-03-01T12:30:49Z
2023-03-01T12:30:48Z
BirgerMoell
1,597,542,036
Adding missing function on split-names-from-dataset-info worker
The `get_new_splits` function is used when creating children jobs; this function was missing in the new worker `split-names-from-dataset-info`. (A sketch of the contract follows after this record.)
closed
2023-02-23T21:17:40Z
2023-02-27T09:17:08Z
2023-02-24T20:30:39Z
AndreaFrancis
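A hedged sketch of that contract; the class names and return type are assumptions, not the repository's actual signatures. `get_new_splits` is declared abstract on the base job runner, and each concrete runner derives the set of splits from its computed content:

```python
from abc import ABC, abstractmethod

class JobRunner(ABC):  # hypothetical base class name
    @abstractmethod
    def get_new_splits(self, content: dict) -> set[tuple[str, str, str]]:
        """Return the (dataset, config, split) tuples the response defines."""

class SplitNamesFromDatasetInfoJobRunner(JobRunner):
    def get_new_splits(self, content: dict) -> set[tuple[str, str, str]]:
        # Children jobs are created for each split listed in the response.
        return {
            (item["dataset"], item["config"], item["split"])
            for item in content["splits"]
        }
```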
1,597,447,850
chore(deps): bump markdown-it-py from 2.1.0 to 2.2.0 in /front/admin_ui
Bumps [markdown-it-py](https://github.com/executablebooks/markdown-it-py) from 2.1.0 to 2.2.0.

Release notes for v2.2.0:
- ⬆️ UPGRADE: Allow linkify-it-py v2 (#218)
- 🐛 FIX: CVE-2023-26303 (#246)
- 🐛 FIX: CLI crash on non-utf8 character (#247)
- 📚 DOCS: Update the example (#229)
- 📚 DOCS: Add section about markdown renderer (#227)
- 🔧 Create SECURITY.md (#248)
- 🔧 MAINTAIN: Update mypy's additional dependencies (#217)
- Fix typo (#230)
- 🔧 Bump GH actions (#244)
- 🔧 Update benchmark pkg versions (#245)

New contributors: @jwilk (#230), @holamgadol (#227), @redstoneleo (#229).

Full changelog: https://github.com/executablebooks/markdown-it-py/compare/v2.1.0...v2.2.0

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
closed
2023-02-23T19:55:42Z
2023-02-24T12:53:30Z
2023-02-24T12:53:19Z
dependabot[bot]
1,597,359,687
Fix typos in split-names-from-streaming
null
closed
2023-02-23T18:45:40Z
2023-02-27T16:22:09Z
2023-02-27T16:19:24Z
AndreaFrancis
1,597,200,392
Should we run the complete CI on push to main?
Currently we only build the docker images when merging a PR to `main`, and we have to rely on the PR CI to be all green. Maybe we could launch everything when we merge into `main`, just to be sure, and to keep a trace of possible issues.
closed
2023-02-23T16:51:11Z
2023-08-07T15:58:48Z
2023-08-05T15:03:58Z
severo
1,596,911,335
Move `required_by_dataset_viewer` to services/api
See https://github.com/huggingface/datasets-server/pull/817#issuecomment-1431477692
closed
2023-02-23T13:51:55Z
2023-06-14T12:15:06Z
2023-06-14T12:15:05Z
severo
1,595,827,930
Endpoint respond by input type
Part of https://github.com/huggingface/datasets-server/issues/755. This change will allow endpoints to respond based on input type, using a list of processing steps, e.g. (sketched after this record):
- /splits with a dataset param will reach out to the /splits cache kind
- /splits with a config param will reach out to /splits-from-streaming first and then /splits-from-dataset-info
closed
2023-02-22T21:04:03Z
2023-03-01T14:40:18Z
2023-03-01T14:37:44Z
AndreaFrancis
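A rough sketch of that dispatch, with a dict standing in for the cache and step names borrowed from the record above; none of this mirrors the actual implementation. The endpoint picks an ordered list of cache kinds from the request parameters and returns the first entry found:

```python
# Ordered cache kinds to try, keyed by which params the request carries.
STEPS_BY_INPUT = {
    ("dataset",): ["/splits"],
    ("dataset", "config"): [
        "/split-names-from-streaming",
        "/split-names-from-dataset-info",
    ],
}

def get_response(params: dict, cache: dict) -> dict:
    present = tuple(k for k in ("dataset", "config") if params.get(k))
    for kind in STEPS_BY_INPUT.get(present, []):
        entry = cache.get((kind, params.get("dataset"), params.get("config")))
        if entry is not None:
            return entry  # first processing step with a cached response wins
    raise LookupError("ResponseNotReady")
```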
1,595,323,349
Lower parquet row group size for image datasets
REQUIRES test_get_writer_batch_size to be merged, and to update the `datasets` version to use this feature. This should help optimize random access to parquet files for https://github.com/huggingface/datasets-server/pull/687/files (a sketch of the idea follows after this record).
closed
2023-02-22T15:34:07Z
2023-04-21T14:12:40Z
2023-04-21T14:09:52Z
lhoestq
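An illustrative sketch of the idea, assuming pyarrow and an invented heuristic; the real writer-batch-size logic lives in `datasets`. Smaller row groups mean a single-row read touches less data, which matters most for wide rows like images:

```python
import pyarrow as pa
import pyarrow.parquet as pq

def get_row_group_size(has_images: bool) -> int:
    # Invented numbers: small groups for heavy image rows, large otherwise.
    return 100 if has_images else 10_000

table = pa.table({"id": list(range(1_000)), "label": ["x"] * 1_000})
pq.write_table(
    table,
    "out.parquet",
    row_group_size=get_row_group_size(has_images=False),
)
```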
1,595,294,878
Fix CI mypy after datasets 2.10.0 release
Fix #831.
closed
2023-02-22T15:16:56Z
2023-02-22T15:36:03Z
2023-02-22T15:33:00Z
albertvillanova
1,595,289,463
CI is broken after datasets 2.10.0 release
After updating `datasets` dependency to 2.10.0, the CI is broken:

```
error: Skipping analyzing "datasets": module is installed, but missing library stubs or py.typed marker
```

See: https://github.com/huggingface/datasets-server/actions/runs/4243203372/jobs/7375645159
closed
2023-02-22T15:13:44Z
2023-02-22T15:33:15Z
2023-02-22T15:33:15Z
albertvillanova
1,595,123,903
Add `worker_version` to queued `Job`
This can be useful when e.g. killing a zombie job because right now we assume that the current worker version is the one the worker was using when the job started. Then we can properly kill zombie jobs that come from an older worker version. See https://github.com/huggingface/datasets-server/pull/827#discussion_r1114260946
closed
2023-02-22T13:39:54Z
2023-04-03T09:19:19Z
2023-04-02T15:03:47Z
lhoestq
1,595,108,195
Update datasets dependency to 2.10.0 version
Close #828.
closed
2023-02-22T13:28:59Z
2023-02-24T14:43:44Z
2023-02-24T14:40:51Z
albertvillanova
1,595,098,486
Update datasets to 2.10.0
After the 2.10.0 `datasets` release, update dependencies on it.
closed
2023-02-22T13:22:19Z
2023-02-24T14:40:53Z
2023-02-24T14:40:53Z
albertvillanova
1,592,084,545
Set error response when zombie job is killed
Last step for https://github.com/huggingface/datasets-server/issues/741 regarding zombies
closed
2023-02-20T15:40:57Z
2023-02-24T17:38:06Z
2023-02-22T13:59:50Z
lhoestq
1,589,688,709
Kill zombies
I defined zombies as started jobs with a `last_heartbeat` that is older than `max_missing_heartbeats * heartbeat_time_interval_seconds` (see the sketch after this record). Then I added `kill_zombies` to the worker executor. It runs every `kill_zombies_time_interval_seconds` seconds and sets the zombie jobs' status to ERROR. Given that heartbeats happen every 60s, I defined:
- max_missing_heartbeats = 5
- kill_zombies_time_interval_seconds = 10min

## Implementation details

To keep full control of the time interval for every action the executor runs, I switched to `asyncio`.

close https://github.com/huggingface/datasets-server/issues/741
closed
2023-02-17T17:03:15Z
2023-02-23T18:21:59Z
2023-02-17T18:15:05Z
lhoestq
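A minimal sketch of the zombie query described above, assuming a pymongo-style collection and a `"started"` status value (both names are assumptions):

```python
from datetime import datetime, timedelta

MAX_MISSING_HEARTBEATS = 5
HEARTBEAT_INTERVAL_SECONDS = 60

def get_zombies(jobs_collection):
    """Started jobs whose last heartbeat is older than the allowed window."""
    threshold = datetime.utcnow() - timedelta(
        seconds=MAX_MISSING_HEARTBEATS * HEARTBEAT_INTERVAL_SECONDS
    )
    return jobs_collection.find(
        {"status": "started", "last_heartbeat": {"$lt": threshold}}
    )
```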
1,588,325,083
Renaming split-names to split-names-from-streaming
Part of the code for https://github.com/huggingface/datasets-server/issues/755. Since we are going to handle two different sources for split names, the previous worker/processing step/cache kind has to be changed to /split-names-from-streaming (there already exists /split-names-from-dataset-info). TODO: once this PR is merged, a new PR with changes on the docker image/chart should be sent in order to change the WORKER_ONLY_JOB_TYPES param as well.
closed
2023-02-16T20:33:29Z
2023-02-27T13:45:21Z
2023-02-17T14:33:40Z
AndreaFrancis
1,586,393,904
Add heartbeat
Add heartbeat to workers. It adds a `last_heartbeat` field to documents in the queue. The field is not mandatory - it only appears for jobs that are or were running when a heartbeat happens (once per minute by default).

## Implementation details

I added a `WorkerExecutor` that runs the worker loop in a **subprocess**. This way the executor can have its own loop with a heartbeat. The executor knows about the worker state by reading a JSON file where I store the state of the loop. This is helpful to know the ID of the current Job to update its last_heartbeat field. I used filelock to make sure there are no race conditions when reading/writing this file (a sketch follows after this record).

## TODO

- [x] fix merge conflicts
- [x] tests

related to https://github.com/huggingface/datasets-server/issues/741
closed
2023-02-15T19:07:08Z
2023-02-27T10:50:48Z
2023-02-16T22:54:15Z
lhoestq
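A hedged sketch of the heartbeat side of that design; the file path, field names, and collection are assumptions. The executor reads the worker-state JSON under a file lock, then refreshes the current job's `last_heartbeat`:

```python
import json
from datetime import datetime

from filelock import FileLock

WORKER_STATE_PATH = "/tmp/worker_state.json"  # hypothetical path

def heartbeat(jobs_collection) -> None:
    """Refresh last_heartbeat for the job the worker subprocess is running."""
    with FileLock(WORKER_STATE_PATH + ".lock"):  # avoid racing the worker loop
        with open(WORKER_STATE_PATH) as f:
            state = json.load(f)
    job_id = state.get("current_job_id")
    if job_id is not None:
        jobs_collection.update_one(
            {"_id": job_id},
            {"$set": {"last_heartbeat": datetime.utcnow()}},
        )
```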
1,585,902,949
Update dependencies
This PR upgrades Starlette (vulnerability), already done by #821 for front/ - this PR also fixes services/admin and services/api. It also upgrades all the dependencies to the next minor version. I checked the important ones: starlette, uvicorn. Nearly all the changes come from the upgrade of mypy to v1 -> I fixed a lot of types, and it exposed some (not critical) bugs.
closed
2023-02-15T13:55:35Z
2023-02-15T15:35:42Z
2023-02-15T15:32:56Z
severo
1,585,442,523
Add /dataset-status to the admin panel
See https://github.com/huggingface/datasets-server/pull/815
closed
2023-02-15T08:41:15Z
2023-04-11T11:47:25Z
2023-04-11T11:47:25Z
severo
1,584,978,575
chore(deps): bump starlette from 0.23.1 to 0.25.0 in /front/admin_ui
Bumps [starlette](https://github.com/encode/starlette) from 0.23.1 to 0.25.0.

Release notes:
- 0.25.0 fixed: limit the number of fields and files when parsing `multipart/form-data` on the `MultipartParser` (8c74c2c and #2036).
- 0.24.0 added: allow `StaticFiles` to follow symlinks (#1683); allow `Request.form()` as a context manager (#1903); add `size` attribute to `UploadFile` (#1405); add `env_prefix` argument to `Config` (#1990); add template context processors (#1904); support `str` and `datetime` on the `expires` parameter of the `Response.set_cookie` method (#1908).
- 0.24.0 changed: lazily build the middleware stack (#2017); make the `file` argument required on `UploadFile` (#1413); use debug extension instead of custom response template extension (#1991).
- 0.24.0 fixed: fix url parsing of ipv6 urls on `URL.replace` (#1965).

Full changelog: https://github.com/encode/starlette/compare/0.23.1...0.25.0

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
chore(deps): bump starlette from 0.23.1 to 0.25.0 in /front/admin_ui: Bumps [starlette](https://github.com/encode/starlette) from 0.23.1 to 0.25.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p> <blockquote> <h2>Version 0.25.0</h2> <h3>Fixed</h3> <ul> <li>Limit the number of fields and files when parsing <code>multipart/form-data</code> on the <code>MultipartParser</code> <a href="https://github.com/encode/starlette/commit/8c74c2c8dba7030154f8af18e016136bea1938fa">8c74c2c</a> and <a href="https://github-redirect.dependabot.com/encode/starlette/pull/2036">#2036</a>.</li> </ul> <h2>Version 0.24.0</h2> <h3>Added</h3> <ul> <li>Allow <code>StaticFiles</code> to follow symlinks <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1683">#1683</a>.</li> <li>Allow <code>Request.form()</code> as a context manager <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1903">#1903</a>.</li> <li>Add <code>size</code> attribute to <code>UploadFile</code> <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1405">#1405</a>.</li> <li>Add <code>env_prefix</code> argument to <code>Config</code> <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1990">#1990</a>.</li> <li>Add template context processors <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1904">#1904</a>.</li> <li>Support <code>str</code> and <code>datetime</code> on <code>expires</code> parameter on the <code>Response.set_cookie</code> method <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1908">#1908</a>.</li> </ul> <h3>Changed</h3> <ul> <li>Lazily build the middleware stack <a href="https://github-redirect.dependabot.com/encode/starlette/pull/2017">#2017</a>.</li> <li>Make the <code>file</code> argument required on <code>UploadFile</code> <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1413">#1413</a>.</li> <li>Use debug extension instead of custom response template extension <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1991">#1991</a>.</li> </ul> <h3>Fixed</h3> <ul> <li>Fix url parsing of ipv6 urls on <code>URL.replace</code> <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1965">#1965</a>.</li> </ul> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/encode/starlette/blob/master/docs/release-notes.md">starlette's changelog</a>.</em></p> <blockquote> <h2>0.25.0</h2> <p>February 14, 2023</p> <h3>Fix</h3> <ul> <li>Limit the number of fields and files when parsing <code>multipart/form-data</code> on the <code>MultipartParser</code> <a href="https://github.com/encode/starlette/commit/8c74c2c8dba7030154f8af18e016136bea1938fa">8c74c2c</a> and <a href="https://github-redirect.dependabot.com/encode/starlette/pull/2036">#2036</a>.</li> </ul> <h2>0.24.0</h2> <p>February 6, 2023</p> <h3>Added</h3> <ul> <li>Allow <code>StaticFiles</code> to follow symlinks <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1683">#1683</a>.</li> <li>Allow <code>Request.form()</code> as a context manager <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1903">#1903</a>.</li> <li>Add <code>size</code> attribute to <code>UploadFile</code> <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1405">#1405</a>.</li> <li>Add <code>env_prefix</code> argument to <code>Config</code> <a 
href="https://github-redirect.dependabot.com/encode/starlette/pull/1990">#1990</a>.</li> <li>Add template context processors <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1904">#1904</a>.</li> <li>Support <code>str</code> and <code>datetime</code> on <code>expires</code> parameter on the <code>Response.set_cookie</code> method <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1908">#1908</a>.</li> </ul> <h3>Changed</h3> <ul> <li>Lazily build the middleware stack <a href="https://github-redirect.dependabot.com/encode/starlette/pull/2017">#2017</a>.</li> <li>Make the <code>file</code> argument required on <code>UploadFile</code> <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1413">#1413</a>.</li> <li>Use debug extension instead of custom response template extension <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1991">#1991</a>.</li> </ul> <h3>Fixed</h3> <ul> <li>Fix url parsing of ipv6 urls on <code>URL.replace</code> <a href="https://github-redirect.dependabot.com/encode/starlette/pull/1965">#1965</a>.</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/encode/starlette/commit/fc480890fe1f1e421746de303c6f8da1323e5626"><code>fc48089</code></a> Version 0.25.0 (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/2035">#2035</a>)</li> <li><a href="https://github.com/encode/starlette/commit/bb4d8f9d685f7cff1e7e19b70c5b56652264d9c5"><code>bb4d8f9</code></a> 🐛 Close all the multipart files on error (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/2036">#2036</a>)</li> <li><a href="https://github.com/encode/starlette/commit/8c74c2c8dba7030154f8af18e016136bea1938fa"><code>8c74c2c</code></a> Merge pull request from GHSA-74m5-2c7w-9w3x</li> <li><a href="https://github.com/encode/starlette/commit/5771a78c14d577bd1743ed2e948c91589dd30e20"><code>5771a78</code></a> Fix test not passing in 32-bit architectures (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/2033">#2033</a>)</li> <li><a href="https://github.com/encode/starlette/commit/337ae243b10d3c20db53b77bc3c7718bb4f1a164"><code>337ae24</code></a> Document that UploadFile's <code>filename</code> and <code>content_type</code> can be <code>None</code> (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/2029">#2029</a>)</li> <li><a href="https://github.com/encode/starlette/commit/218a6b4f98a2ff5fe3f6076b401d5c24c3021943"><code>218a6b4</code></a> Version 0.24.0 (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/1983">#1983</a>)</li> <li><a href="https://github.com/encode/starlette/commit/e05b632c568cd075bc193f11bd4c806f6ef10672"><code>e05b632</code></a> Feature: Add size attribute to UploadFile (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/1405">#1405</a>)</li> <li><a href="https://github.com/encode/starlette/commit/c568b55dff8be94b9c917e186e512ab53d7310e1"><code>c568b55</code></a> allow using Request.form() as a context manager (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/1903">#1903</a>)</li> <li><a href="https://github.com/encode/starlette/commit/0a63a6e586ababc932d57e7751187d4ff2d7ca18"><code>0a63a6e</code></a> Support <code>str</code> and <code>datetime</code> on <code>expires</code> parameter on the <code>set_cookie</code> metho...</li> <li><a 
href="https://github.com/encode/starlette/commit/94a22b865e7b9828fcc06771e25c9be33364b24b"><code>94a22b8</code></a> Fix url parsing of ipv6 urls on <code>URL.replace</code> (<a href="https://github-redirect.dependabot.com/encode/starlette/issues/1965">#1965</a>)</li> <li>Additional commits viewable in <a href="https://github.com/encode/starlette/compare/0.23.1...0.25.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=starlette&package-manager=pip&previous-version=0.23.1&new-version=0.25.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts). </details>
closed
2023-02-14T23:30:07Z
2023-02-15T14:00:53Z
2023-02-15T14:00:52Z
dependabot[bot]
1,584,733,391
Adding new job runner for split names based on dataset info cached response
As part of https://github.com/huggingface/datasets-server/issues/755 we will need a new job runner to compute split names from the dataset-info cached response.
Adding new job runner for split names based on dataset info cached response: As part of https://github.com/huggingface/datasets-server/issues/755 we will need a new job runner to compute split names from the dataset-info cached response.
closed
2023-02-14T19:48:50Z
2023-02-15T17:06:17Z
2023-02-15T17:03:33Z
AndreaFrancis
1,584,661,389
Change db migrations from jobs to init containers
From comment https://github.com/huggingface/datasets-server/pull/810#discussion_r1105871837 > there are so many past jobs that we don't need anymore that we could delete them periodically. Manually or with a cron job. Otherwise, the migration jobs take forever, and the helm update timeouts and fails @severo's suggestion: > Migrate from jobs to init containers to ensure the migration has time to finish without making Helm upgrade timeout
Change db migrations from jobs to init containers: From comment https://github.com/huggingface/datasets-server/pull/810#discussion_r1105871837 > there are so many past jobs that we don't need anymore that we could delete them periodically. Manually or with a cron job. Otherwise, the migration jobs take forever, and the helm update timeouts and fails @severo's suggestion: > Migrate from jobs to init containers to ensure the migration has time to finish without making Helm upgrade timeout
closed
2023-02-14T18:47:53Z
2023-04-20T15:04:08Z
2023-04-20T15:04:08Z
AndreaFrancis
1,584,659,576
Periodically clean the queue database deleting the old, finished jobs
From comment on https://github.com/huggingface/datasets-server/pull/810#discussion_r1105871837 > there are so many past jobs that we don't need anymore that we could delete them periodically. Manually or with a cron job. Otherwise, the migration jobs take forever, and the helm update timeouts and fails. @severo suggested: Periodically clean the queue database deleting the old, finished jobs
Periodically clean the queue database deleting the old, finished jobs: From comment on https://github.com/huggingface/datasets-server/pull/810#discussion_r1105871837 > there are so many past jobs that we don't need anymore that we could delete them periodically. Manually or with a cron job. Otherwise, the migration jobs take forever, and the helm update timeouts and fails. @severo suggested: Periodically clean the queue database deleting the old, finished jobs
closed
2023-02-14T18:46:10Z
2023-02-27T18:51:10Z
2023-02-27T18:51:10Z
AndreaFrancis
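A minimal sketch of what such a periodic cleanup could look like, assuming the queue lives in a MongoDB collection named `jobs` with `status` and `finished_at` fields (collection, field, and status names are hypothetical; the real queue schema may differ):

```python
from datetime import datetime, timedelta

from pymongo import MongoClient

# Hypothetical names: the real queue schema may use different ones.
FINISHED_STATUSES = ["success", "error", "cancelled"]
RETENTION = timedelta(days=7)

def clean_queue(mongo_url: str) -> int:
    """Delete finished jobs older than the retention period."""
    jobs = MongoClient(mongo_url)["queue"]["jobs"]
    cutoff = datetime.utcnow() - RETENTION
    result = jobs.delete_many(
        {"status": {"$in": FINISHED_STATUSES}, "finished_at": {"$lt": cutoff}}
    )
    return result.deleted_count

if __name__ == "__main__":
    print(f"deleted {clean_queue('mongodb://localhost:27017')} old finished jobs")
```

Run from a Kubernetes CronJob (or manually), this keeps the collection small enough for migrations to finish within the Helm timeout.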
1,584,622,342
Separate endpoint from processing step logic
First part of the code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to have two different processing steps for the same endpoint: - Changed the dependency between endpoint-cache_kind/job_type/name in ProcessingStep - Moved the configuration for endpoints to the api service; it should not exist in other layers, only in the api service
Separate endpoint from processing step logic: First part of the code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to have two different processing steps for the same endpoint: - Changed the dependency between endpoint-cache_kind/job_type/name in ProcessingStep - Moved the configuration for endpoints to the api service; it should not exist in other layers, only in the api service
closed
2023-02-14T18:14:49Z
2023-02-23T13:52:29Z
2023-02-17T15:38:41Z
AndreaFrancis
1,584,452,027
Create an orchestrator
## Proposal We should add a new service: orchestrator. On one side, it would receive events: - webhooks when a dataset has changed (added, updated, deleted, changed to gated, etc.) - manual trigger: refresh a dataset, update all the datasets for a specific step, etc. On the other side, it would command the jobs: - create only the jobs that are needed - receive the heartbeat to ensure a job is still running - receive the result of a job and: - store it, - create the dependent jobs based on the result The orchestrator would have access to the queue and to the cache. This means that the job runners can be a lot dumber: - they don't need to check if they should skip the job or not: they just run what they are given - they don't need to create the dependent jobs or even know about the processing graph - they don't have access to the cache (see #751) and to the queue (but the worker still has to access the queue; we could also make the orchestrator launch the jobs on demand instead of having workers that loop and look for jobs in the queue. It's an unrelated issue, though.) See the related issues: #764, #736, #751, #741, #740
Create an orchestrator: ## Proposal We should add a new service: orchestrator. On one side, it would receive events: - webhooks when a dataset has changed (added, updated, deleted, changed to gated, etc.) - manual trigger: refresh a dataset, update all the datasets for a specific step, etc. On the other side, it would command the jobs: - create only the jobs that are needed - receive the heartbeat to ensure a job is still running - receive the result of a job and: - store it, - create the dependent jobs based on the result The orchestrator would have access to the queue and to the cache. This means that the job runners can be a lot dumber: - they don't need to check if they should skip the job or not: they just run what they are given - they don't need to create the dependent jobs or even know about the processing graph - they don't have access to the cache (see #751) and to the queue (but the worker still has to access the queue; we could also make the orchestrator launch the jobs on demand instead of having workers that loop and look for jobs in the queue. It's an unrelated issue, though.) See the related issues: #764, #736, #751, #741, #740
closed
2023-02-14T16:17:44Z
2024-02-02T16:59:23Z
2024-02-02T16:59:23Z
severo
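To make the proposal concrete, here is a toy, in-memory sketch of the event-handling side (the processing graph, queue, and cache below are stand-ins, not the actual data structures):

```python
# Toy orchestrator sketch: receives events, stores results, enqueues children.
PROCESSING_GRAPH: dict[str, list[str]] = {
    "/config-names": ["/split-names"],
    "/split-names": ["/first-rows"],
    "/first-rows": [],
}

cache: dict[tuple[str, str], dict] = {}
queue: list[tuple[str, str]] = []

def on_dataset_updated(dataset: str) -> None:
    # Webhook handler: only the root step is enqueued; children are created
    # later, based on the results, so only the needed jobs ever exist.
    queue.append(("/config-names", dataset))

def on_job_finished(step: str, dataset: str, content: dict) -> None:
    # The job runner just returns its result; the orchestrator stores it
    # and creates the dependent jobs, so runners never touch the graph.
    cache[(step, dataset)] = content
    for child in PROCESSING_GRAPH[step]:
        queue.append((child, dataset))
```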
1,584,432,443
feat: 🎸 add a new admin endpoint: /dataset-status
While looking at https://github.com/huggingface/datasets-server/issues/764 and https://github.com/huggingface/datasets-server/issues/736#issuecomment-1412242342, I added a new admin endpoint that gives the current status of a dataset. I think it can help to get insights about a dataset when doing support manually. It takes a `?dataset=` parameter, e.g. https://dataset-server.huggingface.co/admin/dataset-status?dataset=glue As with all the admin endpoints, it requires being authenticated as part of the huggingface org.
feat: 🎸 add a new admin endpoint: /dataset-status: While looking at https://github.com/huggingface/datasets-server/issues/764 and https://github.com/huggingface/datasets-server/issues/736#issuecomment-1412242342, I added a new admin endpoint that gives the current status of a dataset. I think it can help to get insights about a dataset when doing support manually. It takes a `?dataset=` parameter, e.g. https://dataset-server.huggingface.co/admin/dataset-status?dataset=glue As with all the admin endpoints, it requires being authenticated as part of the huggingface org.
closed
2023-02-14T16:06:22Z
2023-02-15T08:43:33Z
2023-02-15T08:40:47Z
severo
1,583,861,483
refactor: 💡 factorize the workers templates
null
refactor: 💡 factorize the workers templates:
closed
2023-02-14T10:03:59Z
2023-02-14T16:15:51Z
2023-02-14T16:12:23Z
severo
1,583,840,800
fix: 🐛 add missing config
null
fix: 🐛 add missing config:
closed
2023-02-14T09:51:14Z
2023-02-14T09:54:59Z
2023-02-14T09:52:10Z
severo
1,583,772,932
fix: 🐛 add missing volumes
null
fix: 🐛 add missing volumes:
closed
2023-02-14T09:07:25Z
2023-02-14T09:25:05Z
2023-02-14T09:22:04Z
severo
1,583,671,084
fix: 🐛 ensure all the workers have the same access to the disk
null
fix: 🐛 ensure all the workers have the same access to the disk:
closed
2023-02-14T07:52:39Z
2023-02-14T07:57:38Z
2023-02-14T07:54:56Z
severo
1,583,124,191
WIP - Separate endpoint from processing step
First part of the code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to have two different processing steps for the same endpoint: - Changed the dependency between endpoint-cache_kind in ProcessingStep - Moved the configuration for endpoints to the api service; it should not exist in other layers
WIP - Separate endpoint from processing step: First part of the code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to have two different processing steps for the same endpoint: - Changed the dependency between endpoint-cache_kind in ProcessingStep - Moved the configuration for endpoints to the api service; it should not exist in other layers
closed
2023-02-13T21:54:54Z
2023-02-14T18:19:48Z
2023-02-14T17:45:54Z
AndreaFrancis
1,582,859,313
Ignore big datasets from external files
Raise an error for big datasets that use a loading script to download data files external to HF. It was the only case left for a big dataset to not be ignored. Hopefully it makes the `/parquet-and-dataset-info` job stop wasting so many resources on big datasets. Close https://github.com/huggingface/datasets-server/issues/806
Ignore big datasets from external files: Raise an error for big datasets that use a loading script to download data files external to HF. It was the only case left for a big dataset to not be ignored. Hopefully it makes the `/parquet-and-dataset-info` job stop wasting so many resources on big datasets. Close https://github.com/huggingface/datasets-server/issues/806
closed
2023-02-13T18:34:04Z
2023-02-15T14:13:25Z
2023-02-15T14:10:18Z
lhoestq
1,582,609,623
Update chart
depends on #807
Update chart: depends on #807
closed
2023-02-13T15:53:14Z
2023-02-13T18:19:19Z
2023-02-13T18:16:08Z
severo
1,582,587,195
chore: 🤖 add VERSION file
null
chore: 🤖 add VERSION file:
closed
2023-02-13T15:40:21Z
2023-02-13T18:18:16Z
2023-02-13T18:15:35Z
severo
1,582,406,784
Parquet export: ignore ALL datasets bigger than a certain size
Cases to ignore: - [x] the dataset repository is >max_size - [ ] the dataset uses a script that downloads more than max_size of data To fix the second case I think we can pass a custom download manager to the dataset builder `_split_generators` to record the size of the files to download. It can also be implemented by checking the files passed as `gen_kwargs` and checking their sizes. Currently max_size=5GB
Parquet export: ignore ALL datasets bigger than a certain size: Cases to ignore: - [x] the dataset repository is >max_size - [ ] the dataset uses a script that downloads more than max_size of data To fix the second case I think we can pass a custom download manager to the dataset builder `_split_generators` to record the size of the files to download. It can also be implemented by checking the files passed as `gen_kwargs` and checking their sizes. Currently max_size=5GB
closed
2023-02-13T14:01:44Z
2023-02-15T14:10:20Z
2023-02-15T14:10:20Z
lhoestq
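A hedged sketch of the second case: before downloading anything, sum the `Content-Length` of the external URLs and fail fast above `max_size` (the function and error names are hypothetical, and the real fix may hook into the download manager instead):

```python
import requests

MAX_SIZE = 5 * 1024**3  # mirrors the 5GB max_size mentioned above

class DatasetTooBigFromExternalFilesError(Exception):
    """Hypothetical dedicated error for oversized external data files."""

def check_external_files_size(urls: list[str], max_size: int = MAX_SIZE) -> int:
    total = 0
    for url in urls:
        # HEAD request only: we want the size, not the content itself.
        response = requests.head(url, allow_redirects=True, timeout=10)
        response.raise_for_status()
        total += int(response.headers.get("Content-Length", 0))
        if total > max_size:
            raise DatasetTooBigFromExternalFilesError(
                f"external data files exceed {max_size} bytes"
            )
    return total
```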
1,582,380,227
Rename obsolete mentions of datasets_based
- rename prefix WORKER_LOOP_ to WORKER_ - rename DATASETS_BASED_ENDPOINT to WORKER_ENDPOINT - rename DATASETS_BASED_CONTENT_MAX_BYTES to WORKER_CONTENT_MAX_BYTES - ensure WORKER_STORAGE_PATHS is always used in Helm
Rename obsolete mentions of datasets_based: - rename prefix WORKER_LOOP_ to WORKER_ - rename DATASETS_BASED_ENDPOINT to WORKER_ENDPOINT - rename DATASETS_BASED_CONTENT_MAX_BYTES to WORKER_CONTENT_MAX_BYTES - ensure WORKER_STORAGE_PATHS is always used in Helm
closed
2023-02-13T13:44:22Z
2023-02-13T14:43:27Z
2023-02-13T14:40:10Z
severo
1,580,402,058
Rollback mechanism for /parquet-and-dataset-info
This job creates parquet files in Hub repos. When the admin cancel endpoint is called, we need to revert the commits that were done in the repo. One way to do it would be to add the job ID in the commit description; this way we can know which commits to revert. Though it may require patching `datasets` a little to do that
Rollback mechanism for /parquet-and-dataset-info: This job creates parquet files in Hub repos. When the admin cancel endpoint is called, we need to revert the commits that were done in the repo. One way to do it would be to add the job ID in the commit description; this way we can know which commits to revert. Though it may require patching `datasets` a little to do that
closed
2023-02-10T22:11:42Z
2023-03-21T15:04:11Z
2023-03-21T15:04:11Z
lhoestq
1,579,363,570
Upgrade dependencies, fix kenlm
null
Upgrade dependencies, fix kenlm:
closed
2023-02-10T09:47:36Z
2023-02-10T16:33:47Z
2023-02-10T16:31:04Z
severo
1,578,218,535
Generic worker
This PR allows a worker to process different job types. We can still dedicate workers to a sublist of jobs using a comma-separated list of the jobs in `WORKER_LOOP_ONLY_JOB_TYPES`. This way, we will be able to reduce the allocated but unused resources. As you can see in chart/env/prod.yaml, I kept all the previous workers but added 20 instances of "genericWorker" that can process any job. If it works well, we can reduce the number of specific workers (keeping maybe just /first-rows and /parquet-and-dataset-info) and increase the number of generic ones. Fixes #737.
Generic worker: This PR allows a worker to process different job types. We can still dedicate workers to a sublist of jobs using a comma-separated list of the jobs in `WORKER_LOOP_ONLY_JOB_TYPES`. This way, we will be able to reduce the allocated but unused resources. As you can see in chart/env/prod.yaml, I kept all the previous workers but added 20 instances of "genericWorker" that can process any job. If it works well, we can reduce the number of specific workers (keeping maybe just /first-rows and /parquet-and-dataset-info) and increase the number of generic ones. Fixes #737.
closed
2023-02-09T16:31:11Z
2023-02-13T15:32:11Z
2023-02-13T15:29:23Z
severo
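The allow-list mechanism is simple enough to sketch in a few lines (a simplified model of the loop; the real worker polls the queue):

```python
import os

def get_only_job_types() -> list[str]:
    """Parse the comma-separated allow-list; empty means 'process anything'."""
    raw = os.environ.get("WORKER_LOOP_ONLY_JOB_TYPES", "")
    return [job_type.strip() for job_type in raw.split(",") if job_type.strip()]

def can_process(job_type: str, only_job_types: list[str]) -> bool:
    # A generic worker has an empty allow-list and accepts every job type.
    return not only_job_types or job_type in only_job_types

assert can_process("/first-rows", [])  # generic worker
assert not can_process("/first-rows", ["/parquet-and-dataset-info"])
```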
1,578,178,282
Add admin ui url
null
Add admin ui url:
closed
2023-02-09T16:05:45Z
2023-02-10T22:15:20Z
2023-02-10T22:12:28Z
lhoestq
1,577,781,329
Move workers/datasets_based to services/worker
Based on #792.
Move workers/datasets_based to services/worker: Based on #792.
closed
2023-02-09T12:13:56Z
2023-02-13T08:33:21Z
2023-02-13T08:30:35Z
severo
1,577,702,774
Use shared action to publish helm chart
null
Use shared action to publish helm chart:
closed
2023-02-09T11:17:50Z
2023-02-09T12:50:28Z
2023-02-09T12:47:36Z
rtrompier
1,577,381,904
Allow to use http instead of https
https://github.com/huggingface/private-hub-package/issues/15
Allow to use http instead of https: https://github.com/huggingface/private-hub-package/issues/15
closed
2023-02-09T07:27:13Z
2023-02-09T08:35:59Z
2023-02-09T08:33:14Z
rtrompier
1,576,899,889
Change split names upstream and source
Partial fix for https://github.com/huggingface/datasets-server/issues/755 - Changing the predecessor of split-names to dataset-info - Changing the source of split-names (db instead of the dataset lib in streaming mode)
Change split names upstream and source: Partial fix for https://github.com/huggingface/datasets-server/issues/755 - Changing the predecessor of split-names to dataset-info - Changing the source of split-names (db instead of the dataset lib in streaming mode)
closed
2023-02-08T22:43:45Z
2023-02-10T08:59:23Z
2023-02-09T17:15:27Z
AndreaFrancis
1,576,658,518
Paginate responses
### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should paginate the split names, config names, and parquet files responses, with a max size configuration
Paginate responses: ### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should paginate the split names, config names, and parquet files responses, with a max size configuration
closed
2023-02-08T19:13:26Z
2023-04-13T15:04:21Z
2023-04-13T15:04:20Z
AndreaFrancis
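A minimal offset-based sketch of what such pagination could look like (the field names and the max size value are assumptions, not a spec):

```python
from typing import Any

MAX_PAGE_SIZE = 100  # hypothetical max size configuration

def paginate(items: list[Any], offset: int = 0, limit: int = MAX_PAGE_SIZE) -> dict:
    """Return one page plus the metadata a client needs to fetch the next."""
    limit = min(limit, MAX_PAGE_SIZE)
    page = items[offset : offset + limit]
    next_offset = offset + limit if offset + limit < len(items) else None
    return {"items": page, "offset": offset, "limit": limit,
            "total": len(items), "next_offset": next_offset}

# e.g. paginate(split_names, offset=200) for the third page of 100 split names
```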
1,576,655,790
Handle the 16MB limit in MongoDB with a dedicated error
### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should handle the db's per-document size limitation with a dedicated error
Handle the 16MB limit in MongoDB with a dedicated error: ### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should handle the db's per-document size limitation with a dedicated error
closed
2023-02-08T19:11:14Z
2023-08-07T15:56:31Z
2023-08-05T15:04:00Z
AndreaFrancis
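A sketch of such a guard, measuring the BSON-encoded size before writing to the cache (the error name is hypothetical; 16 MiB is MongoDB's hard per-document cap):

```python
import bson  # the bson module shipped with pymongo

MONGO_MAX_DOCUMENT_BYTES = 16 * 1024 * 1024

class ResponseTooBigError(Exception):
    """Hypothetical dedicated error: the response cannot be cached."""

def check_document_size(document: dict) -> None:
    encoded = bson.encode(document)
    if len(encoded) >= MONGO_MAX_DOCUMENT_BYTES:
        raise ResponseTooBigError(
            f"document is {len(encoded)} bytes, above MongoDB's 16 MiB limit"
        )
```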
1,576,598,432
Delete extracted downloaded files of a dataset
Will close https://github.com/huggingface/datasets-server/issues/753
Delete extracted downloaded files of a dataset: Will close https://github.com/huggingface/datasets-server/issues/753
closed
2023-02-08T18:24:01Z
2023-02-23T14:15:07Z
2023-02-22T18:49:25Z
polinaeterna
1,576,516,385
Basic stats
Started a `/basic-stats` endpoint that computes histograms for numerical data using dask. Not sure if we want to merge this feature right away; I implemented this mostly to trigger some discussions on how to add new data aggregates: does this approach sound correct to you? Feel free to also comment on how we could expand this to include stats for non-numerical columns. How can we add new stats that would be returned in the same endpoint? You can try it on `__DUMMY_DATASETS_SERVER_USER__/letters` for example: <details> <summary>Results of `/basic-stats` on `__DUMMY_DATASETS_SERVER_USER__/letters`</summary> ```json { "basic_stats": [ { "dataset": "__DUMMY_DATASETS_SERVER_USER__/letters", "config": "__DUMMY_DATASETS_SERVER_USER__--letters", "split": "train", "column_name": "label", "histogram": { "hist": [ 10, 0, 0, 0, 0, 11, 0, 0, 0, 5 ], "bin_edges": [ 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0 ] } }, { "dataset": "__DUMMY_DATASETS_SERVER_USER__/letters", "config": "__DUMMY_DATASETS_SERVER_USER__--letters", "split": "train", "column_name": "id", "histogram": { "hist": [ 3, 2, 3, 2, 3, 2, 3, 2, 3, 3 ], "bin_edges": [ 0.0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5, 25.0 ] } }, { "dataset": "__DUMMY_DATASETS_SERVER_USER__/letters", "config": "__DUMMY_DATASETS_SERVER_USER__--letters", "split": "train", "column_name": "fr_scrabble_value", "histogram": { "hist": [ 10, 3, 3, 3, 0, 0, 0, 2, 0, 5 ], "bin_edges": [ 1.0, 1.9, 2.8, 3.7, 4.6, 5.5, 6.4, 7.3, 8.2, 9.1, 10.0 ] } } ] } ``` </details> TODO: - [ ] tests - [ ] docs cc @severo related to https://github.com/huggingface/datasets-server/issues/376
Basic stats: Started a `/basic-stats` endpoint that computes histograms for numerical data using dask. Not sure if we want to merge this feature right away; I implemented this mostly to trigger some discussions on how to add new data aggregates: does this approach sound correct to you? Feel free to also comment on how we could expand this to include stats for non-numerical columns. How can we add new stats that would be returned in the same endpoint? You can try it on `__DUMMY_DATASETS_SERVER_USER__/letters` for example: <details> <summary>Results of `/basic-stats` on `__DUMMY_DATASETS_SERVER_USER__/letters`</summary> ```json { "basic_stats": [ { "dataset": "__DUMMY_DATASETS_SERVER_USER__/letters", "config": "__DUMMY_DATASETS_SERVER_USER__--letters", "split": "train", "column_name": "label", "histogram": { "hist": [ 10, 0, 0, 0, 0, 11, 0, 0, 0, 5 ], "bin_edges": [ 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0 ] } }, { "dataset": "__DUMMY_DATASETS_SERVER_USER__/letters", "config": "__DUMMY_DATASETS_SERVER_USER__--letters", "split": "train", "column_name": "id", "histogram": { "hist": [ 3, 2, 3, 2, 3, 2, 3, 2, 3, 3 ], "bin_edges": [ 0.0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5, 25.0 ] } }, { "dataset": "__DUMMY_DATASETS_SERVER_USER__/letters", "config": "__DUMMY_DATASETS_SERVER_USER__--letters", "split": "train", "column_name": "fr_scrabble_value", "histogram": { "hist": [ 10, 3, 3, 3, 0, 0, 0, 2, 0, 5 ], "bin_edges": [ 1.0, 1.9, 2.8, 3.7, 4.6, 5.5, 6.4, 7.3, 8.2, 9.1, 10.0 ] } } ] } ``` </details> TODO: - [ ] tests - [ ] docs cc @severo related to https://github.com/huggingface/datasets-server/issues/376
closed
2023-02-08T17:21:27Z
2023-02-17T14:08:21Z
2023-02-17T14:04:05Z
lhoestq
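For reference, a minimal dask version of the numerical-column histogram (assuming the split is reachable as parquet; this is a sketch, not the PR's code):

```python
import dask.array as da
import dask.dataframe as dd

def compute_histogram(parquet_path: str, column: str, bins: int = 10):
    """Compute (hist, bin_edges) for one numerical column, lazily with dask."""
    ddf = dd.read_parquet(parquet_path)
    values = ddf[column].dropna().to_dask_array(lengths=True)
    # da.histogram needs explicit bounds, so compute the range first.
    vmin, vmax = da.min(values).compute(), da.max(values).compute()
    hist, bin_edges = da.histogram(values, bins=bins, range=(vmin, vmax))
    # hist is lazy until .compute(); bin_edges is already concrete.
    return hist.compute().tolist(), bin_edges.tolist()
```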
1,576,505,891
Check database connection before migration job (and other apps)
Before starting the services (api, admin) and workers, we ensure the database is accessible and the assets directory (if needed) exists. In the case of the migration job: if the database cannot be accessed, we skip the migration, to avoid blocking Helm. Fixes #763. Replaces #767. Depends on #791.
Check database connection before migration job (and other apps): Before starting the services (api, admin) and workers, we ensure the database is accessible and the assets directory (if needed) exists. In the case of the migration job: if the database cannot be accessed, we skip the migration, to avoid blocking Helm. Fixes #763. Replaces #767. Depends on #791.
closed
2023-02-08T17:14:07Z
2023-02-10T17:23:58Z
2023-02-10T17:20:41Z
severo
1,576,432,647
use classmethod for factories instead of staticmethod
See https://stackoverflow.com/questions/12179271/meaning-of-classmethod-and-staticmethod-for-beginner for example Depends on: #790
use classmethod for factories instead of staticmethod: See https://stackoverflow.com/questions/12179271/meaning-of-classmethod-and-staticmethod-for-beginner for example Depends on: #790
closed
2023-02-08T16:27:04Z
2023-02-10T16:32:18Z
2023-02-10T16:29:43Z
severo
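The gist of the change, sketched: a `classmethod` factory receives the actual class, so subclasses get instances of the right type, which a `staticmethod` hard-coding the parent cannot do.

```python
from dataclasses import dataclass

@dataclass
class Config:
    url: str

    @classmethod
    def from_env(cls, env: dict) -> "Config":
        # cls is the class the factory was called on, so
        # SubConfig.from_env({}) builds a SubConfig, not a Config.
        return cls(url=env.get("URL", "http://localhost"))

@dataclass
class SubConfig(Config):
    pass

assert type(SubConfig.from_env({})) is SubConfig
```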
1,576,413,974
feat: 🎸 ensure immutability of the configs
If we decide to allow changing config parameters later, it will need to be explicit. Depends on: #784
feat: 🎸 ensure immutability of the configs: If we decide to allow changing config parameters later, it will need to be explicit. Depends on: #784
closed
2023-02-08T16:14:32Z
2023-02-10T16:10:19Z
2023-02-10T16:07:28Z
severo
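A sketch of one way to enforce this, assuming the configs are dataclasses: with `frozen=True`, any later assignment raises, and changes have to go through an explicit copy.

```python
from dataclasses import FrozenInstanceError, dataclass, replace

@dataclass(frozen=True)
class QueueConfig:
    mongo_url: str = "mongodb://localhost:27017"  # illustrative field names
    max_jobs_per_namespace: int = 1

config = QueueConfig()
try:
    config.max_jobs_per_namespace = 5  # type: ignore[misc]
except FrozenInstanceError:
    print("configs are immutable")

# Changing a parameter later must be explicit: build a new object instead.
other = replace(config, max_jobs_per_namespace=5)
```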
1,576,145,015
feat: 🎸 add logs when an unexpected error occurs
null
feat: 🎸 add logs when an unexpected error occurs:
closed
2023-02-08T13:40:45Z
2023-02-08T14:52:32Z
2023-02-08T14:49:31Z
severo
1,575,998,706
feat: remove job after 5 minutes
To avoid blocking the uninstall process: the PVC waits for job deletion before being removed.
feat: remove job after 5 minutes: To avoid blocking the uninstall process: the PVC waits for job deletion before being removed.
closed
2023-02-08T12:05:15Z
2023-02-08T15:49:33Z
2023-02-08T15:46:44Z
rtrompier
1,575,849,150
Fix dockerfiles
Running `make e2e` locally, I had the following error: ``` => ERROR [e2e-worker-dataset-info 11/13] RUN poetry install --no-cache 0.4s ------ > [e2e-worker-dataset-info 11/13] RUN poetry install --no-cache: #0 0.345 runc run failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory ``` Adding the path fixes the issue.
Fix dockerfiles: Running `make e2e` locally, I had the following error: ``` => ERROR [e2e-worker-dataset-info 11/13] RUN poetry install --no-cache 0.4s ------ > [e2e-worker-dataset-info 11/13] RUN poetry install --no-cache: #0 0.345 runc run failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory ``` Adding the path fixes the issue.
closed
2023-02-08T10:16:56Z
2023-02-08T12:00:35Z
2023-02-08T11:57:24Z
severo
1,575,741,404
ci: 🎡 run e2e tests only once for a push or pull-request
see https://github.com/huggingface/datasets-server/pull/775#issuecomment-1422255785.
ci: 🎡 run e2e tests only once for a push or pull-request: see https://github.com/huggingface/datasets-server/pull/775#issuecomment-1422255785.
closed
2023-02-08T09:04:35Z
2023-02-08T11:58:51Z
2023-02-08T11:56:15Z
severo
1,575,223,675
Dataset Viewer issue for allenai/scirepeval
### Link https://huggingface.co/datasets/allenai/scirepeval ### Description The dataset viewer is not working for dataset allenai/scirepeval. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for allenai/scirepeval: ### Link https://huggingface.co/datasets/allenai/scirepeval ### Description The dataset viewer is not working for dataset allenai/scirepeval. Error details: ``` Error code: ResponseNotReady ```
closed
2023-02-08T00:04:58Z
2023-02-08T15:59:04Z
2023-02-08T15:59:03Z
amanpreet692
1,575,000,363
feat: 🎸 add concept of Resource
This long PR creates a new concept, Resource. The resource is aimed at being allocated and then released after use: connection to a database, modification of the datasets library config, or creation of storage directories... Before this PR, it was done in the Config step. Now the Config step should never raise and should be immutable, and the Resources are created based on that config. It makes it "easier" to understand what is failing in the tests and in the app. GOOD LUCK to the reviewers, and sorry for the length.
feat: 🎸 add concept of Resource: This long PR creates a new concept, Resource. The resource is aimed at being allocated and then released after use: connection to a database, modification of the datasets library config, or creation of storage directories... Before this PR, it was done in the Config step. Now the Config step should never raise and should be immutable, and the Resources are created based on that config. It makes it "easier" to understand what is failing in the tests and in the app. GOOD LUCK to the reviewers, and sorry for the length.
closed
2023-02-07T20:47:27Z
2023-02-10T15:11:17Z
2023-02-10T15:08:39Z
severo
1,574,932,026
Updating docker image hash
Updating docker image hash values
Updating docker image hash: Updating docker image hash values
closed
2023-02-07T19:51:32Z
2023-02-07T20:02:07Z
2023-02-07T19:56:06Z
AndreaFrancis
1,574,829,491
Catch KILL signal from the worker to exit cleanly
in the worker_loop
Catch KILL signal from the worker to exit cleanly: in the worker_loop
closed
2023-02-07T18:25:10Z
2023-03-28T15:04:19Z
2023-03-28T15:04:19Z
severo
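Note that SIGKILL itself cannot be trapped; what a worker loop can catch is SIGTERM (sent by Kubernetes on pod termination) and SIGINT, roughly like this sketch:

```python
import signal
import time

class Worker:
    def __init__(self) -> None:
        self.stop_requested = False
        # SIGKILL cannot be caught; SIGTERM/SIGINT are the catchable ones.
        signal.signal(signal.SIGTERM, self._request_stop)
        signal.signal(signal.SIGINT, self._request_stop)

    def _request_stop(self, signum: int, frame: object) -> None:
        self.stop_requested = True

    def process_next_job(self) -> None:
        time.sleep(1)  # stand-in for: pop a job from the queue and run it

    def loop(self) -> None:
        while not self.stop_requested:
            self.process_next_job()
        # Clean exit: release locks, mark the in-flight job, close connections.
        print("stop requested, exiting cleanly")
```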
1,574,575,096
Update the repo and remove the remaining variables that are not in use.
Update the repo and remove the remaining variables that are not in use.
Update the repo and remove the remaining variables that are not in use.: Update the repo and remove the remaining variables that are not in use.
closed
2023-02-07T15:50:09Z
2023-02-07T16:00:41Z
2023-02-07T15:59:01Z
JatinKumar001
1,574,536,454
Dataset info big content error
Adding content size validation before trying to insert/update in the db. Fix for https://github.com/huggingface/datasets-server/issues/762 and https://github.com/huggingface/datasets-server/issues/770 A new `DATASETS_BASED_CONTENT_MAX_BYTES` configuration is added to limit the size of the content result of a worker compute process.
Dataset info big content error: Adding content size validation before trying to insert/update in the db. Fix for https://github.com/huggingface/datasets-server/issues/762 and https://github.com/huggingface/datasets-server/issues/770 A new `DATASETS_BASED_CONTENT_MAX_BYTES` configuration is added to limit the size of the content result of a worker compute process.
closed
2023-02-07T15:26:20Z
2023-02-09T17:58:01Z
2023-02-09T17:55:34Z
AndreaFrancis
1,573,994,373
Pass processing step to worker
null
Pass processing step to worker:
closed
2023-02-07T09:38:20Z
2023-02-07T13:15:11Z
2023-02-07T12:35:51Z
severo
1,572,954,432
Fix CI mypy error: "WorkerFactory" has no attribute "app_config"
Fix type checking on the WorkerLoop.loop method when trying to access the worker_factory's attribute app_config. This PR fixes an issue introduced by: - #774 ``` src/datasets_based/worker_loop.py:100: error: "WorkerFactory" has no attribute "app_config" ```
Fix CI mypy error: "WorkerFactory" has no attribute "app_config": Fix type checking on the WorkerLoop.loop method when trying to access the worker_factory's attribute app_config. This PR fixes an issue introduced by: - #774 ``` src/datasets_based/worker_loop.py:100: error: "WorkerFactory" has no attribute "app_config" ```
closed
2023-02-06T17:11:28Z
2023-02-07T12:28:29Z
2023-02-07T12:25:40Z
albertvillanova
1,572,543,532
Remove variable
null
Remove variable:
closed
2023-02-06T13:11:08Z
2023-02-07T15:41:42Z
2023-02-06T14:19:17Z
JatinKumar001
1,572,509,231
Dataset Viewer issue for mozilla-foundation/common_voice_11_0
### Link https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0 ### Description The dataset viewer for the Portuguese language is not working for dataset mozilla-foundation/common_voice_11_0. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for mozilla-foundation/common_voice_11_0: ### Link https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0 ### Description The dataset viewer for the Portuguese language is not working for dataset mozilla-foundation/common_voice_11_0. Error details: ``` Error code: ResponseNotReady ```
closed
2023-02-06T12:47:00Z
2023-02-12T15:33:59Z
2023-02-08T15:58:26Z
gassis
1,572,423,152
ci: 🎡 the e2e tests must now be run on any code change
null
ci: 🎡 the e2e tests must now be run on any code change:
closed
2023-02-06T11:46:55Z
2023-02-08T09:02:07Z
2023-02-07T15:45:40Z
severo
1,572,400,886
Use hub-ci locally
I switched the HF endpoint to the hub-ci one and added the corresponding tokens, and fixed the docker-compose to use the right base. Now the local dev environment from `make dev-start` works correctly. Close https://github.com/huggingface/datasets-server/issues/765
Use hub-ci locally: I switched the HF endpoint to the hub-ci one and added the corresponding tokens, and fixed the docker-compose to use the right base. Now the local dev environment from `make dev-start` works correctly. Close https://github.com/huggingface/datasets-server/issues/765
closed
2023-02-06T11:29:07Z
2023-02-06T13:59:55Z
2023-02-06T13:56:38Z
lhoestq
1,572,337,574
refactor: 💡 hard-code the value of the fallback
The fallback will be removed once https://github.com/huggingface/datasets-server/issues/755 is implemented. Meanwhile, we hide the parameter to prepare for deprecation.
refactor: 💡 hard-code the value of the fallback: The fallback will be removed once https://github.com/huggingface/datasets-server/issues/755 is implemented. Meanwhile, we hide the parameter to prepare for deprecation.
closed
2023-02-06T10:48:28Z
2023-02-06T11:27:34Z
2023-02-06T11:24:58Z
severo
1,572,242,085
Make workers' errors derive from WorkerError
Make workers' errors derive from `WorkerError`, instead of parent `CustomError`.
Make workers' errors derive from WorkerError: Make workers' errors derive from `WorkerError`, instead of parent `CustomError`.
closed
2023-02-06T09:44:47Z
2023-02-07T14:07:19Z
2023-02-07T14:04:03Z
albertvillanova
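In sketch form (the class names follow the PR title; the constructor fields are assumptions):

```python
class CustomError(Exception):
    """Base error with a machine-readable code, shared across services."""
    def __init__(self, message: str, code: str) -> None:
        super().__init__(message)
        self.code = code

class WorkerError(CustomError):
    """Common parent for every error raised inside a worker."""

class ConfigNamesError(WorkerError):
    def __init__(self, message: str) -> None:
        super().__init__(message, code="ConfigNamesError")

# The whole worker error family can now be targeted at once:
assert issubclass(ConfigNamesError, WorkerError)
```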
1,571,008,308
remove first rows fallback variable
Remove the FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE from the code.
remove first rows fallback variable: Remove the FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE from the code.
closed
2023-02-04T16:06:47Z
2023-02-08T10:44:28Z
2023-02-08T10:41:40Z
JatinKumar001
1,569,553,489
Add a check in /first-rows worker if truncating didn't succeed
See https://github.com/huggingface/datasets-server/pull/749#discussion_r1095587218
Add a check in /first-rows worker if truncating didn't succeed: See https://github.com/huggingface/datasets-server/pull/749#discussion_r1095587218
closed
2023-02-03T09:48:55Z
2023-02-13T14:44:56Z
2023-02-13T14:44:55Z
severo
1,569,542,931
Remove `FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE` from code
It's not used anymore.
Remove `FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE` from code: It's not used anymore.
closed
2023-02-03T09:44:14Z
2023-02-13T11:17:46Z
2023-02-13T11:17:45Z
severo
1,568,648,796
Create doc for every PR
This is what is done in the other HF repos on GitHub. This way the Delete doc GitHub action has something to delete. If the doc doesn't exist, the job fails.
Create doc for every PR: This is what is done in the other HF repos on GitHub. This way the Delete doc GitHub action has something to delete. If the doc doesn't exist, the job fails.
closed
2023-02-02T19:23:42Z
2023-02-03T11:10:16Z
2023-02-03T11:04:53Z
lhoestq
1,568,245,507
Check dataset before migration job
null
Check dataset before migration job:
closed
2023-02-02T15:17:58Z
2023-02-08T17:15:09Z
2023-02-08T17:15:05Z
severo