id | title | body | description | state | created_at | updated_at | closed_at | user |
---|---|---|---|---|---|---|---|---|
1,568,224,345 | Locally use volumes for workers' code | This way, in a local dev env, one can restart a single worker container to apply their code changes instead of running `make stop` and `make start` again | Locally use volumes for workers' code: This way, in a local dev env, one can restart a single worker container to apply their code changes instead of running `make stop` and `make start` again | closed | 2023-02-02T15:05:51Z | 2023-02-03T11:09:16Z | 2023-02-03T11:09:14Z | lhoestq |
1,568,158,540 | /sizes doesn't work in local dev env | it always returns
```json
{
"error": "The response is not ready yet. Please retry later."
}
```
| /sizes doesn't work in local dev env : it always returns
```json
{
"error": "The response is not ready yet. Please retry later."
}
```
| closed | 2023-02-02T14:26:29Z | 2023-02-06T13:56:40Z | 2023-02-06T13:56:40Z | lhoestq |
1,568,143,315 | Calling /api endpoints creates unnecessary jobs | For example, calling /config-names creates a /config-names job even though the result is cached.
The job is then skipped since the result is cached. | Calling /api endpoints creates unnecessary jobs: For example, calling /config-names creates a /config-names job even though the result is cached.
The job is then skipped since the result is cached. | closed | 2023-02-02T14:17:23Z | 2023-02-14T16:17:53Z | 2023-02-14T16:17:53Z | lhoestq |
1,567,772,063 | Check if the database exists/is accessible in the migration job | In some cases, the migration job can be run (in k8s) while the mongo database does not exist. In that case, it fails and the install/upgrade is blocked.
As the database does not need to be migrated if it does not exist yet, we should test if we can access the database, and if not, return with a success without doing anything.
found by @rtrompier | Check if the database exists/is accessible in the migration job: In some cases, the migration job can be run (in k8s) while the mongo database does not exist. In that case, it fails and the install/upgrade is blocked.
As the database does not need to be migrated if it does not exist yet, we should test if we can access the database, and if not, return with a success without doing anything.
found by @rtrompier | closed | 2023-02-02T10:30:08Z | 2023-02-10T17:20:42Z | 2023-02-10T17:20:42Z | severo |
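For illustration, a minimal sketch of the kind of pre-flight check described in the issue above, assuming `pymongo` is used directly; the connection URL, database name, and function name are placeholders, not the project's actual configuration.

```python
# Sketch: skip the migration when the database is unreachable or does not exist yet.
# MONGO_URL and DATABASE_NAME are illustrative placeholders for the job's configuration.
import sys

from pymongo import MongoClient
from pymongo.errors import PyMongoError

MONGO_URL = "mongodb://localhost:27017"
DATABASE_NAME = "datasets_server_cache"


def database_is_ready(url: str, database: str) -> bool:
    try:
        client = MongoClient(url, serverSelectionTimeoutMS=2_000)
        client.admin.command("ping")  # raises if the server is unreachable
        return database in client.list_database_names()
    except PyMongoError:
        return False


if __name__ == "__main__":
    if not database_is_ready(MONGO_URL, DATABASE_NAME):
        print("Database missing or unreachable: nothing to migrate, exiting with success.")
        sys.exit(0)
    # ... run the migrations here ...
```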
1,567,764,415 | Handle the case where the DatasetInfo is too big | In the /parquet-and-dataset-info processing step, if DatasetInfo is over 16MB, we will not be able to store it in MongoDB (https://pymongo.readthedocs.io/en/stable/api/pymongo/errors.html#pymongo.errors.DocumentTooLarge). We have to handle this case, and return a clear error to the user.
See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675332303097889 (internal). It's a similar issue to https://github.com/huggingface/datasets-server/issues/731 (should be raised for that dataset, btw) | Handle the case where the DatasetInfo is too big: In the /parquet-and-dataset-info processing step, if DatasetInfo is over 16MB, we will not be able to store it in MongoDB (https://pymongo.readthedocs.io/en/stable/api/pymongo/errors.html#pymongo.errors.DocumentTooLarge). We have to handle this case, and return a clear error to the user.
See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675332303097889 (internal). It's a similar issue to https://github.com/huggingface/datasets-server/issues/731 (should be raised for that dataset, btw) | closed | 2023-02-02T10:25:19Z | 2023-02-13T13:48:06Z | 2023-02-13T13:48:05Z | severo |
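A rough sketch of how such a failure could be surfaced as a clear, cached error instead of an unexpected one; `CachedResponse` and the `ResponseTooBig` error code are illustrative, not the project's actual cache schema (it assumes `mongoengine.connect()` has been called beforehand).

```python
# Sketch only: catch MongoDB's 16MB document limit when caching a response and store a
# short, explicit error instead.
from mongoengine import DictField, Document, IntField, StringField
from pymongo.errors import DocumentTooLarge


class CachedResponse(Document):
    dataset = StringField(required=True)
    http_status = IntField(required=True)
    error_code = StringField()
    content = DictField(required=True)


def cache_response(dataset: str, content: dict) -> None:
    try:
        CachedResponse(dataset=dataset, http_status=200, content=content).save()
    except DocumentTooLarge:
        CachedResponse(
            dataset=dataset,
            http_status=500,
            error_code="ResponseTooBig",  # illustrative error code
            content={"error": "The computed response exceeds the 16MB MongoDB document limit."},
        ).save()
```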
1,567,611,663 | update the logic to skip a job | Instead of retrying for any non-successful response in the cache, we
only retry if the error is in the list of "retry-able" errors. Also:
refactor the logic and add complete tests | update the logic to skip a job: Instead of retrying for any non-successful response in the cache, we
only retry if the error is in the list of "retry-able" errors. Also:
refactor the logic and add complete tests | closed | 2023-02-02T09:02:44Z | 2023-02-02T12:55:58Z | 2023-02-02T12:55:07Z | severo |
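A compact sketch of the rule described in the record above; the error-code allowlist and the function signature are illustrative, not the worker's actual API.

```python
# Sketch of the "retry only on retry-able errors" rule.
from typing import Optional

ERROR_CODES_TO_RETRY = {"ConnectionError", "ExternalServerError"}  # illustrative allowlist


def should_skip_job(cached_http_status: Optional[int], cached_error_code: Optional[str], force: bool) -> bool:
    """Return True when the worker can skip recomputing a response."""
    if force or cached_http_status is None:
        return False  # nothing cached, or a refresh was explicitly requested
    if cached_http_status == 200:
        return True  # a successful response is already cached
    # an error is cached: recompute only if it is worth retrying
    return cached_error_code not in ERROR_CODES_TO_RETRY
```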
1,566,555,843 | Add refresh dataset ui | Allow force-refreshing some datasets
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/42851186/216124200-59076b70-b910-4242-964d-5e528c7196d0.png">
| Add refresh dataset ui: Allow force-refreshing some datasets
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/42851186/216124200-59076b70-b910-4242-964d-5e528c7196d0.png">
| closed | 2023-02-01T17:56:38Z | 2023-02-02T19:41:19Z | 2023-02-02T19:41:18Z | lhoestq |
1,566,447,700 | test: ensure the database is ready in the tests | add a dependency to the app_config fixture to be sure to have access to the database when running an individual test, with `TEST_PATH="tests/test_worker.py" make test` | test: ensure the database is ready in the tests: add a dependency to the app_config fixture to be sure to have access to the database when running an individual test, with `TEST_PATH="tests/test_worker.py" make test` | closed | 2023-02-01T16:43:23Z | 2023-02-02T09:01:19Z | 2023-02-02T09:01:17Z | severo |
1,566,352,167 | ci: only run on PR and on main | currently, the actions are run twice in the PRs. See https://github.com/huggingface/datasets-server/pull/757/commits/c80eeacfdb2839149ad8b7b81cdaf6b0b4fcb944 for example. | ci: only run on PR and on main: currently, the actions are run twice in the PRs. See https://github.com/huggingface/datasets-server/pull/757/commits/c80eeacfdb2839149ad8b7b81cdaf6b0b4fcb944 for example. | closed | 2023-02-01T15:46:20Z | 2023-02-02T09:03:20Z | 2023-02-02T09:03:18Z | severo |
1,566,340,449 | refactor: remove dead code | null | refactor: remove dead code: | closed | 2023-02-01T15:39:02Z | 2023-02-01T15:47:26Z | 2023-02-01T15:47:24Z | severo |
1,566,326,962 | Skip the job depending on the type of error | Currently, we never skip a job if the previous run has returned an error. But it should be conditional: only retry for an allowlist of errors (e.g., ConnectionError); otherwise, for the same commit, we would get the same result and just lose time and resources.
Here: https://github.com/huggingface/datasets-server/blob/893a70cb7201090b8c64cd127fbe029c723f2aa3/workers/datasets_based/src/datasets_based/worker.py#L221 | Skip the job depending on the type of error: Currently, we never skip a job if the previous run has returned an error. But it should be conditional: only retry for an allowlist of errors (e.g., ConnectionError); otherwise, for the same commit, we would get the same result and just lose time and resources.
Here: https://github.com/huggingface/datasets-server/blob/893a70cb7201090b8c64cd127fbe029c723f2aa3/workers/datasets_based/src/datasets_based/worker.py#L221 | closed | 2023-02-01T15:30:31Z | 2023-02-13T11:15:56Z | 2023-02-13T11:15:56Z | severo |
1,566,302,997 | Fill /split-names and /first-rows from /parquet-and-dataset-info if needed | /split-names and /first-rows are created using streaming. They might fail for this reason while /parquet-and-dataset-info works for the same dataset.
In this case, we should fill /split-names (from /dataset-info) and /first-rows (from /parquet).
Note that we would create a race between two execution paths for the same result: we need to manage this by avoiding launching one of them if we already have the result, and by avoiding updating the result if it has already been computed (in the case of concurrent jobs)
| Fill /split-names and /first-rows from /parquet-and-dataset-info if needed: /split-names and /first-rows are created using streaming. They might fail for this reason while /parquet-and-dataset-info works for the same dataset.
In this case, we should fill /split-names (from /dataset-info) and /first-rows (from /parquet).
Note that we would create a race between two execution paths for the same result: we need to manage this by avoiding launching one of them if we already have the result, and by avoiding updating the result if it has already been computed (in the case of concurrent jobs)
| closed | 2023-02-01T15:16:30Z | 2023-04-03T16:15:41Z | 2023-04-03T16:08:30Z | severo |
1,566,250,070 | Improve error messages | Related to:
- #745
- #718 | Improve error messages: Related to:
- #745
- #718 | closed | 2023-02-01T14:48:56Z | 2023-02-24T13:24:53Z | 2023-02-24T13:22:29Z | albertvillanova |
1,566,199,878 | Delete the downloaded files just after being extracted | ```
from datasets import load_dataset, DownloadConfig
download_config = DownloadConfig(delete_extracted=True)
dataset = load_dataset("./codeparrot", split="train", download_config=download_config)
```
internal ref: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675260906599029?thread_ts=1675260896.849119&cid=C04L6P8KNQ5
It will help to reduce the disk usage consumed by the workers' cache. | Delete the downloaded files just after being extracted: ```
from datasets import load_dataset, DownloadConfig
download_config = DownloadConfig(delete_extracted=True)
dataset = load_dataset("./codeparrot", split="train", download_config=download_config)
```
internal ref: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675260906599029?thread_ts=1675260896.849119&cid=C04L6P8KNQ5
It will help to reduce the disk usage consumed by the workers' cache. | closed | 2023-02-01T14:17:22Z | 2023-02-22T18:49:26Z | 2023-02-22T18:49:26Z | severo |
1,565,049,472 | remove docker-images.yaml, and fix dev.yaml | null | remove docker-images.yaml, and fix dev.yaml: | closed | 2023-01-31T21:48:22Z | 2023-02-01T10:09:14Z | 2023-02-01T10:09:13Z | severo |
1,565,030,653 | Refactor to have only one app accessing a mongodb database | As assessed by @rtrompier, it's generally a bad idea to have multiple apps accessing the same database concurrently. Ideally, one app should have access to the database, and expose a REST API for the other apps. | Refactor to have only one app accessing a mongodb database : As assessed by @rtrompier, it's generally a bad idea to have multiple apps accessing the same database concurrently. Ideally, one app should have access to the database, and expose a REST API for the other apps. | closed | 2023-01-31T21:29:42Z | 2023-04-06T15:04:08Z | 2023-04-06T15:04:08Z | severo |
1,564,737,479 | Allowlist some datasets for some jobs | This way, when adding a new job, we can apply it to a small number of datasets before spending too much time making it work for all the datasets | Allowlist some datasets for some jobs: This way, when adding a new job, we can apply it to a small number of datasets before spending too much time making it work for all the datasets | closed | 2023-01-31T17:34:00Z | 2023-04-08T15:04:13Z | 2023-04-08T15:04:13Z | lhoestq |
1,564,688,334 | Adding custom exception when cache insert fails because of too many columns | Adding custom exception to fix scenarios like https://github.com/huggingface/datasets-server/issues/731 | Adding custom exception when cache insert fails because of too many columns: Adding custom exception to fix scenarios like https://github.com/huggingface/datasets-server/issues/731 | closed | 2023-01-31T17:00:08Z | 2023-02-03T09:49:20Z | 2023-02-02T18:18:33Z | AndreaFrancis |
1,564,628,030 | feat: update docker images | null | feat: update docker images: | closed | 2023-01-31T16:21:45Z | 2023-01-31T16:25:02Z | 2023-01-31T16:25:00Z | severo |
1,564,547,805 | fix: fix the migration scripts to be able to run on new base | fixes #744 | fix: fix the migration scripts to be able to run on new base: fixes #744 | closed | 2023-01-31T15:32:24Z | 2023-01-31T16:20:52Z | 2023-01-31T15:36:42Z | severo |
1,564,545,755 | Add HF_TOKEN env var for admin ui | this way we can deploy in a private Space and not ask for the token | Add HF_TOKEN env var for admin ui: this way we can deploy in a private Space and not ask for the token | closed | 2023-01-31T15:31:03Z | 2023-01-31T15:40:02Z | 2023-01-31T15:40:00Z | lhoestq |
1,564,494,320 | Improve the error messages in the dataset viewer | The error messages are shown in the dataset viewer. We should try to improve them:
- help the user understand what is going on
- help the user fix the issue
- show that we understand what is going on, and that it's not a bug.
See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675171833108199 (internal) and https://github.com/huggingface/datasets/issues/4886.
Also:
- https://github.com/huggingface/datasets-server/issues/731#issuecomment-1410297107
- https://github.com/huggingface/datasets-server/issues/718 | Improve the error messages in the dataset viewer: The error messages are shown in the dataset viewer. We should try to improve them:
- help the user understand what is going on
- help the user fix the issue
- show that we understand what is going on, and that it's not a bug.
See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675171833108199 (internal) and https://github.com/huggingface/datasets/issues/4886.
Also:
- https://github.com/huggingface/datasets-server/issues/731#issuecomment-1410297107
- https://github.com/huggingface/datasets-server/issues/718 | closed | 2023-01-31T15:07:54Z | 2023-04-06T15:04:10Z | 2023-04-06T15:04:10Z | severo |
1,564,463,708 | Migration job fails on a new database | When run on a new database, where the migrations have never been run, the migration script that adds a "force" field fails with:
```
ERROR: 2023-01-31 14:36:28,825 - root - Migration failed: The fields "{'priority'}" do not exist on the document "JobSnapshot"
```
Reported by @rtrompier | Migration job fails on a new database: When run on a new database, where the migrations have never been run, the migration script that adds a "force" field fails with:
```
ERROR: 2023-01-31 14:36:28,825 - root - Migration failed: The fields "{'priority'}" do not exist on the document "JobSnapshot"
```
Reported by @rtrompier | closed | 2023-01-31T14:52:11Z | 2023-01-31T15:36:44Z | 2023-01-31T15:36:44Z | severo |
1,564,459,321 | fix: disable the mongodbMigration job for now | it breaks on a new database. Disabling it until we fix the issue. | fix: disable the mongodbMigration job for now: it breaks on a new database. Disabling it until we fix the issue. | closed | 2023-01-31T14:49:41Z | 2023-01-31T14:53:10Z | 2023-01-31T14:53:09Z | severo |
1,564,452,845 | fix admin ui requirements.txt | null | fix admin ui requirements.txt: | closed | 2023-01-31T14:46:03Z | 2023-01-31T15:26:38Z | 2023-01-31T15:26:37Z | lhoestq |
1,564,325,655 | Detect the "zombie" jobs, and kill them | Sometimes the pods crash:
```
prod-datasets-server-worker-first-rows-8579994756-vgpkg 0/1 OutOfmemory 0 92m
prod-datasets-server-worker-first-rows-8579994756-vmvk7 0/1 OutOfmemory 0 92m
prod-datasets-server-worker-first-rows-8579994756-vsmmt 1/1 Running 0 4h27m
prod-datasets-server-worker-first-rows-8579994756-vxtwn 0/1 OutOfmemory 0 3h12m
prod-datasets-server-worker-first-rows-8579994756-vzs6j 0/1 OutOfmemory 0 3h12m
prod-datasets-server-worker-first-rows-8579994756-w2k55 1/1 Running 2 (3h25m ago) 4h16m
prod-datasets-server-worker-first-rows-8579994756-w5c6m 1/1 Running 0 4h27m
prod-datasets-server-worker-first-rows-8579994756-w5ks6 1/1 Running 1 (4h21m ago) 4h27m
prod-datasets-server-worker-first-rows-8579994756-w7ds5 1/1 Running 0 4h27m
prod-datasets-server-worker-first-rows-8579994756-wbqlq 1/1 Running 0 4h16m
```
The job that was running then stays forever in the "started" status, which we can call a zombie.
The issue is that, because of the rules implemented in the queue logic, it can prevent other jobs for the same dataset, or for the same user, from being processed. It even prevents re-running the same job.
Ideally, we should detect that the job has failed, change its status to "error" and put an error response in the cache database.
To implement this, an option proposed by @XciD is:
- to have a parallel thread (heartbeat) that will update the job in the database every xxx seconds
- a "garbage collector" loop will look for zombies and finish them as described above
| Detect the "zombie" jobs, and kill them: Sometimes the pods crash:
```
prod-datasets-server-worker-first-rows-8579994756-vgpkg 0/1 OutOfmemory 0 92m
prod-datasets-server-worker-first-rows-8579994756-vmvk7 0/1 OutOfmemory 0 92m
prod-datasets-server-worker-first-rows-8579994756-vsmmt 1/1 Running 0 4h27m
prod-datasets-server-worker-first-rows-8579994756-vxtwn 0/1 OutOfmemory 0 3h12m
prod-datasets-server-worker-first-rows-8579994756-vzs6j 0/1 OutOfmemory 0 3h12m
prod-datasets-server-worker-first-rows-8579994756-w2k55 1/1 Running 2 (3h25m ago) 4h16m
prod-datasets-server-worker-first-rows-8579994756-w5c6m 1/1 Running 0 4h27m
prod-datasets-server-worker-first-rows-8579994756-w5ks6 1/1 Running 1 (4h21m ago) 4h27m
prod-datasets-server-worker-first-rows-8579994756-w7ds5 1/1 Running 0 4h27m
prod-datasets-server-worker-first-rows-8579994756-wbqlq 1/1 Running 0 4h16m
```
The job that was running then stays forever in the "started" status, which we can call a zombie.
The issue is that, because of the rules implemented in the queue logic, it can prevent other jobs for the same dataset, or for the same user, from being processed. It even prevents re-running the same job.
Ideally, we should detect that the job has failed, change its status to "error" and put an error response in the cache database.
To implement this, an option proposed by @XciD is:
- to have a parallel thread (heartbeat) that will update the job in the database every xxx seconds
- a "garbage collector" loop will look for zombies and finish them as described above
| closed | 2023-01-31T13:37:32Z | 2023-02-17T18:22:49Z | 2023-02-17T18:15:07Z | severo |
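A minimal sketch of the heartbeat + garbage-collector approach proposed in the record above, assuming a mongoengine `Job` document; the field names, interval, and threshold are illustrative, not the project's actual schema.

```python
# Sketch of the heartbeat thread and the "garbage collector" loop for zombie jobs.
from datetime import datetime, timedelta
from threading import Event

from mongoengine import DateTimeField, Document, StringField

HEARTBEAT_INTERVAL_SECONDS = 60
ZOMBIE_THRESHOLD = timedelta(minutes=10)


class Job(Document):
    status = StringField(required=True)  # "waiting", "started", "success", "error", ...
    last_heartbeat = DateTimeField()


def heartbeat_loop(job_id: str, stop: Event) -> None:
    """Run in a side thread of the worker: regularly prove that the job is still alive."""
    while not stop.wait(HEARTBEAT_INTERVAL_SECONDS):
        Job.objects(id=job_id).update(set__last_heartbeat=datetime.utcnow())


def kill_zombies() -> None:
    """Run periodically: finish the started jobs whose heartbeat stopped (e.g. OOM-killed pod)."""
    limit = datetime.utcnow() - ZOMBIE_THRESHOLD
    for job in Job.objects(status="started", last_heartbeat__lt=limit):
        job.update(set__status="error")
        # an error response should also be written to the cache here, so the dataset is not stuck
```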
1,564,307,540 | Convert the /backfill endpoint to a kubernetes Job run periodically | See https://github.com/huggingface/datasets-server/pull/708#issuecomment-1406319901 and following comments.
Maybe we should not have this endpoint as such. See the related issue: https://github.com/huggingface/datasets-server/issues/736. Maybe #736 should be fixed first, then see how we implement a backfill trigger. | Convert the /backfill endpoint to a kubernetes Job run periodically: See https://github.com/huggingface/datasets-server/pull/708#issuecomment-1406319901 and following comments.
Maybe we should not have this endpoint as such. See the related issue: https://github.com/huggingface/datasets-server/issues/736. Maybe #736 should be fixed first, then see how we implement a backfill trigger. | closed | 2023-01-31T13:25:52Z | 2023-05-09T07:50:12Z | 2023-05-09T07:50:12Z | severo |
1,564,230,254 | feat: merge helm lint action with publish | null | feat: merge helm lint action with publish: | closed | 2023-01-31T12:35:16Z | 2023-01-31T12:44:24Z | 2023-01-31T12:44:23Z | rtrompier |
1,564,209,389 | fix: remove mongo migration job execution on pre-install hook | null | fix: remove mongo migration job execution on pre-install hook: | closed | 2023-01-31T12:23:09Z | 2023-01-31T12:28:15Z | 2023-01-31T12:28:14Z | rtrompier |
1,564,189,257 | Use a generic worker | All the workers use the same codebase, but we assign only one processing step to each worker at startup. We could instead allow a worker to handle all (or part of) the processing steps and let the queue manager give it the most adequate job.
By the way, the current way of doing things is a particular case, where the set of supported processing steps for each worker only contains one element.
In the transition, we could have part of the workers working as of now, while some other workers are "generic", or manage a set of several steps...
The benefit would be to make it a lot easier to handle the deployment and scaling of workers, and to avoid idle workers. | Use a generic worker: All the workers use the same codebase, but we assign only one processing step to each worker at startup. We could instead allow a worker to handle all (or part of) the processing steps and let the queue manager give it the most adequate job.
By the way, the current way of doing things is a particular case, where the set of supported processing steps for each worker only contains one element.
In the transition, we could have part of the workers working as of now, while some other workers are "generic", or manage a set of several steps...
The benefit would be to make it a lot easier to handle the deployment and scaling of workers, and to avoid idle workers. | closed | 2023-01-31T12:10:06Z | 2023-02-13T15:29:25Z | 2023-02-13T15:29:25Z | severo |
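To make the idea concrete, a sketch of a worker loop that accepts a set of processing steps instead of a single one; the queue helpers (`get_next_waiting_job`, `finish_job`) and the job dict layout are assumptions for illustration, not the actual worker interfaces.

```python
# Sketch of a "generic" worker loop handling a set of processing steps rather than one.
import time
from typing import Callable, Mapping, Optional

JobDict = dict


def run_worker(
    supported_steps: Mapping[str, Callable[[JobDict], dict]],
    get_next_waiting_job: Callable[[list], Optional[JobDict]],
    finish_job: Callable[[str, dict], None],
    sleep_seconds: float = 5.0,
) -> None:
    """Poll the queue for any job whose type is among the supported steps, then process it."""
    while True:
        job = get_next_waiting_job(list(supported_steps))
        if job is None:
            time.sleep(sleep_seconds)  # no compatible waiting job: try again later
            continue
        compute = supported_steps[job["type"]]
        result = compute(job)
        finish_job(job["id"], result)
```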
1,564,140,661 | Avoid creating jobs as much as possible | We are often facing a full queue that blocks the viewer on the Hub from being available (https://github.com/huggingface/datasets-server/issues/725, https://github.com/huggingface/datasets-server/issues/704, etc.)
It's generally the /first-rows queue.
There are several ideas to improve this. One of them is to detect as soon as possible which jobs are useless because they will be skipped when processed by the worker. I suspect that a large number of the processed jobs is skipped (to do: get numbers), and that being able to remove that unnecessary burden on the workers would help.
This means, when adding a job, test if the job would be skipped or not. If it would be skipped, I think that we cannot just avoid creating it and forget, because a dependent processing step could still need to be computed (new step, error, outdated). I think that we should effectively avoid creating the job, but still run the post-processing action to try to add the dependent jobs (and do the same).
I think what this means is separating the "event" that triggers a possible refresh for a dataset, and the "materialization" step (see https://docs.dagster.io/concepts/assets/asset-materializations) where we look at which operations are needed, and run them. | Avoid creating jobs as much as possible: We are often facing a full queue that blocks the viewer on the Hub from being available (https://github.com/huggingface/datasets-server/issues/725, https://github.com/huggingface/datasets-server/issues/704, etc.)
It's generally the /first-rows queue.
There are several ideas to improve this. One of them is to detect as soon as possible which jobs are useless because they will be skipped when processed by the worker. I suspect that a large number of the processed jobs is skipped (to do: get numbers), and that being able to remove that unnecessary burden on the workers would help.
This means, when adding a job, test if the job would be skipped or not. If it would be skipped, I think that we cannot just avoid creating it and forget, because a dependent processing step could still need to be computed (new step, error, outdated). I think that we should effectively avoid creating the job, but still run the post-processing action to try to add the dependent jobs (and do the same).
I think what this means is separating the "event" that triggers a possible refresh for a dataset, and the "materialization" step (see https://docs.dagster.io/concepts/assets/asset-materializations) where we look at which operations are needed, and run them. | closed | 2023-01-31T11:33:50Z | 2023-02-14T15:07:06Z | 2023-02-14T15:07:05Z | severo |
1,564,130,989 | Change /parquet-and-dataset-info processing step | For now, /parquet-and-dataset-info is run for a dataset. But as it's a loop on all the configs, we could instead decide to turn it into a "config" job, ie. make it depend on the result of /config-names, and launch a job for each of the configs.
This means doing the same for the dependent processing steps: /dataset-info, /parquet, /sizes.
This means also being able to answer the endpoints to both:
- ?dataset=dataset&config=config: directly from the database
- ?dataset=dataset: an aggregation of the config responses. For this point, the discussion is the same as https://github.com/huggingface/datasets-server/issues/734: should we have a dedicated worker, or compute on the fly?
The benefit would be to decouple how different configs from the same dataset are treated, in particular, let a config work even if another one is broken.
A specific challenge is how to store the parquet files. Currently, we have a dedicated "branch" (refs/convert/parquet), with a directory tree with one subdirectory per config, and parquet files inside for all the splits of the config. I think this schema would still work well, but as it's versioned with git, some issues could arise (concurrent access? it should be handled with the [`parent_commit`](https://github.com/huggingface/datasets-server/blob/main/workers/datasets_based/src/datasets_based/workers/parquet_and_dataset_info.py#L625) parameter; at least an error would be thrown in case of concurrent write attempts).
---
Subtasks (should be implemented in that order):
- [x] https://github.com/huggingface/datasets-server/issues/863
- [x] https://github.com/huggingface/datasets-server/issues/864
- [x] https://github.com/huggingface/datasets-server/issues/865
- [x] https://github.com/huggingface/datasets-server/issues/866 | Change /parquet-and-dataset-info processing step: For now, /parquet-and-dataset-info is run for a dataset. But as it's a loop on all the configs, we could instead decide to turn it into a "config" job, ie. make it depend on the result of /config-names, and launch a job for each of the configs.
This means doing the same for the dependent processing steps: /dataset-info, /parquet, /sizes.
This means also being able to answer the endpoints to both:
- ?dataset=dataset&config=config: directly from the database
- ?dataset=dataset: an aggregation of the config responses. For this point, the discussion is the same as https://github.com/huggingface/datasets-server/issues/734: should we have a dedicated worker, or compute on the fly?
The benefit would be to decouple how different configs from the same dataset are treated, in particular, let a config work even if another one is broken.
A specific challenge is how to store the parquet files. Currently, we have a dedicated "branch" (refs/convert/parquet), with a directory tree with one subdirectory per config, and parquet files inside for all the splits of the config. I think this schema would still work well, but as it's versioned with git, some issues could arise (concurrent access? it should be handled with the [`parent_commit`](https://github.com/huggingface/datasets-server/blob/main/workers/datasets_based/src/datasets_based/workers/parquet_and_dataset_info.py#L625) parameter; at least an error would be thrown in case of concurrent write attempts).
---
Subtasks (should be implemented in that order):
- [x] https://github.com/huggingface/datasets-server/issues/863
- [x] https://github.com/huggingface/datasets-server/issues/864
- [x] https://github.com/huggingface/datasets-server/issues/865
- [x] https://github.com/huggingface/datasets-server/issues/866 | closed | 2023-01-31T11:26:18Z | 2023-05-09T12:34:18Z | 2023-05-09T12:34:18Z | severo |
1,564,093,033 | Modify /splits worker | The /splits endpoint is now redundant with the /config-names and the /split-names endpoints and can be computed from them.
One benefit is getting a list of configs and splits, even if some configs are erroneous. See #208 and #701.
We want to keep the /splits endpoint (used by the Hub, for example) but compute it from the other responses. Various options:
1. on every request to /splits, get all the /config-names and /split-names responses for the dataset from the database, and forge the response
2. same as 1., but putting a cache layer
3. precompute the /splits response as soon as possible, and put it in the database. 1. it would still require a worker (not a problem), 2. it should be launched after every update of a /config-names or /split-names for the dataset. Possibly the best option.
Pending questions:
- what should be the format if a /split-names response is an error, while the /config-names response is OK. The current /splits worker would generate an error response, which defeats #208 and #701. Ideally, we want to have a list of configs, with the list of splits, or an error if we cannot get them. But we have to keep backwards compatibility, or change it in the Hub and the other known clients.
- how to manage the transition? Even if the format changes, I think that the old format would still be a valid result, and we would replace them at our own pace with the new format. | Modify /splits worker: The /splits endpoint is now redundant with the /config-names and the /split-names endpoints and can be computed from them.
One benefit is getting a list of configs and splits, even if some configs are erroneous. See #208 and #701.
We want to keep the /splits endpoint (used by the Hub, for example) but compute it from the other responses. Various options:
1. on every request to /splits, get all the /config-names and /split-names responses for the dataset from the database, and forge the response
2. same as 1., but putting a cache layer
3. precompute the /splits response as soon as possible, and put it in the database. 1. it would still require a worker (not a problem), 2. it should be launched after every update of a /config-names or /split-names for the dataset. Possibly the best option.
Pending questions:
- what should be the format if a /split-names response is an error, while the /config-names response is OK. The current /splits worker would generate an error response, which defeats #208 and #701. Ideally, we want to have a list of configs, with the list of splits, or an error if we cannot get them. But we have to keep backwards compatibility, or change it in the Hub and the other known clients.
- how to manage the transition? Even if the format changes, I think that the old format would still be a valid result, and we would replace them at our own pace with the new format. | closed | 2023-01-31T11:02:22Z | 2023-04-10T12:17:57Z | 2023-04-10T12:17:56Z | severo |
1,563,909,876 | feat: adapt number of replicas to flush the queues | <img width="394" alt="Capture d’écran 2023-01-31 à 09 59 04" src="https://user-images.githubusercontent.com/1676121/215714662-1ede53b7-f50f-4a10-a367-115d49728e9b.png">
Current status: https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-15m&to=now&refresh=1m&var-slo_quantile_99_5=0.995&var-slo_quantile_85=0.85&var-slo_period=7d
A lot of jobs in /first-rows (critical for the Hub) and /parquet-and-dataset-info (less critical for now, only for parquet files) | feat: adapt number of replicas to flush the queues: <img width="394" alt="Capture d’écran 2023-01-31 à 09 59 04" src="https://user-images.githubusercontent.com/1676121/215714662-1ede53b7-f50f-4a10-a367-115d49728e9b.png">
Current status: https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-15m&to=now&refresh=1m&var-slo_quantile_99_5=0.995&var-slo_quantile_85=0.85&var-slo_period=7d
A lot of jobs in /first-rows (critical for the Hub) and /parquet-and-dataset-info (less critical for now, only for parquet files) | closed | 2023-01-31T08:59:53Z | 2023-01-31T09:32:01Z | 2023-01-31T09:31:59Z | severo |
1,563,027,428 | Add gradio admin interface | A simple Gradio app to show the pending jobs.
Contrary to the observable notebook, this can run locally and connect to your local dev environment.
I added a feature to see the number of jobs in the queue, and a simple SQL query feature that uses DuckDB.
The app can be extended to support more admin features e.g.
- refresh jobs
- cancel jobs
- show report
I'll also deploy it on Spaces
![image](https://user-images.githubusercontent.com/42851186/215569915-3fe18176-3e21-433e-bfb0-af6a7eef137e.png)
| Add gradio admin interface: A simple Gradio app to show the pending jobs.
Contrary to the observable notebook, this can run locally and connect to your local dev environment.
I added a feature to see the number of jobs in the queue, and a simple SQL query feature that uses DuckDB.
The app can be extended to support more admin features e.g.
- refresh jobs
- cancel jobs
- show report
I'll also deploy it on Spaces
![image](https://user-images.githubusercontent.com/42851186/215569915-3fe18176-3e21-433e-bfb0-af6a7eef137e.png)
| closed | 2023-01-30T19:02:36Z | 2023-01-31T14:28:56Z | 2023-01-31T14:28:55Z | lhoestq |
1,563,010,995 | Dataset Viewer issue for jonas/undp_jobs_raw | ### Link
https://huggingface.co/datasets/jonas/undp_jobs_raw
### Description
When going to the preview panel, this message is shown:
```
'update' command document too large
Error code: UnexpectedError
```
Other datasets with same issue:
- SamAct/medium_cleaned split train
- grasshoff--lhc_sents split train
- Sangmun/wiki_doc_preprocessed_withmaxlength split train
- DavidVivancos--MindBigData2022_Imagenet_IN split test and train
- heanu/soda split test and validation
- DavidVivancos/MindBigData_Imagenet_IN split train
- DavidVivancos/MindBigData2022_Imagenet_IN_Spct split test
When trying to investigate the issue I saw the following logs:
DEBUG: 2023-01-30 13:16:36,215 - root - the size of the first 10 rows (1087032102) is above the max number of bytes (-7173342), they will be truncated
DEBUG: 2023-01-30 13:16:40,076 - root - the size of the rows is now (11944186) after truncating row idx=0
It shows that the remaining space for the document without considering the rows is negative (-7173342), which means that the total space for the columns alone is more than what is accepted.
When looking at the CSV file for the **jonas/undp_jobs_raw** dataset, it looks like the dataset has 103630 columns.
```
>>> import pandas as pd
>>> huge_ds = pd.read_csv('undp_jobs.csv')
>>> len(huge_ds.columns)
103630
>>> huge_ds.head(1)
0 1 ... 104998 104999
0 {'content': ['hiv and sti clinical consultant ... {'content': ['internship- pacific digital econ... ... {'content': []} {'content': ['deputy head of resident coordina...
[1 rows x 103630 columns]
>>>
```
Need to find a way to keep the columns and store them in the cache without issues. | Dataset Viewer issue for jonas/undp_jobs_raw: ### Link
https://huggingface.co/datasets/jonas/undp_jobs_raw
### Description
When going to the preview panel, this message is shown:
```
'update' command document too large
Error code: UnexpectedError
```
Other datasets with same issue:
- SamAct/medium_cleaned split train
- grasshoff--lhc_sents split train
- Sangmun/wiki_doc_preprocessed_withmaxlength split train
- DavidVivancos--MindBigData2022_Imagenet_IN split test and train
- heanu/soda split test and validation
- DavidVivancos/MindBigData_Imagenet_IN split train
- DavidVivancos/MindBigData2022_Imagenet_IN_Spct split test
When trying to investigate the issue I saw the following logs:
DEBUG: 2023-01-30 13:16:36,215 - root - the size of the first 10 rows (1087032102) is above the max number of bytes (-7173342), they will be truncated
DEBUG: 2023-01-30 13:16:40,076 - root - the size of the rows is now (11944186) after truncating row idx=0
It shows that the remaining space for the document without considering the rows is negative (-7173342), which means that the total space for the columns alone is more than what is accepted.
When looking at the CSV file for the **jonas/undp_jobs_raw** dataset, it looks like the dataset has 103630 columns.
```
>>> import pandas as pd
>>> huge_ds = pd.read_csv('undp_jobs.csv')
>>> len(huge_ds.columns)
103630
>>> huge_ds.head(1)
0 1 ... 104998 104999
0 {'content': ['hiv and sti clinical consultant ... {'content': ['internship- pacific digital econ... ... {'content': []} {'content': ['deputy head of resident coordina...
[1 rows x 103630 columns]
>>>
```
Need to find a way to keep the columns and store them in the cache without issues. | closed | 2023-01-30T18:48:56Z | 2023-02-08T18:56:43Z | 2023-02-08T18:56:42Z | AndreaFrancis |
1,562,941,763 | fix: fix two labels | null | fix: fix two labels: | closed | 2023-01-30T18:13:26Z | 2023-01-30T18:23:56Z | 2023-01-30T18:23:55Z | severo |
1,562,784,943 | feat: publish helm chart on HF internal registry | null | feat: publish helm chart on HF internal registry: | closed | 2023-01-30T16:37:33Z | 2023-01-30T16:44:51Z | 2023-01-30T16:44:50Z | rtrompier |
1,562,685,337 | feat: add indexes, based on recommendations from mongo cloud | null | feat: add indexes, based on recommendations from mongo cloud: | closed | 2023-01-30T15:39:48Z | 2023-01-31T12:31:49Z | 2023-01-31T12:23:36Z | severo |
1,562,545,783 | Dataset Viewer issue for jmdu/summ-without-dialogs-t5 | ### Link
https://huggingface.co/datasets/jmdu/summ-without-dialogs-t5
### Description
The dataset viewer is not working for dataset jmdu/summ-without-dialogs-t5.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for jmdu/summ-without-dialogs-t5: ### Link
https://huggingface.co/datasets/jmdu/summ-without-dialogs-t5
### Description
The dataset viewer is not working for dataset jmdu/summ-without-dialogs-t5.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-01-30T14:23:48Z | 2023-01-31T09:45:19Z | 2023-01-31T09:45:19Z | jmdu99 |
1,562,544,141 | Dataset Viewer issue for jmdu/summ-with-dialogs-t5 | ### Link
https://huggingface.co/datasets/jmdu/summ-with-dialogs-t5
### Description
The dataset viewer is not working for dataset jmdu/summ-with-dialogs-t5.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for jmdu/summ-with-dialogs-t5: ### Link
https://huggingface.co/datasets/jmdu/summ-with-dialogs-t5
### Description
The dataset viewer is not working for dataset jmdu/summ-with-dialogs-t5.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-01-30T14:23:11Z | 2023-01-31T09:45:50Z | 2023-01-31T09:45:50Z | jmdu99 |
1,562,537,122 | Dataset Viewer issue for tkurtulus/thycomments | ### Link
https://huggingface.co/datasets/tkurtulus/thycomments
### Description
The dataset viewer is not working for dataset tkurtulus/thycomments.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for tkurtulus/thycomments: ### Link
https://huggingface.co/datasets/tkurtulus/thycomments
### Description
The dataset viewer is not working for dataset tkurtulus/thycomments.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-01-30T14:19:20Z | 2023-02-03T10:17:14Z | 2023-02-03T10:17:13Z | tolgakurtuluss |
1,562,524,965 | Simplify the deployment to kubernetes | For now, there is no way to know if a job that has been started is still alive or if it has crashed silently (if the pod had an OOM error, for example).
When deploying a new version, it's annoying because we have to first stop all the workers (replicas=0), then cancel the jobs (STARTED => WAITING), then deploy.
Instead of this, we could create a new kubernetes "Job" (as the one that migrates MongoDB) that would cancel all the jobs in "STARTED" status, before launching the workers again.
thanks @XciD for the idea. | Simplify the deployment to kubernetes: For now, there is no way to know if a job that has been started is still alive or if it has crashed silently (if the pod had an OOM error, for example).
When deploying a new version, it's annoying because we have to first stop all the workers (replicas=0), then cancel the jobs (STARTED => WAITING), then deploy.
Instead of this, we could create a new kubernetes "Job" (as the one that migrates MongoDB) that would cancel all the jobs in "STARTED" status, before launching the workers again.
thanks @XciD for the idea. | closed | 2023-01-30T14:12:21Z | 2023-02-13T11:19:40Z | 2023-02-13T11:19:40Z | severo |
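A sketch of what such a pre-deploy Kubernetes Job could run, assuming a mongoengine `Job` document with a `status` field; the model, database name, and connection string are illustrative, not the project's actual queue schema.

```python
# Sketch: put every STARTED job back into the WAITING state before deploying new workers.
from mongoengine import Document, StringField, connect


class Job(Document):
    type = StringField(required=True)
    status = StringField(required=True)  # "waiting", "started", "success", "error", ...


def cancel_started_jobs() -> int:
    """Move the started jobs back to waiting so a fresh worker can pick them up again."""
    return Job.objects(status="started").update(set__status="waiting")


if __name__ == "__main__":
    connect("datasets_server_queue", host="mongodb://localhost:27017")  # illustrative connection
    print(f"{cancel_started_jobs()} started jobs moved back to waiting")
```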
1,562,464,777 | feat: update docker images | null | feat: update docker images: | closed | 2023-01-30T13:35:02Z | 2023-01-30T13:46:19Z | 2023-01-30T13:46:17Z | severo |
1,562,446,861 | fix: add a missing default value for org name in admin/ | Thanks @lhoestq for spotting the bug | fix: add a missing default value for org name in admin/: Thanks @lhoestq for spotting the bug | closed | 2023-01-30T13:24:31Z | 2023-01-30T13:46:30Z | 2023-01-30T13:46:29Z | severo |
1,562,260,708 | Allow codecov update to fail | null | Allow codecov update to fail: | closed | 2023-01-30T11:26:33Z | 2023-01-30T11:29:38Z | 2023-01-30T11:29:37Z | severo |
1,562,221,443 | fix: don't check if dataset is supported when we know it is | As we are running a loop of updates on supported datasets, it's useless to check if the dataset is supported inside the `update_dataset` method. | fix: don't check if dataset is supported when we know it is: As we are running a loop of updates on supported datasets, it's useless to check if the dataset is supported inside the `update_dataset` method. | closed | 2023-01-30T11:06:49Z | 2023-01-30T13:28:14Z | 2023-01-30T13:28:13Z | severo |
1,561,954,470 | Refactoring for Private hub | null | Refactoring for Private hub : | closed | 2023-01-30T08:18:13Z | 2023-01-30T15:03:40Z | 2023-01-30T15:03:39Z | rtrompier |
1,561,020,630 | Dataset Viewer issue for rfernand/basic_sentence_transforms | ### Link
https://huggingface.co/datasets/rfernand/basic_sentence_transforms
### Description
The dataset viewer is not working for dataset rfernand/basic_sentence_transforms.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for rfernand/basic_sentence_transforms: ### Link
https://huggingface.co/datasets/rfernand/basic_sentence_transforms
### Description
The dataset viewer is not working for dataset rfernand/basic_sentence_transforms.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-01-28T21:30:01Z | 2023-02-28T15:42:37Z | 2023-02-28T15:42:36Z | rfernand2 |
1,560,513,742 | ci: build and push the docker images only on push to main | See #712 | ci: build and push the docker images only on push to main: See #712 | closed | 2023-01-27T22:32:23Z | 2023-01-27T22:32:40Z | 2023-01-27T22:32:39Z | severo |
1,559,882,783 | ci: build the images before running the e2e tests | See
https://github.com/huggingface/datasets-server/issues/712#issuecomment-1406530448. It removes the need to edit the chart/docker-images.yaml file. | ci: build the images before running the e2e tests: See
https://github.com/huggingface/datasets-server/issues/712#issuecomment-1406530448. It removes the need to edit the chart/docker-images.yaml file. | closed | 2023-01-27T14:43:12Z | 2023-01-27T22:21:05Z | 2023-01-27T22:21:03Z | severo |
1,559,839,662 | Update datasets to 2.9.0 | Close #709. | Update datasets to 2.9.0: Close #709. | closed | 2023-01-27T14:14:42Z | 2023-01-30T09:14:26Z | 2023-01-30T09:14:25Z | albertvillanova |
1,559,679,277 | Update poetry lock file format to 2.0 | This PR locks poetry files with format 2.0.
Close #710, close #711.
Supersedes, and is a duplicate (but not from a fork) of:
- #711 | Update poetry lock file format to 2.0: This PR locks poetry files with format 2.0.
Close #710, close #711.
Supersedes, and is a duplicate (but not from a fork) of:
- #711 | closed | 2023-01-27T12:24:06Z | 2023-01-27T13:52:00Z | 2023-01-27T13:48:56Z | albertvillanova |
1,559,519,260 | Trigger CI by PRs from forks | Fix #712.
@severo could you please check we have all the required workflows (and no more than the required ones) to be triggered by PRs from forks? | Trigger CI by PRs from forks: Fix #712.
@severo could you please check we have all the required workflows (and no more than the required ones) to be triggered by PRs from forks? | closed | 2023-01-27T10:29:09Z | 2023-01-30T13:38:20Z | 2023-01-30T13:38:20Z | albertvillanova |
1,559,497,733 | Fix CI for PR from a fork | The CI does not run properly when a PR is made from a fork. | Fix CI for PR from a fork: The CI does not run properly when a PR is made from a fork. | closed | 2023-01-27T10:14:01Z | 2023-01-30T13:38:21Z | 2023-01-30T13:38:21Z | albertvillanova |
1,559,409,605 | Update poetry lock file format to 2.0 | This PR locks poetry files with format 2.0.
Close #710. | Update poetry lock file format to 2.0: This PR locks poetry files with format 2.0.
Close #710. | closed | 2023-01-27T09:10:18Z | 2023-01-27T13:37:32Z | 2023-01-27T13:34:59Z | albertvillanova |
1,559,343,044 | Update poetry lock file format to 2.0 | Since `poetry` 1.3.0 (9 Dec 2022), a new lock file format is used (version 2.0). See release notes: https://github.com/python-poetry/poetry/releases/tag/1.3.0
We should update our poetry lock files to the new format. | Update poetry lock file format to 2.0: Since `poetry` 1.3.0 (9 Dec 2022), a new lock file format is used (version 2.0). See release notes: https://github.com/python-poetry/poetry/releases/tag/1.3.0
We should update our poetry lock files to the new format. | closed | 2023-01-27T08:11:36Z | 2023-01-27T13:48:58Z | 2023-01-27T13:48:57Z | albertvillanova |
1,559,320,736 | Update datasets to 2.9.0 | After 2.9.0 `datasets` release, update dependencies on it. | Update datasets to 2.9.0: After 2.9.0 `datasets` release, update dependencies on it. | closed | 2023-01-27T07:50:55Z | 2023-01-30T09:14:26Z | 2023-01-30T09:14:26Z | albertvillanova |
1,558,682,470 | feat: add a /backfill admin endpoint | The logic is very basic: it updates all the datasets of the Hub, with a low priority. Note that most of the jobs will be skipped, because the response will already be in the cache.
We might want to take a more detailed approach later to reduce the number of unnecessary jobs by specifically creating jobs for the missing data only.
Apart from this, the PR also fixes the creation of children jobs: the priority is preserved (i.e., low-priority jobs create low-priority children jobs) | feat: add a /backfill admin endpoint: The logic is very basic: it updates all the datasets of the Hub, with a low priority. Note that most of the jobs will be skipped, because the response will already be in the cache.
We might want to take a more detailed approach later to reduce the number of unnecessary jobs by specifically creating jobs for the missing data only.
Apart from this, the PR also fixes the creation of children jobs: the priority is preserved (i.e., low-priority jobs create low-priority children jobs) | closed | 2023-01-26T19:45:15Z | 2023-01-30T13:48:13Z | 2023-01-27T10:20:53Z | severo |
1,558,594,213 | fix: fix migration script | See https://github.com/huggingface/datasets-server/pull/705#issuecomment-1405433714 | fix: fix migration script: See https://github.com/huggingface/datasets-server/pull/705#issuecomment-1405433714 | closed | 2023-01-26T18:38:59Z | 2023-01-26T19:10:39Z | 2023-01-26T18:46:04Z | severo |
1,558,464,920 | feat: make /first-rows depend on /split-names, not /splits | `/splits` fails if any config is broken. By depending on `/split-names`, the working configs will have `/first-rows`.
Follow-up to #702 | feat: make /first-rows depend on /split-names, not /splits: `/splits` fails if any config is broken. By depending on `/split-names`, the working configs will have `/first-rows`.
Follow-up to #702 | closed | 2023-01-26T16:58:15Z | 2023-01-26T17:52:23Z | 2023-01-26T17:52:21Z | severo |
1,558,382,002 | Add priority field to queue | In this PR:
- the jobs have a new field called `priority`: `normal` (default) or `low`
- the `normal` jobs are fetched before the `low` ones (but respecting `max_jobs_per_namespace`)
- when updating a job, i.e., if a dataset has been updated again, the priority is inherited, never lowered, even if the priority field is set. This way: when launching backfill (with a low priority) on a dataset that was already waiting for update, we don't deprioritize it.
- same logic with the existing "force" field: when updating a job, if force was true, we maintain it. | Add priority field to queue: In this PR:
- the jobs have a new field called `priority`: `normal` (default) or `low`
- the `normal` jobs are fetched before the `low` ones (but respecting `max_jobs_per_namespace`)
- when updating a job, i.e., if a dataset has been updated again, the priority is inherited, never lowered, even if the priority field is set. This way: when launching backfill (with a low priority) on a dataset that was already waiting for update, we don't deprioritize it.
- same logic with the existing "force" field: when updating a job, if force was true, we maintain it. | closed | 2023-01-26T16:03:13Z | 2023-01-26T18:57:25Z | 2023-01-26T18:12:57Z | severo |
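A simplified sketch of the "normal before low" fetching rule described in this PR, assuming a mongoengine `Job` document; the field names are illustrative, and the `max_jobs_per_namespace` constraint is omitted for brevity.

```python
# Sketch of priority-aware job fetching: normal-priority jobs are served before low-priority ones.
from datetime import datetime
from typing import Optional

from mongoengine import DateTimeField, Document, StringField


class Job(Document):
    dataset = StringField(required=True)
    status = StringField(default="waiting")  # "waiting", "started", ...
    priority = StringField(default="normal", choices=["normal", "low"])
    created_at = DateTimeField(default=datetime.utcnow)


def get_next_waiting_job() -> Optional[Job]:
    """Return the oldest waiting job, trying normal-priority jobs before low-priority ones."""
    for priority in ("normal", "low"):
        job = Job.objects(status="waiting", priority=priority).order_by("+created_at").first()
        if job is not None:
            return job
    return None
```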
1,558,169,289 | Dataset Viewer issue for matchbench/semi-text-w | null | Dataset Viewer issue for matchbench/semi-text-w: | closed | 2023-01-26T13:49:06Z | 2023-01-31T15:38:12Z | 2023-01-31T15:38:12Z | ScHh0625 |
1,557,786,300 | ci: launch CI when libcommon has been modified | null | ci: launch CI when libcommon has been modified: | closed | 2023-01-26T08:39:53Z | 2023-01-26T08:59:36Z | 2023-01-26T08:59:35Z | severo |
1,557,063,929 | Configs and splits | See https://github.com/huggingface/datasets-server/issues/701
This PR does:
- create a new endpoint `/config-names`. It gives the list of config names for a dataset
- create a new endpoint `/split-names`. It gives the list of split names for a config (not for a dataset, which is the difference with /splits for now). Note that /split-names, as /splits, may depend on the streaming mode and might fail for this reason.
- introduce a new "input-type": "config" in the processing steps. Before, a processing step was "dataset" (the input is a dataset name) or "split" (the inputs are dataset, config, and split). Now, we enable "config" (the inputs are dataset and config) for the new endpoint `/split-names`.
Once this PR is merged, the plan is to:
- fill the cache for the two new endpoints for all the datasets on the Hub (might take some hours to complete),
- create a PR on moonlanding to use these two new endpoints instead of `/splits`
- remove the `/splits` processing step, delete the cache for this endpoint, make the endpoint an alias to `/split-names` and add the ability for `/split-names` to concatenate the responses for all the configs if the only parameter is `dataset` (no `config` passed) | Configs and splits: See https://github.com/huggingface/datasets-server/issues/701
This PR does:
- create a new endpoint `/config-names`. It gives the list of config names for a dataset
- create a new endpoint `/split-names`. It gives the list of split names for a config (not for a dataset, which is the difference with /splits for now). Note that /split-names, as /splits, may depend on the streaming mode and might fail for this reason.
- introduce a new "input-type": "config" in the processing steps. Before, a processing step was "dataset" (the input is a dataset name) or "split" (the inputs are dataset, config, and split). Now, we enable "config" (the inputs are dataset and config) for the new endpoint `/split-names`.
Once this PR is merged, the plan is to:
- fill the cache for the two new endpoints for all the datasets on the Hub (might take some hours to complete),
- create a PR on moonlanding to use these two new endpoints instead of `/splits`
- remove the `/splits` processing step, delete the cache for this endpoint, make the endpoint an alias to `/split-names` and add the ability for `/split-names` to concatenate the responses for all the configs if the only parameter is `dataset` (no `config` passed) | closed | 2023-01-25T17:59:32Z | 2023-01-27T10:01:52Z | 2023-01-26T14:14:32Z | severo |
1,556,783,498 | Dataset Viewer issue for bigbio/pubmed_qa | ### Link
https://huggingface.co/datasets/bigbio/pubmed_qa
### Description
this config fails to return the split names: pubmed_qa_labeled_fold6_bigbio_qa
but we get an error even though the other configs work | Dataset Viewer issue for bigbio/pubmed_qa: ### Link
https://huggingface.co/datasets/bigbio/pubmed_qa
### Description
this config fails to return the split names: pubmed_qa_labeled_fold6_bigbio_qa
but we get an error even though the other configs work | closed | 2023-01-25T14:50:07Z | 2023-02-13T12:38:23Z | 2023-02-13T12:38:22Z | lhoestq |
1,556,698,931 | Update hfh | In particular: we now use the new [`list_repo_refs`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs) method, and the new [`revision`](https://github.com/huggingface/huggingface_hub/pull/1293) parameter when we create the `refs/convert/parquet` branch. | Update hfh: In particular: we now use the new [`list_repo_refs`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs) method, and the new [`revision`](https://github.com/huggingface/huggingface_hub/pull/1293) parameter when we create the `refs/convert/parquet` branch. | closed | 2023-01-25T13:54:41Z | 2023-01-25T15:28:32Z | 2023-01-25T14:22:01Z | severo |
1,556,392,791 | refactor: set libcommon as an "editable" dependency | All the jobs, services and workers share the same version of libs/libcommon, which is the current version.
Before, we had to update the libcommon version, run poetry build, update the version in the other projects' pyproject.toml and poetry update it.
The workflow when updating libcommon will be a lot simpler now. Also: vscode will be able to follow the reference to the source of the libcommon code, instead of showing the packaged dependency in .venv/.
Caveat: this also means that now, modifying something in libcommon will affect all the jobs, services and workers, so we need to be more careful. This also means that more CI jobs will be run on every libcommon update, and that when deploying on prod, all the images will be updated and redeployed. | refactor: set libcommon as an "editable" dependency: All the jobs, services and workers share the same version of libs/libcommon, which is the current version.
Before, we had to update the libcommon version, run poetry build, update the version in the other projects' pyproject.toml and poetry update it.
The workflow when updating libcommon will be a lot simpler now. Also: vscode will be able to follow the reference to the source of the libcommon code, instead of showing the packaged dependency in .venv/.
Caveat: this also means that now, modifying something in libcommon will affect all the jobs, services and workers, so we need to be more careful. This also means that more CI jobs will be run on every libcommon update, and that when deploying on prod, all the images will be updated and redeployed. | closed | 2023-01-25T10:18:52Z | 2023-01-25T10:33:38Z | 2023-01-25T10:33:37Z | severo |
1,556,292,696 | feat: block more datasets in /parquet-and-dataset-info | null | feat: block more datasets in /parquet-and-dataset-info: | closed | 2023-01-25T09:10:09Z | 2023-01-25T09:10:28Z | 2023-01-25T09:10:27Z | severo |
1,556,286,349 | feat: reduce logs level from DEBUG to INFO | cc @co42 | feat: reduce logs level from DEBUG to INFO: cc @co42 | closed | 2023-01-25T09:05:30Z | 2023-01-25T09:05:48Z | 2023-01-25T09:05:46Z | severo |
1,553,947,576 | Add a new route: /cache-reports-with-content | Also: add a missing field in an index | Add a new route: /cache-reports-with-content: Also: add a missing field in an index | closed | 2023-01-23T22:41:38Z | 2023-01-23T23:10:01Z | 2023-01-23T23:10:00Z | severo |
1,553,438,978 | feat: launch children jobs even when skipped | if we re-run a "DAG", all the steps will be processed, even if the first ones are skipped because the result is already in the cache.
It will fix the issue with https://github.com/huggingface/datasets-server/pull/694#issuecomment-1400568759 (we will update the datasets in the queue, and remove the duplicates, without having to re-run already computed responses). | feat: launch children jobs even when skipped: if we re-run a "DAG", all the steps will be processed, even if the first ones are skipped because the result is already in the cache.
It will fix the issue with https://github.com/huggingface/datasets-server/pull/694#issuecomment-1400568759 (we will update the datasets in the queue, and remove the duplicates, without having to re-run already computed responses). | closed | 2023-01-23T16:59:08Z | 2023-01-23T17:40:20Z | 2023-01-23T17:40:18Z | severo |
1,553,239,965 | feat: replace Queue.add_job with Queue.upsert_job | upsert_job ensures only one waiting job for the same set of parameters. On every call to upsert_job, all the previous waiting jobs for the same set of parameters are canceled, and a new one is created with a new "created_at" date, which means it will be put at the end of the queue. It will help to fight against datasets that are updated very often (eg, every minute): they will be treated only when there are available workers.
It's a quick PR to fix the issue that the queues are increasing faster than the availability of the workers and that most of these jobs will be later skipped. Better to reduce the number of results in the queries (by reducing the number of waiting jobs). | feat: replace Queue.add_job with Queue.upsert_job: upsert_job ensures only one waiting job for the same set of parameters. On every call to upsert_job, all the previous waiting jobs for the same set of parameters are canceled, and a new one is created with a new "created_at" date, which means it will be put at the end of the queue. It will help to fight against datasets that are updated very often (eg, every minute): they will be treated only when there are available workers.
It's a quick PR to fix the issue that the queues are increasing faster than the availability of the workers and that most of these jobs will be later skipped. Better to reduce the number of results in the queries (by reducing the number of waiting jobs). | closed | 2023-01-23T14:54:56Z | 2023-01-23T17:39:58Z | 2023-01-23T15:17:23Z | severo |
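For illustration, the upsert semantics described above can be sketched in plain Python. This is a minimal, in-memory model only: the names (`Queue`, `WaitingJob`, `upsert_job`) mirror the description but are not the project's actual MongoDB-backed implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class WaitingJob:
    # illustrative fields; the real job documents live in MongoDB
    job_type: str
    dataset: str
    config: Optional[str]
    split: Optional[str]
    created_at: datetime
    status: str = "waiting"


@dataclass
class Queue:
    jobs: List[WaitingJob] = field(default_factory=list)

    def upsert_job(
        self, job_type: str, dataset: str, config: Optional[str] = None, split: Optional[str] = None
    ) -> WaitingJob:
        # cancel every waiting job that has the same set of parameters...
        for job in self.jobs:
            same_params = (job.job_type, job.dataset, job.config, job.split) == (job_type, dataset, config, split)
            if job.status == "waiting" and same_params:
                job.status = "cancelled"
        # ...then create a new one with a fresh created_at, so it goes to the end of the queue
        new_job = WaitingJob(job_type, dataset, config, split, created_at=datetime.now(timezone.utc))
        self.jobs.append(new_job)
        return new_job
```

With these semantics, a dataset that is updated every minute never accumulates more than one waiting job per step; it is simply pushed back to the end of the queue each time.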
1,552,545,760 | Update index.mdx | Updated the link in the following text:
"give the [Datasets Server repository] a βοΈ if you're interested in the latest updates!"
It now points to the correct address of the repo: (https://github.com/huggingface/datasets-server) | Update index.mdx: Updated the link in the following text:
"give the [Datasets Server repository] a βοΈ if you're interested in the latest updates!"
It now points to the correct address of the repo: (https://github.com/huggingface/datasets-server) | closed | 2023-01-23T05:36:32Z | 2023-01-26T16:21:59Z | 2023-01-26T16:19:21Z | keleffew
1,551,873,576 | Space Viewer issue for [dataset name] | ### Link
https://huggingface.co/spaces/ivelin/ui-refexp
### Description
Hello HF team,
Not sure if this issue belongs here, but I could not find a dedicated repo for Spaces Server issues.
My space referenced in the link has been working fine for a few weeks. However this morning it started erroring with the message below without any code changes on my end that I can think of. Could you please advise what I need to do to fix it:
Regards:
```
Successfully built validators ffmpy python-multipart
Installing collected packages: rfc3986, pydub, ffmpy, zipp, websockets, watchdog, uc-micro-py, tzdata, tornado, toolz, toml, sniffio, smmap, semver, python-multipart, pyrsistent, pyparsing, pympler, pygments, pydantic, pycryptodome, pkgutil-resolve-name, orjson, mdurl, markupsafe, kiwisolver, h11, fonttools, entrypoints, decorator, cycler, contourpy, cachetools, blinker, backports.zoneinfo, validators, uvicorn, pytz-deprecation-shim, matplotlib, markdown-it-py, linkify-it-py, jinja2, importlib-resources, importlib-metadata, gitdb, anyio, tzlocal, starlette, rich, pydeck, mdit-py-plugins, jsonschema, httpcore, gitpython, httpx, fastapi, altair, streamlit, gradio
Successfully installed altair-4.2.0 anyio-3.6.2 backports.zoneinfo-0.2.1 blinker-1.5 cachetools-5.2.1 contourpy-1.0.7 cycler-0.11.0 decorator-5.1.1 entrypoints-0.4 fastapi-0.89.1 ffmpy-0.3.0 fonttools-4.38.0 gitdb-4.0.10 gitpython-3.1.30 gradio-3.16.1 h11-0.14.0 httpcore-0.16.3 httpx-0.23.3 importlib-metadata-6.0.0 importlib-resources-5.10.2 jinja2-3.1.2 jsonschema-4.17.3 kiwisolver-1.4.4 linkify-it-py-1.0.3 markdown-it-py-2.1.0 markupsafe-2.1.2 matplotlib-3.6.3 mdit-py-plugins-0.3.3 mdurl-0.1.2 orjson-3.8.5 pkgutil-resolve-name-1.3.10 pycryptodome-3.16.0 pydantic-1.10.4 pydeck-0.8.0 pydub-0.25.1 pygments-2.14.0 pympler-1.0.1 pyparsing-3.0.9 pyrsistent-0.19.3 python-multipart-0.0.5 pytz-deprecation-shim-0.1.0.post0 rfc3986-1.5.0 rich-13.2.0 semver-2.13.0 smmap-5.0.0 sniffio-1.3.0 starlette-0.22.0 streamlit-1.17.0 toml-0.10.2 toolz-0.12.0 tornado-6.2 tzdata-2022.7 tzlocal-4.2 uc-micro-py-1.0.1 uvicorn-0.20.0 validators-0.20.0 watchdog-2.2.1 websockets-10.4 zipp-3.11.0
WARNING: You are using pip version 22.0.2; however, version 22.3.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
DONE 16.9s
--> COPY --link --chown=1000 --from=lfs /app /home/user/app
DONE 0.0s
--> COPY --link --chown=1000 ./ /home/user/app
DONE 0.1s
--> Pushing image
DONE 21.2s
--> Exporting cache
DONE 4.4s
===== Application Startup at 2023-01-21 17:56:28 =====
Loading model checkpoint: ivelin/donut-refexp-combined-v1
Traceback (most recent call last):
File "app.py", line 12, in
processor = DonutProcessor.from_pretrained(pretrained_repo_name)
File "/home/user/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 184, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 228, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/image_processing_auto.py", line 307, in from_pretrained
config_dict, _ = ImageProcessingMixin.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/image_processing_utils.py", line 257, in get_image_processor_dict
resolved_image_processor_file = cached_file(
File "/home/user/.local/lib/python3.8/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1150, in hf_hub_download
with open(ref_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/.cache/huggingface/hub/models--ivelin--donut-refexp-combined-v1/refs/main'
``` | Space Viewer issue for [dataset name]: ### Link
https://huggingface.co/spaces/ivelin/ui-refexp
### Description
Hello HF team,
Not sure if this issue belongs here, but I could not find a dedicated repo for Spaces Server issues.
My space referenced in the link has been working fine for a few weeks. However this morning it started erroring with the message below without any code changes on my end that I can think of. Could you please advise what I need to do to fix it:
Regards:
```
Successfully built validators ffmpy python-multipart
Installing collected packages: rfc3986, pydub, ffmpy, zipp, websockets, watchdog, uc-micro-py, tzdata, tornado, toolz, toml, sniffio, smmap, semver, python-multipart, pyrsistent, pyparsing, pympler, pygments, pydantic, pycryptodome, pkgutil-resolve-name, orjson, mdurl, markupsafe, kiwisolver, h11, fonttools, entrypoints, decorator, cycler, contourpy, cachetools, blinker, backports.zoneinfo, validators, uvicorn, pytz-deprecation-shim, matplotlib, markdown-it-py, linkify-it-py, jinja2, importlib-resources, importlib-metadata, gitdb, anyio, tzlocal, starlette, rich, pydeck, mdit-py-plugins, jsonschema, httpcore, gitpython, httpx, fastapi, altair, streamlit, gradio
Successfully installed altair-4.2.0 anyio-3.6.2 backports.zoneinfo-0.2.1 blinker-1.5 cachetools-5.2.1 contourpy-1.0.7 cycler-0.11.0 decorator-5.1.1 entrypoints-0.4 fastapi-0.89.1 ffmpy-0.3.0 fonttools-4.38.0 gitdb-4.0.10 gitpython-3.1.30 gradio-3.16.1 h11-0.14.0 httpcore-0.16.3 httpx-0.23.3 importlib-metadata-6.0.0 importlib-resources-5.10.2 jinja2-3.1.2 jsonschema-4.17.3 kiwisolver-1.4.4 linkify-it-py-1.0.3 markdown-it-py-2.1.0 markupsafe-2.1.2 matplotlib-3.6.3 mdit-py-plugins-0.3.3 mdurl-0.1.2 orjson-3.8.5 pkgutil-resolve-name-1.3.10 pycryptodome-3.16.0 pydantic-1.10.4 pydeck-0.8.0 pydub-0.25.1 pygments-2.14.0 pympler-1.0.1 pyparsing-3.0.9 pyrsistent-0.19.3 python-multipart-0.0.5 pytz-deprecation-shim-0.1.0.post0 rfc3986-1.5.0 rich-13.2.0 semver-2.13.0 smmap-5.0.0 sniffio-1.3.0 starlette-0.22.0 streamlit-1.17.0 toml-0.10.2 toolz-0.12.0 tornado-6.2 tzdata-2022.7 tzlocal-4.2 uc-micro-py-1.0.1 uvicorn-0.20.0 validators-0.20.0 watchdog-2.2.1 websockets-10.4 zipp-3.11.0
WARNING: You are using pip version 22.0.2; however, version 22.3.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
DONE 16.9s
--> COPY --link --chown=1000 --from=lfs /app /home/user/app
DONE 0.0s
--> COPY --link --chown=1000 ./ /home/user/app
DONE 0.1s
--> Pushing image
DONE 21.2s
--> Exporting cache
DONE 4.4s
===== Application Startup at 2023-01-21 17:56:28 =====
Loading model checkpoint: ivelin/donut-refexp-combined-v1
Traceback (most recent call last):
File "app.py", line 12, in
processor = DonutProcessor.from_pretrained(pretrained_repo_name)
File "/home/user/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 184, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 228, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/image_processing_auto.py", line 307, in from_pretrained
config_dict, _ = ImageProcessingMixin.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/image_processing_utils.py", line 257, in get_image_processor_dict
resolved_image_processor_file = cached_file(
File "/home/user/.local/lib/python3.8/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1150, in hf_hub_download
with open(ref_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/.cache/huggingface/hub/models--ivelin--donut-refexp-combined-v1/refs/main'
``` | closed | 2023-01-21T18:13:59Z | 2023-01-23T14:04:52Z | 2023-01-23T10:16:12Z | ivelin |
1,551,417,698 | feat: πΈ add support for pdf2image | β
Closes: #688 | feat: πΈ add support for pdf2image: β
Closes: #688 | closed | 2023-01-20T20:33:52Z | 2023-01-23T10:17:22Z | 2023-01-23T10:17:20Z | severo |
1,551,373,294 | feat: πΈ block more datasets, and allow more /first-rows per ns | null | feat: πΈ block more datasets, and allow more /first-rows per ns: | closed | 2023-01-20T19:59:36Z | 2023-01-20T20:10:52Z | 2023-01-20T20:10:51Z | severo |
1,551,231,814 | Dataset Viewer issue for dadosdq/wallbed_dataset | ### Link
https://huggingface.co/datasets/dadosdq/wallbed_dataset
### Description
The dataset viewer is not working for dataset dadosdq/wallbed_dataset.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for dadosdq/wallbed_dataset: ### Link
https://huggingface.co/datasets/dadosdq/wallbed_dataset
### Description
The dataset viewer is not working for dataset dadosdq/wallbed_dataset.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-01-20T17:43:45Z | 2023-01-20T19:48:37Z | 2023-01-20T19:48:37Z | dadobtx |
1,551,132,436 | Add pdf2image as preinstalled package | Needed for: https://huggingface.co/datasets/jordyvl/unit-test_PDFfolder (the installation instructions for pdf2image can be found there) | Add pdf2image as preinstalled package: Needed for: https://huggingface.co/datasets/jordyvl/unit-test_PDFfolder (the installation instructions for pdf2image can be found there) | closed | 2023-01-20T16:25:49Z | 2023-01-23T21:28:50Z | 2023-01-23T16:06:03Z | mariosasko |
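For reference, pdf2image is the Python wrapper around Poppler; a typical call in a worker looks like the snippet below (it assumes the poppler-utils system package is available in the image, and the file path is just an example).

```python
from pdf2image import convert_from_path

# render each page of a PDF into a PIL.Image (pdf2image shells out to Poppler's pdftoppm)
pages = convert_from_path("example.pdf", dpi=200)
for index, page in enumerate(pages):
    page.save(f"page_{index}.png", "PNG")
```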
1,550,933,877 | feat: πΈ quick and dirty POC for the random rows endpoint | null | feat: πΈ quick and dirty POC for the random rows endpoint: | closed | 2023-01-20T14:22:57Z | 2023-03-02T08:58:06Z | 2023-03-01T10:36:26Z | severo |
1,550,835,078 | chore: π€ update resources | The splits queue is now empty, so we can reduce the number of workers. Block new datasets for parquet for now. | chore: π€ update resources: The splits queue is now empty, so we can reduce the number of workers. Block new datasets for parquet for now. | closed | 2023-01-20T13:18:49Z | 2023-01-20T13:19:11Z | 2023-01-20T13:19:10Z | severo
1,549,887,151 | Dataset Viewer issue for ivelin/rico_sca_refexp_synthetic_saved | ### Link
https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic_saved
### Description
The dataset viewer is not working for dataset ivelin/rico_sca_refexp_synthetic_saved.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for ivelin/rico_sca_refexp_synthetic_saved: ### Link
https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic_saved
### Description
The dataset viewer is not working for dataset ivelin/rico_sca_refexp_synthetic_saved.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-01-19T20:18:02Z | 2023-01-20T14:45:30Z | 2023-01-20T08:56:41Z | ivelin |
1,549,455,221 | fix: π fix memory specification + increase pods in /parquet | error was: Warning:
spec.template.spec.containers[0].resources.requests[memory]: fractional byte value "107374182400m" is invalid, must be an integer | fix: π fix memory specification + increase pods in /parquet: error was: Warning:
spec.template.spec.containers[0].resources.requests[memory]: fractional byte value "107374182400m" is invalid, must be an integer | closed | 2023-01-19T16:00:52Z | 2023-01-19T16:23:52Z | 2023-01-19T16:23:51Z | severo |
1,549,403,811 | feat: πΈ increase resources | null | feat: πΈ increase resources: | closed | 2023-01-19T15:33:43Z | 2023-01-19T15:34:21Z | 2023-01-19T15:34:20Z | severo
1,549,286,654 | feat: πΈ increase resources | null | feat: πΈ increase resources: | closed | 2023-01-19T14:35:13Z | 2023-01-19T14:35:49Z | 2023-01-19T14:35:48Z | severo |
1,549,251,776 | feat: πΈ increase number of workers for a moment | null | feat: πΈ increase number of workers for a moment: | closed | 2023-01-19T14:18:04Z | 2023-01-19T14:18:22Z | 2023-01-19T14:18:20Z | severo |
1,548,903,615 | chore: π€ add --no-cache (poetry) and --no-cache-dir (pip) | to reduce the size of the docker images
Thanks @XciD | chore: π€ add --no-cache (poetry) and --no-cache-dir (pip): to reduce the size of the docker images
Thanks @XciD | closed | 2023-01-19T10:39:24Z | 2023-01-19T13:24:16Z | 2023-01-19T13:24:14Z | severo |
1,548,224,075 | feat: πΈ add /sizes | null | feat: πΈ add /sizes: | closed | 2023-01-18T22:30:26Z | 2023-01-19T10:34:10Z | 2023-01-19T10:34:09Z | severo |
1,534,681,151 | ci: π‘ fix app token | see https://github.com/huggingface/moon-landing/pull/5106 (internal) | ci: π‘ fix app token: see https://github.com/huggingface/moon-landing/pull/5106 (internal) | closed | 2023-01-16T10:26:01Z | 2023-01-16T12:30:04Z | 2023-01-16T12:30:03Z | severo |
1,532,968,510 | Create children in generic worker | Extracting (from https://github.com/huggingface/datasets-server/pull/670) the logic to create the children jobs | Create children in generic worker: Extracting (from https://github.com/huggingface/datasets-server/pull/670) the logic to create the children jobs | closed | 2023-01-13T22:02:51Z | 2023-01-16T12:53:55Z | 2023-01-16T12:53:54Z | severo |
1,529,177,289 | fix: π only check webhook payload for what we are interested in | Checking the payload in detail (e.g. whether the URL field is well formed) while not using it afterward is not useful. It even led to breaking the webhook after https://github.com/huggingface/moon-landing/pull/4477 (internal link) where the "url" field format had changed.
cc @SBrandeis @coyotte508 fyi | fix: π only check webhook payload for what we are interested in: Checking the payload in detail (e.g. whether the URL field is well formed) while not using it afterward is not useful. It even led to breaking the webhook after https://github.com/huggingface/moon-landing/pull/4477 (internal link) where the "url" field format had changed.
cc @SBrandeis @coyotte508 fyi | closed | 2023-01-11T14:39:43Z | 2023-01-11T15:01:12Z | 2023-01-11T15:01:11Z | severo |
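A sketch of the lenient approach described here, in Python: only the fields the service actually consumes are checked, and anything else in the payload (for example a malformed URL) is ignored. The field names are illustrative, not the exact Hub webhook schema.

```python
from typing import Any, Optional


def parse_dataset_name(payload: Any) -> Optional[str]:
    """Return the dataset name if the payload concerns a dataset repo, else None.

    Only the fields we use are validated; extra or badly formed fields do not
    make the webhook handler fail.
    """
    if not isinstance(payload, dict):
        return None
    repo = payload.get("repo")
    if not isinstance(repo, dict) or repo.get("type") != "dataset":
        return None
    name = repo.get("name")
    return name if isinstance(name, str) and name else None
```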
1,520,419,410 | feat: πΈ allow more concurrent jobs for the same namespace | needed today for allenai, see #674 | feat: πΈ allow more concurrent jobs for the same namespace: needed today for allenai, see #674 | closed | 2023-01-05T09:41:23Z | 2023-01-05T09:41:36Z | 2023-01-05T09:41:35Z | severo
1,520,382,528 | Dataset Viewer issue for allenai/soda | ### Link
https://huggingface.co/datasets/allenai/soda
### Description
The dataset viewer is not working for dataset allenai/soda.
Error details:
```
Error code: ResponseNotReady
```
Please help! ππ» | Dataset Viewer issue for allenai/soda: ### Link
https://huggingface.co/datasets/allenai/soda
### Description
The dataset viewer is not working for dataset allenai/soda.
Error details:
```
Error code: ResponseNotReady
```
Please help! ππ» | closed | 2023-01-05T09:16:14Z | 2023-01-05T09:43:58Z | 2023-01-05T09:40:20Z | skywalker023 |
1,518,706,736 | Dataset Viewer issue for xfact/nq-dpr | ### Link
https://huggingface.co/datasets/xfact/nq-dpr
### Description
The dataset viewer is not working for dataset xfact/nq-dpr.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for xfact/nq-dpr: ### Link
https://huggingface.co/datasets/xfact/nq-dpr
### Description
The dataset viewer is not working for dataset xfact/nq-dpr.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-01-04T10:21:02Z | 2023-01-05T08:46:44Z | 2023-01-05T08:46:44Z | euiyulsong |
1,517,129,352 | feat: πΈ create orchestrator service to run the DAGs | null | feat: πΈ create orchestrator service to run the DAGs: | closed | 2023-01-03T09:28:39Z | 2024-01-26T09:01:25Z | 2023-02-28T16:12:32Z | severo |
1,516,555,986 | feat: πΈ update the HF webhook content | We don't use the new fields for now. | feat: πΈ update the HF webhook content: We don't use the new fields for now. | closed | 2023-01-02T16:31:58Z | 2023-01-02T17:21:53Z | 2023-01-02T17:21:52Z | severo |
1,509,802,334 | Create endpoint /dataset-info | null | Create endpoint /dataset-info: | closed | 2022-12-23T21:45:59Z | 2023-01-18T21:26:26Z | 2023-01-18T21:26:25Z | severo |
1,509,232,092 | chore: π€ speed up docker build | In the next builds, the cache will be reused if only src/ has been modified, which should mostly help with the workers/datasets_based image, because the poetry install of dependencies takes a lot of time. | chore: π€ speed up docker build: In the next builds, the cache will be reused if only src/ has been modified, which should mostly help with the workers/datasets_based image, because the poetry install of dependencies takes a lot of time. | closed | 2022-12-23T11:23:47Z | 2022-12-23T13:14:19Z | 2022-12-23T13:14:19Z | severo
1,506,436,148 | Split Worker into WorkerLoop, WorkerFactory and Worker | null | Split Worker into WorkerLoop, WorkerFactory and Worker: | closed | 2022-12-21T14:59:31Z | 2022-12-23T10:55:05Z | 2022-12-23T10:55:04Z | severo |
1,505,361,123 | feat: πΈ give each worker its own version + upgrade to 2.0.0 | As the datasets library has been upgraded, we want to bump the major version of the workers so that jobs are not skipped when re-run on the same commit (previously erroneous ones may now succeed) | feat: πΈ give each worker its own version + upgrade to 2.0.0: As the datasets library has been upgraded, we want to bump the major version of the workers so that jobs are not skipped when re-run on the same commit (previously erroneous ones may now succeed) | closed | 2022-12-20T21:51:54Z | 2022-12-20T22:10:59Z | 2022-12-20T22:10:58Z | severo
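A simplified sketch of the skip rule this relies on (illustrative names, not the actual worker code): a cached response is reused only when both the dataset commit and the worker's major version are unchanged, so bumping 1.x.y to 2.0.0 forces every response to be recomputed.

```python
def major(version: str) -> int:
    # "2.0.0" -> 2
    return int(version.split(".")[0])


def can_skip(cached_worker_version: str, cached_dataset_sha: str,
             current_worker_version: str, current_dataset_sha: str) -> bool:
    # skip only if the dataset commit is the same AND the worker major version
    # has not changed; a major bump therefore triggers a re-run even on the
    # same commit, letting previously erroneous responses be recomputed
    return (
        cached_dataset_sha == current_dataset_sha
        and major(cached_worker_version) == major(current_worker_version)
    )
```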