id | title | body | description | state | created_at | updated_at | closed_at | user |
---|---|---|---|---|---|---|---|---|
1,235,528,600 | Prod env | null | Prod env: | closed | 2022-05-13T18:00:44Z | 2022-05-13T18:01:32Z | 2022-05-13T18:01:31Z | severo |
1,235,503,749 | Adapt the sleep time of the workers | When a worker just finished a job, it should ask for another job right away.
But if it has already polled the dataset multiple times and got no job, it should increase the sleep time between two polls, in order to avoid hammering the database.
By the way, it might just be:
- finish a job: directly ask for a new one
- else sleep for a constant (and large) duration, eg. 10s | Adapt the sleep time of the workers: When a worker just finished a job, it should ask for another job right away.
But if it has already polled the dataset multiple times and got no job, it should increase the sleep time between two polls, in order to avoid hammering the database.
By the way, it might just be:
- finish a job: directly ask for a new one
- else sleep for a constant (and large) duration, eg. 10s | closed | 2022-05-13T17:29:55Z | 2022-05-13T18:01:32Z | 2022-05-13T18:01:32Z | severo |
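A minimal sketch of that polling strategy, assuming hypothetical `get_job()` and `process()` helpers rather than the actual worker API:

```python
import random
import time

MAX_SLEEP_SECONDS = 10  # constant (and large) upper bound between polls


def poll_loop(get_job, process):
    """Ask for a new job right after finishing one; otherwise back off progressively."""
    sleep_seconds = 1.0
    while True:
        job = get_job()  # assumed to return None when the queue is empty
        if job is not None:
            process(job)
            sleep_seconds = 1.0  # a job just finished: ask again right away
            continue
        # no job found: sleep, then increase the delay up to the cap
        time.sleep(sleep_seconds + random.uniform(0, 1))  # jitter so workers don't poll in sync
        sleep_seconds = min(sleep_seconds * 2, MAX_SLEEP_SECONDS)
```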
1,235,488,464 | Add a way to gracefully stop the workers | Currently, if we stop the workers:
```
kubectl scale --replicas=0 deploy/datasets-server-prod-datasets-worker
kubectl scale --replicas=0 deploy/datasets-server-prod-splits-worker
```
the started jobs will remain forever and potentially will block other jobs from the same dataset (because of MAX_JOBS_PER_DATASET).
We want:
1. to detect dead jobs (the worker has been removed), and move them to the waiting status again
2. also if possible: clean the workers (ie: stop the current jobs, and move them again to the waiting status - or move to INTERRUPTED state, and create a new waiting one) before stopping the worker | Add a way to gracefully stop the workers: Currently, if we stop the workers:
```
kubectl scale --replicas=0 deploy/datasets-server-prod-datasets-worker
kubectl scale --replicas=0 deploy/datasets-server-prod-splits-worker
```
the started jobs will remain forever and potentially will block other jobs from the same dataset (because of MAX_JOBS_PER_DATASET).
We want:
1. to detect dead jobs (the worker has been removed), and move them to the waiting status again
2. also if possible: clean the workers (ie: stop the current jobs, and move them again to the waiting status - or move to INTERRUPTED state, and create a new waiting one) before stopping the worker | closed | 2022-05-13T17:11:09Z | 2022-09-19T09:56:17Z | 2022-09-19T09:56:16Z | severo |
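A rough sketch of point 2, assuming a worker loop like the one above; the signal handling is standard Python, while `get_job`, `process` and `release_job` are hypothetical helpers:

```python
import signal
import time


class GracefulWorker:
    """Finish (or release) the current job when Kubernetes sends SIGTERM, then exit."""

    def __init__(self):
        self.stopping = False
        signal.signal(signal.SIGTERM, self._stop)
        signal.signal(signal.SIGINT, self._stop)

    def _stop(self, signum, frame):
        self.stopping = True  # let the current iteration finish cleanly

    def run(self, get_job, process, release_job):
        while not self.stopping:
            job = get_job()
            if job is None:
                time.sleep(1)
                continue
            try:
                process(job)
            except BaseException:
                release_job(job)  # move the job back to the waiting status
                raise
        # here, any job still marked as started by this worker could be requeued
```

Note that Kubernetes only waits `terminationGracePeriodSeconds` (30s by default) after SIGTERM, so long-running jobs would still need point 1 (detecting dead jobs) as a fallback.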
1,235,232,724 | fix: 🐛 add support for mongodb+srv:// URLs using dnspython | See https://stackoverflow.com/a/71749071/7351594. | fix: 🐛 add support for mongodb+srv:// URLs using dnspython: See https://stackoverflow.com/a/71749071/7351594. | closed | 2022-05-13T13:17:30Z | 2022-05-13T13:18:01Z | 2022-05-13T13:18:00Z | severo |
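For context, `mongodb+srv://` URLs make pymongo resolve the cluster hosts through DNS SRV records, which requires the `dnspython` package; a minimal illustration with a placeholder connection string:

```python
from pymongo import MongoClient

# A "mongodb+srv://" URI triggers a DNS SRV lookup that pymongo delegates to dnspython.
# Without dnspython installed, MongoClient raises pymongo.errors.ConfigurationError.
client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/datasets_server")
```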
1,235,088,427 | feat: 🎸 upgrade images to get /prometheus endpoint | null | feat: 🎸 upgrade images to get /prometheus endpoint: | closed | 2022-05-13T10:57:31Z | 2022-05-13T10:57:38Z | 2022-05-13T10:57:37Z | severo |
1,235,017,107 | [infra] Add monitoring to the hub-ephemeral namespace | It does not belong to this project, but it's needed to test the ServiceMonitor (#260)
cc @XciD | [infra] Add monitoring to the hub-ephemeral namespace: It does not belong to this project, but it's needed to test the ServiceMonitor (#260)
cc @XciD | closed | 2022-05-13T09:56:37Z | 2022-09-16T17:42:48Z | 2022-09-16T17:42:48Z | severo |
1,235,014,687 | Add service monitor | null | Add service monitor: | closed | 2022-05-13T09:54:14Z | 2022-05-16T13:51:38Z | 2022-05-16T13:51:37Z | severo |
1,234,180,599 | Support big-bench | see the thread by @lhoestq on Slack: https://huggingface.slack.com/archives/C034N0A7H09/p1652370311934619?thread_ts=1651846540.985739&cid=C034N0A7H09
```
pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"
```
> it has some dependencies though - just make sure it's compatible with what you have already
| Support big-bench: see the thread by @lhoestq on Slack: https://huggingface.slack.com/archives/C034N0A7H09/p1652370311934619?thread_ts=1651846540.985739&cid=C034N0A7H09
```
pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"
```
> it has some dependencies though - just make sure it's compatible with what you have already
| closed | 2022-05-12T15:47:45Z | 2024-02-02T17:15:44Z | 2024-02-02T17:15:44Z | severo |
1,234,162,186 | Add metrics | null | Add metrics: | closed | 2022-05-12T15:31:37Z | 2022-05-13T09:30:04Z | 2022-05-13T09:30:03Z | severo |
1,233,720,391 | Setup alerts | Once #250 is done, we will be able to set up alerts when something goes wrong.
See https://prometheus.io/docs/practices/alerting/ for best practices. | Setup alerts: Once #250 is done, we will be able to set up alerts when something goes wrong.
See https://prometheus.io/docs/practices/alerting/ for best practices. | closed | 2022-05-12T09:35:37Z | 2022-09-16T17:42:57Z | 2022-09-16T17:42:57Z | severo |
1,233,708,296 | Implement a heartbeat client for the jobs | from https://prometheus.io/docs/practices/instrumentation/#offline-processing
> Knowing the last time that a system processed something is useful for detecting if it has stalled, but it is very localised information. A better approach is to send a heartbeat through the system: some dummy item that gets passed all the way through and includes the timestamp when it was inserted. Each stage can export the most recent heartbeat timestamp it has seen, letting you know how long items are taking to propagate through the system. For systems that do not have quiet periods where no processing occurs, an explicit heartbeat may not be needed.
In the context of the datasets-server, it might be done by asking to refresh the cache for a very small dataset, every 10 minutes for example. | Implement a heartbeat client for the jobs: from https://prometheus.io/docs/practices/instrumentation/#offline-processing
> Knowing the last time that a system processed something is useful for detecting if it has stalled, but it is very localised information. A better approach is to send a heartbeat through the system: some dummy item that gets passed all the way through and includes the timestamp when it was inserted. Each stage can export the most recent heartbeat timestamp it has seen, letting you know how long items are taking to propagate through the system. For systems that do not have quiet periods where no processing occurs, an explicit heartbeat may not be needed.
In the context of the datasets-server, it might be done by asking to refresh the cache for a very small dataset, every 10 minutes for example. | closed | 2022-05-12T09:25:09Z | 2022-09-16T17:43:25Z | 2022-09-16T17:43:25Z | severo |
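A sketch of that idea, assuming a hypothetical `add_job()` helper that enqueues a cache refresh and a placeholder dataset name:

```python
import time

HEARTBEAT_DATASET = "some/tiny-dataset"  # placeholder: any very small dataset would do
HEARTBEAT_INTERVAL_SECONDS = 600         # every 10 minutes


def heartbeat_loop(add_job):
    """Periodically enqueue a refresh of a tiny dataset; its job timestamps act as the heartbeat."""
    while True:
        add_job(HEARTBEAT_DATASET)
        time.sleep(HEARTBEAT_INTERVAL_SECONDS)
```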
1,233,655,686 | Create a custom nginx image? | I think it would be clearer to create a custom nginx image, in /services/reverse-proxy, than the current "hack" with a template and env vars on the official nginx image.
This way, all the services (API, worker, reverse-proxy) would follow the same flow. | Create a custom nginx image?: I think it would be clearer to create a custom nginx image, in /services/reverse-proxy, than the current "hack" with a template and env vars on the official nginx image.
This way, all the services (API, worker, reverse-proxy) would follow the same flow. | closed | 2022-05-12T08:48:12Z | 2022-09-16T17:43:30Z | 2022-09-16T17:43:30Z | severo |
1,233,639,231 | feat: 🎸 use images with datasets 2.2.1 | null | feat: 🎸 use images with datasets 2.2.1: | closed | 2022-05-12T08:35:47Z | 2022-05-12T08:48:35Z | 2022-05-12T08:48:34Z | severo |
1,233,635,900 | feat: 🎸 upgrade datasets to 2.2.1 | null | feat: 🎸 upgrade datasets to 2.2.1: | closed | 2022-05-12T08:32:51Z | 2022-05-12T08:33:26Z | 2022-05-12T08:33:25Z | severo |
1,233,630,927 | Upgrade datasets to 2.2.1 | https://github.com/huggingface/datasets/releases/tag/2.2.1 | Upgrade datasets to 2.2.1: https://github.com/huggingface/datasets/releases/tag/2.2.1 | closed | 2022-05-12T08:28:27Z | 2022-05-12T08:52:02Z | 2022-05-12T08:52:02Z | severo |
1,233,604,146 | Autoscale the worker pods | Once the prod is done (#223), we might want to autoscale the number of worker pods. | Autoscale the worker pods: Once the prod is done (#223), we might want to autoscale the number of worker pods. | closed | 2022-05-12T08:06:54Z | 2022-06-08T08:36:20Z | 2022-06-08T08:36:20Z | severo |
1,233,602,101 | Setup prometheus + grafana | Related to #2
- [x] expose a `/metrics` endpoint using the Prometheus spec in the API, eg using https://github.com/prometheus/client_python - see #258. Beware: cache and queue metrics removed after https://github.com/huggingface/datasets-server/issues/279
- [x] Use a ServiceMonitor in the Helm chart: https://github.com/huggingface/tensorboard-launcher/blob/main/kube/templates/servicemonitor.yaml: see #260
- [x] create a dashboard in grafana. The recommended process is to:
- [x] ensure Grafana can see the metrics: OK, at https://grafana.huggingface.tech/explore?orgId=1&left=%7B%22datasource%22:%22Prometheus%20EKS%20Hub%20Prod%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:false,%22exemplar%22:false,%22expr%22:%22%7Bjob%3D%5C%22datasets-server-prod-api%5C%22,%20__name__%3D~%5C%22.%2B%5C%22%7D%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D
- [x] read the [doc](https://github.com/huggingface/infra/blob/main/projects/monitoring/README.md#dashboard-development-workflow) written by @McPatate
- [x] create a new JSON in https://github.com/huggingface/infra/blob/main/projects/monitoring/grafana/dashboards/hub/, copying one of the existing ones
- [x] create a PR: an ephemeral grafana will be available to test
- [x] tweak the dashboard in the grafana frontend: https://grafana.huggingface.tech/?orgId=1 -> save: will give an error but will allow you to download the JSON to paste into the PR
- [x] add metrics about the cache and the queue. See discussion in https://github.com/huggingface/datasets-server/issues/279 and work on #310 | Setup prometheus + grafana: Related to #2
- [x] expose a `/metrics` endpoint using the Prometheus spec in the API, eg using https://github.com/prometheus/client_python - see #258. Beware: cache and queue metrics removed after https://github.com/huggingface/datasets-server/issues/279
- [x] Use a ServiceMonitor in the Helm chart: https://github.com/huggingface/tensorboard-launcher/blob/main/kube/templates/servicemonitor.yaml: see #260
- [x] create a dashboard in grafana. The recommended process is to:
- [x] ensure Grafana can see the metrics: OK, at https://grafana.huggingface.tech/explore?orgId=1&left=%7B%22datasource%22:%22Prometheus%20EKS%20Hub%20Prod%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:false,%22exemplar%22:false,%22expr%22:%22%7Bjob%3D%5C%22datasets-server-prod-api%5C%22,%20__name__%3D~%5C%22.%2B%5C%22%7D%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D
- [x] read the [doc](https://github.com/huggingface/infra/blob/main/projects/monitoring/README.md#dashboard-development-workflow) written by @McPatate
- [x] create a new JSON in https://github.com/huggingface/infra/blob/main/projects/monitoring/grafana/dashboards/hub/, copying one of the existing ones
- [x] create a PR: an ephemeral grafana will be available to test
- [x] tweak the dashboard in the grafana frontend: https://grafana.huggingface.tech/?orgId=1 -> save: will give an error but will allow you to download the JSON to paste into the PR
- [x] add metrics about the cache and the queue. See discussion in https://github.com/huggingface/datasets-server/issues/279 and work on #310 | closed | 2022-05-12T08:05:04Z | 2022-08-03T18:23:02Z | 2022-06-03T13:53:57Z | severo |
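A minimal sketch of the `/metrics` item above, using `prometheus_client` with a Starlette app served by uvicorn; names are illustrative, not the actual implementation:

```python
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import Response
from starlette.routing import Route

# example metric; cache and queue gauges would be registered the same way
REQUESTS = Counter("api_requests_total", "Number of API requests", ["endpoint"])


async def healthcheck(request: Request) -> Response:
    REQUESTS.labels(endpoint="/healthcheck").inc()
    return Response("ok")


async def metrics(request: Request) -> Response:
    # expose every registered metric in the Prometheus text exposition format
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)


app = Starlette(routes=[Route("/healthcheck", healthcheck), Route("/metrics", metrics)])
```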
1,233,593,210 | Create the prod infrastructure on Kubernetes | - [x] 4 nodes (4 machines t2.2xlarge, 8 vCPU, 32 GB RAM)
- [x] NFS -> 4TB - To increase the size later: directly on AWS, eg https://us-east-1.console.aws.amazon.com/fsx/home?region=us-east-1#file-system-details/fs-097afa9688029b62a (terraform does not allow to change the size of a storage once created, to avoid deleting data by error). Also: see "Monitoring" on the same page to monitor and alert in case of low available storage. Alert: https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#alarmsV2:alarm/Low+disk+on+datasets+server?
- [x] Mongo (autoscale, size: 1GB to start) - URL in a secret: `mongodb-url`, with env var `MONGO_URL`: https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#clusters/detail/datasets-server-prod. Beware: it uses a URL with `mongodb+srv://` which was not supported by the code -> fixed here: #263
- [x] secret `hf-token` with env var `HF_TOKEN`
- [x] domain `datasets-server.huggingface.tech`
- [x] own namespace: `datasets-server`
- [x] selector: `role-datasets-server`. Prod cluster. See https://github.com/huggingface/tensorboard-launcher/blob/03a8c7c8d2f23fe21ec36c7a83ae2a5e7f7a3833/kube/env/prod.yaml#L22-L28
- [x] create the prod environment in the helm chart. resources: ensure the API and nginx always have reserved resources, by setting limit and request to the same value (1?). For the worker: spawn 6? workers per node, i.e. 24 in total, with resources: limit=1, request=0.01. | Create the prod infrastructure on Kubernetes: - [x] 4 nodes (4 machines t2.2xlarge, 8 vCPU, 32 GB RAM)
- [x] NFS -> 4TB - To increase the size later: directly on AWS, eg https://us-east-1.console.aws.amazon.com/fsx/home?region=us-east-1#file-system-details/fs-097afa9688029b62a (terraform does not allow to change the size of a storage once created, to avoid deleting data by error). Also: see "Monitoring" on the same page to monitor and alert in case of low available storage. Alert: https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#alarmsV2:alarm/Low+disk+on+datasets+server?
- [x] Mongo (autoscale, size: 1GB to start) - URL in a secret: `mongodb-url`, with env var `MONGO_URL`: https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#clusters/detail/datasets-server-prod. Beware: it uses a URL with `mongodb+srv://` which was not supported by the code -> fixed here: #263
- [x] secret `hf-token` with env var `HF_TOKEN`
- [x] domain `datasets-server.huggingface.tech`
- [x] own namespace: `datasets-server`
- [x] selector: `role-datasets-server`. Prod cluster. See https://github.com/huggingface/tensorboard-launcher/blob/03a8c7c8d2f23fe21ec36c7a83ae2a5e7f7a3833/kube/env/prod.yaml#L22-L28
- [x] create the prod environment in the helm chart. resources: ensure the API and nginx always have reserved resources, by setting limit and request to the same value (1?). For the worker: spawn 6? workers per node, i.e. 24 in total, with resources: limit=1, request=0.01. | closed | 2022-05-12T07:57:07Z | 2022-05-13T18:02:51Z | 2022-05-13T18:02:51Z | severo |
1,232,830,863 | Check if shared /cache is an issue for the workers | All the workers (in the kubernetes infra) share the same `datasets` library directory, both for the data and the modules. We must check if this shared access in read/write mode can lead to inconsistencies.
The alternative would be to create an empty cache for every new worker. | Check if shared /cache is an issue for the workers: All the workers (in the kubernetes infra) share the same `datasets` library directory, both for the data and the modules. We must check if this shared access in read/write mode can lead to inconsistencies.
The alternative would be to create an empty cache for every new worker. | closed | 2022-05-11T15:23:39Z | 2022-06-23T12:18:01Z | 2022-06-23T10:27:58Z | severo |
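If the shared cache does turn out to be a problem, the per-worker alternative could look roughly like this; the `HF_DATASETS_CACHE` environment variable is the standard way to relocate the `datasets` cache, while the temporary-directory choice is just an illustration:

```python
import os
import tempfile

# Give this worker its own, initially empty, datasets cache instead of the shared NFS one.
worker_cache = tempfile.mkdtemp(prefix="datasets-worker-cache-")
os.environ["HF_DATASETS_CACHE"] = worker_cache

import datasets  # noqa: E402  -- must be imported after the environment variable is set
```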
1,232,792,549 | feat: 🎸 upgrade the docker images to use datasets 2.2.0 | null | feat: 🎸 upgrade the docker images to use datasets 2.2.0: | closed | 2022-05-11T14:58:39Z | 2022-05-11T14:58:53Z | 2022-05-11T14:58:51Z | severo |
1,232,763,262 | feat: 🎸 upgrade datasets to 2.2.0 | Fixes https://github.com/huggingface/datasets-server/issues/243 | feat: 🎸 upgrade datasets to 2.2.0: Fixes https://github.com/huggingface/datasets-server/issues/243 | closed | 2022-05-11T14:40:15Z | 2022-05-11T14:54:57Z | 2022-05-11T14:54:56Z | severo |
1,232,577,566 | Nginx proxy | null | Nginx proxy: | closed | 2022-05-11T12:41:20Z | 2022-05-11T14:13:47Z | 2022-05-11T14:13:06Z | severo |
1,232,227,132 | Create the HF_TOKEN secret in infra | See https://github.com/huggingface/datasets-server/pull/236#discussion_r870009514
As of now, the secret containing the HF_TOKEN is created manually | Create the HF_TOKEN secret in infra: See https://github.com/huggingface/datasets-server/pull/236#discussion_r870009514
As of now, the secret containing the HF_TOKEN is created manually | closed | 2022-05-11T08:44:48Z | 2022-09-19T08:58:12Z | 2022-09-19T08:58:12Z | severo |
1,232,132,676 | Upgrade datasets to 2.2.0 | https://github.com/huggingface/datasets/releases/tag/2.2.0 | Upgrade datasets to 2.2.0: https://github.com/huggingface/datasets/releases/tag/2.2.0 | closed | 2022-05-11T07:24:32Z | 2022-05-11T14:54:56Z | 2022-05-11T14:54:56Z | severo |
1,231,336,429 | Nfs | null | Nfs: | closed | 2022-05-10T15:25:53Z | 2022-05-10T15:52:03Z | 2022-05-10T15:52:02Z | severo |
1,231,323,674 | Setup the users directly in the images, not in Kubernetes? | See the second point in https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/: using `runAsUser` / `runAsGroup` is a (relative) security risk.
| Setup the users directly in the images, not in Kubernetes?: See the second point in https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/: using `runAsUser` / `runAsGroup` is a (relative) security risk.
| closed | 2022-05-10T15:15:49Z | 2022-09-19T08:57:20Z | 2022-09-19T08:57:20Z | severo |
1,230,849,874 | Kube: restrict the rights | In the deployments, run with a non-root user:
```
securityContext:
  runAsGroup: 1000
  runAsNonRoot: true
  runAsUser: 1000
```
Beware: just adding this generates errors (Permission denied) when trying to write to the mounted volumes. We have to mount the volumes with write permissions for that user | Kube: restrict the rights: In the deployments, run with a non-root user:
```
securityContext:
  runAsGroup: 1000
  runAsNonRoot: true
  runAsUser: 1000
```
Beware: just adding this generates errors (Permission denied) when trying to write to the mounted volumes. We have to mount the volumes with write permissions for that user | closed | 2022-05-10T09:06:59Z | 2022-05-11T15:20:39Z | 2022-05-11T15:20:38Z | severo |
1,230,717,705 | Monitor the RAM used by the workers | Since https://github.com/huggingface/datasets-server/pull/236/commits/12646ee6b7bd72e7563aeb0c16dfcf08eace9fb8, the workers loop infinitely (instead of being stopped after the first job, then restarted by pm2 or k8s).
This might lead to increased RAM use. We should:
- [ ] monitor the usage
- [ ] check the code for possible memory leaks (note that part of the code that is executed is out of our control: the datasets scripts)
| Monitor the RAM used by the workers: Since https://github.com/huggingface/datasets-server/pull/236/commits/12646ee6b7bd72e7563aeb0c16dfcf08eace9fb8, the workers loop infinitely (instead of being stopped after the first job, then restarted by pm2 or k8s).
This might lead to increased RAM use. We should:
- [ ] monitor the usage
- [ ] check the code for possible memory leaks (note that part of the code that is executed is out of our control: the datasets scripts)
| closed | 2022-05-10T07:11:38Z | 2022-06-08T08:39:31Z | 2022-06-08T08:39:30Z | severo |
1,229,647,375 | In the CI, test if two instances can be deployed in the same Kube namespace | https://github.com/huggingface/datasets-server/pull/227#issuecomment-1120715631
> Did you try to install it twice? with another domain? it's a good test to see if your helm chart works with multiple instances
we would need to:
- have two files in env/: test1.yaml, test2.yaml, each one with its own domain (datasets-server-test1.us.dev.moon.huggingface.tech, datasets-server2.us.dev.moon.huggingface.tech)
- install both releases in the `hub` namespace
- ensure both are accessible (after a small delay)
- uninstall them | In the CI, test if two instances can be deployed in the same Kube namespace: https://github.com/huggingface/datasets-server/pull/227#issuecomment-1120715631
> Did you try to install it twice? with another domain? it's a good test to see if your helm chart works with multiple instances
we would need to:
- have two files in env/: test1.yaml, test2.yaml, each one with its own domain (datasets-server-test1.us.dev.moon.huggingface.tech, datasets-server2.us.dev.moon.huggingface.tech)
- install both releases in the `hub` namespace
- ensure both are accessible (after a small delay)
- uninstall them | closed | 2022-05-09T12:41:19Z | 2022-09-16T17:44:27Z | 2022-09-16T17:44:27Z | severo |
1,229,363,293 | Move the infra docs/ to Notion | See https://github.com/huggingface/datasets-server/pull/227#issuecomment-1120712847 | Move the infra docs/ to Notion: See https://github.com/huggingface/datasets-server/pull/227#issuecomment-1120712847 | closed | 2022-05-09T08:28:11Z | 2022-07-29T15:47:09Z | 2022-07-29T15:47:09Z | severo |
1,227,954,103 | Add datasets-server-worker to the Kube cluster | see #223 | Add datasets-server-worker to the Kube cluster: see #223 | closed | 2022-05-06T14:39:14Z | 2022-05-11T08:55:39Z | 2022-05-11T08:55:38Z | severo |
1,227,889,662 | Define the number of replicas and uvicorn workers of the API | The API runs on uvicorn, with a number of workers (see `webConcurrency`). And Kubernetes allows running multiple pods (`replicas`) for the same app.
How to set these two numbers?
I imagine that `replicas` is easier to change dynamically (we scale up or down)
Do you have any advice @XciD? | Define the number of replicas and uvicorn workers of the API: The API runs on uvicorn, with a number of workers (see `webConcurrency`). And Kubernetes allows running multiple pods (`replicas`) for the same app.
How to set these two numbers?
I imagine that `replicas` is easier to change dynamically (we scale up or down)
Do you have any advice @XciD? | closed | 2022-05-06T13:44:11Z | 2022-09-16T17:44:18Z | 2022-09-16T17:44:18Z | severo |
1,227,876,889 | Add an nginx reverse proxy in front of the API | It will allow us to:
1. serve the assets directly
2. cache the responses
The `image:` should point to the official nginx docker image
The nginx config can be generated as a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/):
- see https://github.com/huggingface/gitaly-internals/blob/main/kube/gitaly/templates/gitaly/config.yaml
- [mounted](https://github.com/huggingface/gitaly-internals/blob/d7e20a22bac939c1ab49d1a15e5523d2b3015cdd/kube/gitaly/templates/gitaly/statefulset.yaml#L106-L108) as a [volume](https://github.com/huggingface/gitaly-internals/blob/d7e20a22bac939c1ab49d1a15e5523d2b3015cdd/kube/gitaly/templates/gitaly/statefulset.yaml#L124-L128)
- see https://github.com/huggingface/datasets-server/blob/af9a93d4b810c2353de01369ad49f2065272af85/services/api/INSTALL.md for the content of the config (cache + assets/)
| Add an nginx reverse proxy in front of the API: It will allow us to:
1. serve the assets directly
2. cache the responses
The `image:` should point to the official nginx docker image
The nginx config can be generated as a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/):
- see https://github.com/huggingface/gitaly-internals/blob/main/kube/gitaly/templates/gitaly/config.yaml
- [mounted](https://github.com/huggingface/gitaly-internals/blob/d7e20a22bac939c1ab49d1a15e5523d2b3015cdd/kube/gitaly/templates/gitaly/statefulset.yaml#L106-L108) as a [volume](https://github.com/huggingface/gitaly-internals/blob/d7e20a22bac939c1ab49d1a15e5523d2b3015cdd/kube/gitaly/templates/gitaly/statefulset.yaml#L124-L128)
- see https://github.com/huggingface/datasets-server/blob/af9a93d4b810c2353de01369ad49f2065272af85/services/api/INSTALL.md for the content of the config (cache + assets/)
| closed | 2022-05-06T13:32:23Z | 2022-05-11T14:14:01Z | 2022-05-11T14:14:00Z | severo |
1,227,871,322 | Add an authentication layer to access the dev environment | See https://github.com/huggingface/moon-landing/blob/2d7150500997f57eba0f137a8cb46ab5678d082d/infra/hub/env/ephemeral.yaml#L33-L34
| Add an authentication layer to access the dev environment: See https://github.com/huggingface/moon-landing/blob/2d7150500997f57eba0f137a8cb46ab5678d082d/infra/hub/env/ephemeral.yaml#L33-L34
| closed | 2022-05-06T13:27:49Z | 2022-09-16T17:44:35Z | 2022-09-16T17:44:35Z | severo |
1,227,835,539 | Apply the migrations to the mongodb databases | We can do it using a container launched as an [initContainer](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)
This container should be idempotent and apply all the remaining migrations (which means that the migrations should be registered in the database)
See https://github.com/huggingface/gitaly-internals/blob/main/kube/gitaly/templates/praefect/deployment.yaml#L85
| Apply the migrations to the mongodb databases: We can do it using a container launched as an [initContainer](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)
This container should be idempotent and apply all the remaining migrations (which means that the migrations should be registered in the database)
See https://github.com/huggingface/gitaly-internals/blob/main/kube/gitaly/templates/praefect/deployment.yaml#L85
| closed | 2022-05-06T12:58:07Z | 2022-09-16T17:44:54Z | 2022-09-16T17:44:54Z | severo |
1,227,766,866 | Launch a test suite on every Helm upgrade | See https://helm.sh/docs/topics/chart_tests/ | Launch a test suite on every Helm upgrade: See https://helm.sh/docs/topics/chart_tests/ | closed | 2022-05-06T11:52:02Z | 2022-09-16T17:45:00Z | 2022-09-16T17:45:00Z | severo |
1,227,766,283 | Use an ephemeral env on Kubernetes to run the e2e tests | Instead of relying on docker-compose, which might differ too much from the production environment
Requires #229
| Use an ephemeral env on Kubernetes to run the e2e tests: Instead of relying on docker-compose, which might differ too much from the production environment
Requires #229
| closed | 2022-05-06T11:51:21Z | 2022-09-16T17:45:07Z | 2022-09-16T17:45:07Z | severo |
1,227,765,303 | Create one ephemeral environment per Pull Request | It will allow us to test every branch.
See how it's done in moonlanding:
- https://github.com/huggingface/moon-landing/blob/main/.github/workflows/docker.yml#L68: one domain per branch, calling Helm from GitHub action
- https://github.com/huggingface/moon-landing/blob/main/.github/workflows/ephemeral-clean.yml: remove the objects once the PR is merged
- https://github.com/huggingface/moon-landing/blob/main/.github/workflows/docker.yml#L71: clean the ephemeral objects after two days | Create one ephemeral environment per Pull Request: It will allow us to test every branch.
See how it's done in moonlanding:
- https://github.com/huggingface/moon-landing/blob/main/.github/workflows/docker.yml#L68: one domain per branch, calling Helm from GitHub action
- https://github.com/huggingface/moon-landing/blob/main/.github/workflows/ephemeral-clean.yml: remove the objects once the PR is merged
- https://github.com/huggingface/moon-landing/blob/main/.github/workflows/docker.yml#L71 : clean the ephemeral objects after two days | closed | 2022-05-06T11:50:18Z | 2022-09-16T17:45:18Z | 2022-09-16T17:45:18Z | severo |
1,226,313,664 | No data for facebook/winoground | https://huggingface.co/datasets/facebook/winoground
<img width="801" alt="Capture d’écran 2022-05-05 à 09 51 02" src="https://user-images.githubusercontent.com/1676121/166882123-e81c580a-d990-48e6-a522-8a852c274660.png">
See also https://github.com/huggingface/datasets/issues/4149 | No data for facebook/winoground: https://huggingface.co/datasets/facebook/winoground
<img width="801" alt="Capture d’écran 2022-05-05 à 09 51 02" src="https://user-images.githubusercontent.com/1676121/166882123-e81c580a-d990-48e6-a522-8a852c274660.png">
See also https://github.com/huggingface/datasets/issues/4149 | closed | 2022-05-05T07:51:25Z | 2022-06-17T15:01:07Z | 2022-06-17T15:01:07Z | severo |
1,224,256,017 | Use kubernetes | See #223
This first PR only installs the API in the Kubernetes cluster. Other PRs will install 1. the workers and 2. the nginx reverse proxy | Use kubernetes: See #223
This first PR only installs the API in the Kubernetes cluster. Other PRs will install 1. the workers and 2. the nginx reverse proxy | closed | 2022-05-03T15:31:36Z | 2022-05-09T12:17:28Z | 2022-05-09T07:30:55Z | severo |
1,224,225,249 | Generate and publish an OpenAPI (swagger) doc for the API | - [x] create the OpenAPI spec: https://github.com/huggingface/datasets-server/pull/424
- [x] publish the OpenAPI spec as a static file: https://github.com/huggingface/datasets-server/pull/426 - it's here: https://datasets-server.huggingface.co/openapi.json
- [x] render the OpenAPI spec as an HTML page: no, better just use the ReDoc demo for now: https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json
- [x] link to it from the datasets-server docs: https://huggingface.co/docs/datasets-server/api_reference | Generate and publish an OpenAPI (swagger) doc for the API: - [x] create the OpenAPI spec: https://github.com/huggingface/datasets-server/pull/424
- [x] publish the OpenAPI spec as a static file: https://github.com/huggingface/datasets-server/pull/426 - it's here: https://datasets-server.huggingface.co/openapi.json
- [x] render the OpenAPI spec as an HTML page: no, better just use the ReDoc demo for now: https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json
- [x] link to it from the datasets-server docs: https://huggingface.co/docs/datasets-server/api_reference | closed | 2022-05-03T15:05:12Z | 2023-08-10T23:39:36Z | 2023-08-10T23:39:36Z | severo |
1,224,224,672 | Send metrics to Prometheus and show in Grafana | null | Send metrics to Prometheus and show in Grafana: | closed | 2022-05-03T15:04:43Z | 2022-05-16T15:29:25Z | 2022-05-16T15:29:24Z | severo |
1,224,224,316 | Create a web administration console? | The alternative is to use:
- metrics / grafana to know the current state of the server
- scripts, or an authenticated API, to launch commands (warm the cache, stop the started jobs, etc) | Create a web administration console?: The alternative is to use:
- metrics / grafana to know the current state of the server
- scripts, or an authenticated API, to launch commands (warm the cache, stop the started jobs, etc) | closed | 2022-05-03T15:04:24Z | 2022-09-19T08:56:45Z | 2022-09-19T08:56:45Z | severo |
1,224,220,280 | Use a kubernetes infrastructure | As done for https://github.com/huggingface/tensorboard-launcher/ and https://github.com/huggingface/moon-landing gitaly before.
- [x] API: see https://github.com/huggingface/datasets-server/pull/227
- [x] workers: see #236
- [x] nginx reverse proxy: see #234
- [x] deploy to the prod cluster: see #249
- [x] respond to `datasets-server.huggingface.tech`: see #222
- [x] add a webhook from moonlanding to the new datasets-server URL: asked on [Slack](https://huggingface.slack.com/archives/C023JAKTR2P/p1652790074627839)
- [x] warm the cache on all the public datasets - see the state of the cache here: https://grafana.huggingface.tech/d/SaHl2KX7z/datasets-server?orgId=1
- [x] update moonlanding: see https://github.com/huggingface/moon-landing/pull/2957
- [x] update the moonlanding proxy: see https://github.com/huggingface/conf/pull/166
- [x] update autotrain: no need, autonlp-ui accesses the server through [a proxy on moonlanding (https://huggingface.co/proxy-datasets-preview)](https://github.com/huggingface/autonlp-ui/blob/12a7fd84cdbf9c30900e3740f6b2bbbe38ae3699/src/lib/config.ts#L57-L58)
- [x] remove the old machine
- [x] remove the `datasets-preview-backend.huggingface.tech` domain
- [x] setup monitoring: see #250 and #279
- [x] remove old webhook: asked on [Slack](https://huggingface.slack.com/archives/C023JAKTR2P/p1654005947927899)
| Use a kubernetes infrastructure: As done for https://github.com/huggingface/tensorboard-launcher/ and https://github.com/huggingface/moon-landing gitaly before.
- [x] API: see https://github.com/huggingface/datasets-server/pull/227
- [x] workers: see #236
- [x] nginx reverse proxy: see #234
- [x] deploy to the prod cluster: see #249
- [x] respond to `datasets-server.huggingface.tech`: see #222
- [x] add a webhook from moonlanding to the new datasets-server URL: asked on [Slack](https://huggingface.slack.com/archives/C023JAKTR2P/p1652790074627839)
- [x] warm the cache on all the public datasets - see the state of the cache here: https://grafana.huggingface.tech/d/SaHl2KX7z/datasets-server?orgId=1
- [x] update moonlanding: see https://github.com/huggingface/moon-landing/pull/2957
- [x] update the moonlanding proxy: see https://github.com/huggingface/conf/pull/166
- [x] update autotrain: no need, autonlp-ui accesses the server through [a proxy on moonlanding (https://huggingface.co/proxy-datasets-preview)](https://github.com/huggingface/autonlp-ui/blob/12a7fd84cdbf9c30900e3740f6b2bbbe38ae3699/src/lib/config.ts#L57-L58)
- [x] remove the old machine
- [x] remove the `datasets-preview-backend.huggingface.tech` domain
- [x] setup monitoring: see #250 and #279
- [x] remove old webhook: asked on [Slack](https://huggingface.slack.com/archives/C023JAKTR2P/p1654005947927899)
| closed | 2022-05-03T15:01:08Z | 2022-05-31T14:12:29Z | 2022-05-31T14:12:06Z | severo |
1,224,217,160 | Change domain name to datasets-server.huggingface.tech | The current domain is `datasets-preview.huggingface.tech`.
| Change domain name to datasets-server.huggingface.tech: The current domain is `datasets-preview.huggingface.tech`.
| closed | 2022-05-03T14:58:27Z | 2022-05-16T15:08:01Z | 2022-05-16T15:08:01Z | severo |
1,224,215,179 | Rename to datasets server | null | Rename to datasets server: | closed | 2022-05-03T14:56:49Z | 2022-05-03T15:13:08Z | 2022-05-03T15:13:07Z | severo |
1,223,980,706 | Send docker images to ecr | null | Send docker images to ecr: | closed | 2022-05-03T11:37:55Z | 2022-05-03T14:04:46Z | 2022-05-03T14:04:20Z | severo |
1,223,866,573 | Save the docker images to Amazon ECR | ECR = Amazon Elastic Container Registry
See https://github.com/huggingface/datasets-preview-backend/blob/dockerize/.github/workflows/docker.yml by @XciD
```yml
name: '[ALL] Build docker images'
on:
  workflow_dispatch:
  push:
env:
  REGISTRY: 707930574880.dkr.ecr.us-east-1.amazonaws.com
  REGION: us-east-1
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Set outputs
        id: vars
        run: echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/hub-dataset-server
          tags: |
            type=raw,value=sha-${{ steps.vars.outputs.sha_short }}
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          build-args: COMMIT=${{ steps.vars.outputs.sha_short }}
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
``` | Save the docker images to Amazon ECR: ECR = Amazon Elastic Container Registry
See https://github.com/huggingface/datasets-preview-backend/blob/dockerize/.github/workflows/docker.yml by @XciD
```yml
name: '[ALL] Build docker images'
on:
  workflow_dispatch:
  push:
env:
  REGISTRY: 707930574880.dkr.ecr.us-east-1.amazonaws.com
  REGION: us-east-1
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Set outputs
        id: vars
        run: echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/hub-dataset-server
          tags: |
            type=raw,value=sha-${{ steps.vars.outputs.sha_short }}
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          build-args: COMMIT=${{ steps.vars.outputs.sha_short }}
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
``` | closed | 2022-05-03T09:25:35Z | 2022-05-03T14:04:57Z | 2022-05-03T14:04:57Z | severo |
1,223,864,930 | Optimize the size of the docker images, and reduce the time of the build | Also: check security aspects
Maybe see https://github.com/python-poetry/poetry/discussions/1879
Also, the code by @XciD in https://github.com/huggingface/datasets-preview-backend/tree/dockerize
```dockerfile
FROM python:3.9.6-bullseye
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git-lfs \
&& rm -rf /var/lib/apt/lists/* \
&& git lfs install
WORKDIR /workspace/app
# Install poetry
RUN pip install --upgrade pip
RUN pip install poetry==1.1.12
# Copy the deps
COPY poetry.lock pyproject.toml ./
ADD vendors ./vendors
RUN poetry install --no-dev
# Copy the app
COPY ./src ./src
RUN poetry install
RUN useradd -m hf -u 1000
RUN chown hf:hf /workspace/app
ENV PATH /home/hf/.local/bin:$PATH
USER hf
# Launch the app
CMD ["poetry", "run", "python", "src/tensorboard_launcher/main.py"]
EXPOSE 8080
``` | Optimize the size of the docker images, and reduce the time of the build: Also: check security aspects
Maybe see https://github.com/python-poetry/poetry/discussions/1879
Also, the code by @XciD in https://github.com/huggingface/datasets-preview-backend/tree/dockerize
```dockerfile
FROM python:3.9.6-bullseye
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git-lfs \
&& rm -rf /var/lib/apt/lists/* \
&& git lfs install
WORKDIR /workspace/app
# Install poetry
RUN pip install --upgrade pip
RUN pip install poetry==1.1.12
# Copy the deps
COPY poetry.lock pyproject.toml ./
ADD vendors ./vendors
RUN poetry install --no-dev
# Copy the app
COPY ./src ./src
RUN poetry install
RUN useradd -m hf -u 1000
RUN chown hf:hf /workspace/app
ENV PATH /home/hf/.local/bin:$PATH
USER hf
# Launch the app
CMD ["poetry", "run", "python", "src/tensorboard_launcher/main.py"]
EXPOSE 8080
``` | closed | 2022-05-03T09:23:52Z | 2022-09-19T08:52:52Z | 2022-09-19T08:52:52Z | severo |
1,223,153,197 | Obscure error "config is not in the list" when the dataset is empty or stalled? | > Hey ! Not sure what the error means for [imagenet-1k](https://huggingface.co/datasets/imagenet-1k) on the dataset preview
<img width="244" alt="image" src="https://user-images.githubusercontent.com/1676121/166296588-cad3f0b6-b424-498c-a8da-a23fb969380b.png">
See (private group) thread on slack: https://huggingface.slack.com/archives/C03E4V6RC5P/p1651511738246389
| Obscure error "config is not in the list" when the dataset is empty or stalled?: > Hey ! Not sure what the error means for [imagenet-1k](https://huggingface.co/datasets/imagenet-1k) on the dataset preview
<img width="244" alt="image" src="https://user-images.githubusercontent.com/1676121/166296588-cad3f0b6-b424-498c-a8da-a23fb969380b.png">
See (private group) thread on slack: https://huggingface.slack.com/archives/C03E4V6RC5P/p1651511738246389
| closed | 2022-05-02T17:37:45Z | 2022-05-16T15:29:59Z | 2022-05-16T15:29:58Z | severo |
1,223,075,040 | Docker | null | Docker: | closed | 2022-05-02T16:16:20Z | 2022-05-03T09:21:25Z | 2022-05-03T09:21:24Z | severo |
1,222,840,693 | Standalone viewer | Idea by @thomasw21.
> Quick question, is there a way to use the dataset viewer locally without having to push to the hub? Seems like a pretty handy thing to use to look at a local dataset in `datasets` format, typically images/videos/audios
See chat on Slack: https://huggingface.slack.com/archives/C02V51Q3800/p1651494992227199 | Standalone viewer: Idea by @thomasw21.
> Quick question, is there a way to use the dataset viewer locally without having to push to the hub? Seems like a pretty handy thing to use to look at a local dataset in `datasets` format, typically images/videos/audios
See chat on Slack: https://huggingface.slack.com/archives/C02V51Q3800/p1651494992227199 | closed | 2022-05-02T12:42:28Z | 2022-09-16T19:59:03Z | 2022-09-16T19:59:03Z | severo |
1,222,831,268 | Dockerize | null | Dockerize: | closed | 2022-05-02T12:32:11Z | 2022-05-03T09:21:56Z | 2022-05-03T09:21:56Z | severo |
1,216,213,431 | Cache the result even if the request to the API has been canceled | https://huggingface.co/datasets/MLCommons/ml_spoken_words
<img width="1145" alt="Capture d’écran 2022-04-26 à 18 38 37" src="https://user-images.githubusercontent.com/1676121/165349982-0697ec8a-78df-4955-9a9a-2f9dd984dbe4.png">
See https://huggingface.slack.com/archives/C01GSG1QFPF/p1650989872783289, reported by @polinaeterna | Cache the result even if the request to the API has been canceled: https://huggingface.co/datasets/MLCommons/ml_spoken_words
<img width="1145" alt="Capture d’écran 2022-04-26 à 18 38 37" src="https://user-images.githubusercontent.com/1676121/165349982-0697ec8a-78df-4955-9a9a-2f9dd984dbe4.png">
See https://huggingface.slack.com/archives/C01GSG1QFPF/p1650989872783289, reported by @polinaeterna | closed | 2022-04-26T16:38:32Z | 2022-09-16T19:59:51Z | 2022-09-16T19:59:50Z | severo |
1,215,830,749 | split the code and move to a monorepo | Fixes #203 | split the code and move to a monorepo: Fixes #203 | closed | 2022-04-26T11:41:58Z | 2022-04-29T18:34:22Z | 2022-04-29T18:34:21Z | severo |
1,204,505,255 | 188 upgrade datasets | null | 188 upgrade datasets: | closed | 2022-04-14T13:05:11Z | 2022-04-14T13:21:17Z | 2022-04-14T13:21:16Z | severo |
1,204,441,597 | Skip local configs in mixed local+downloadable datasets | Some benchmark-type datasets can have non-downloadable configs, e.g. when one of its subsets is a closed-license dataset. One of such datasets is BABEL, as part of a larger benchmark collection XTREME-S: https://huggingface.co/datasets/google/xtreme_s
Here the preview backend is trying to load the `babel.<lang>` configs that require local files:
```
Status code: 400
Exception: FileNotFoundError
Message: You are trying to load the 'babel.as' speech recognition dataset. It is required that you manually download the input speech data. Manual download instructions: Please make sure to get access and download the following dataset LDC2016S06 from https://catalog.ldc.upenn.edu/LDC2016S06.
Once downloaded make sure that you pass the path to the downloaded file IARPA_BABEL_OP1_102_LDC2016S06.zip as a manual downloaded dataset:
`load_dataset("google/xtreme-s", "babel.as", data_dir='path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip')`.
```
**Expected**:
Skip the `babel.<lang>` configs as they require `data_dir`, and load the rest of the configurations (`fleurs.<lang>`, `mls.<lang>`, `minds14.<lang>`...) | Skip local configs in mixed local+downloadable datasets: Some benchmark-type datasets can have non-downloadable configs, e.g. when one of its subsets is a closed-license dataset. One of such datasets is BABEL, as part of a larger benchmark collection XTREME-S: https://huggingface.co/datasets/google/xtreme_s
Here the preview backend is trying to load the `babel.<lang>` configs that require local files:
```
Status code: 400
Exception: FileNotFoundError
Message: You are trying to load the 'babel.as' speech recognition dataset. It is required that you manually download the input speech data. Manual download instructions: Please make sure to get access and download the following dataset LDC2016S06 from https://catalog.ldc.upenn.edu/LDC2016S06.
Once downloaded make sure that you pass the path to the downloaded file IARPA_BABEL_OP1_102_LDC2016S06.zip as a manual downloaded dataset:
`load_dataset("google/xtreme-s", "babel.as", data_dir='path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip')`.
```
**Expected**:
Skip the `babel.<lang>` configs as they require `data_dir`, and load the rest of the configurations (`fleurs.<lang>`, `mls.<lang>`, `minds14.<lang>`...) | closed | 2022-04-14T12:02:20Z | 2023-03-28T09:19:28Z | 2023-03-28T09:19:27Z | anton-l |
1,201,624,182 | fix: 🐛 allow streaming=False in get_rows | it fixes #206. | fix: 🐛 allow streaming=False in get_rows: it fixes #206. | closed | 2022-04-12T10:26:21Z | 2022-04-12T11:44:24Z | 2022-04-12T11:44:23Z | severo |
1,201,622,590 | regression: fallback if streaming fails is disabled | Causes https://github.com/huggingface/datasets/issues/3185 for example: the fallback should have loaded the dataset in normal mode. | regression: fallback if streaming fails is disabled: Causes https://github.com/huggingface/datasets/issues/3185 for example: the fallback should have loaded the dataset in normal mode. | closed | 2022-04-12T10:25:21Z | 2022-04-12T11:44:23Z | 2022-04-12T11:44:23Z | severo |
1,200,462,131 | Add indexes to the mongo databases? | The database is somewhat big, with 1664598 elements in the `rows` collection.
Note: the `rows` collection will disappear in https://github.com/huggingface/datasets-preview-backend/pull/202, but still: we should keep an eye on the sizes and the performance. | Add indexes to the mongo databases?: The database is somewhat big, with 1664598 elements in the `rows` collection.
Note: the `rows` collection will disappear in https://github.com/huggingface/datasets-preview-backend/pull/202, but still: we should keep an eye on the sizes and the performance. | closed | 2022-04-11T19:53:46Z | 2022-05-23T12:57:09Z | 2022-05-23T12:57:09Z | severo |
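If indexes are added, mongoengine lets a collection declare them in its `meta`; a sketch with illustrative field names (only the `DbSplit` class name appears elsewhere in these issues):

```python
from mongoengine import Document, StringField


class DbSplit(Document):
    dataset_name = StringField(required=True)
    config_name = StringField(required=True)
    split_name = StringField(required=True)

    meta = {
        # compound index matching the usual (dataset, config, split) lookup
        "indexes": [("dataset_name", "config_name", "split_name")],
    }
```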
1,197,473,090 | Reduce the size of the endpoint responses? | Currently, the data contains a lot of redundancy, for example every row of the `/rows` response contains three fields for the dataset, config and split, and their value is the same for all the rows. It comes from a previous version in which we were able to request rows for several configs or splits at the same time.
Changing the format would require changing the moon-landing client. | Reduce the size of the endpoint responses?: Currently, the data contains a lot of redundancy, for example every row of the `/rows` response contains three fields for the dataset, config and split, and their value is the same for all the rows. It comes from a previous version in which we were able to request rows for several configs or splits at the same time.
Changing the format would require changing the moon-landing client. | closed | 2022-04-08T15:31:35Z | 2022-08-24T18:03:38Z | 2022-08-24T18:03:38Z | severo |
1,197,469,898 | Refactor the code | - `models/` use classes instead of functions
- split the code more clearly between the app and the worker (in particular: two config files). Possibly, three directories: common or database, worker (or dataset_worker, split_worker), app. It will make it easier to start https://github.com/huggingface/datasets-server/ | Refactor the code: - `models/` use classes instead of functions
- split the code more clearly between the app and the worker (in particular: two config files). Possibly, three directories: common or database, worker (or dataset_worker, split_worker), app. It will make it easier to start https://github.com/huggingface/datasets-server/ | closed | 2022-04-08T15:28:36Z | 2022-04-29T18:34:21Z | 2022-04-29T18:34:21Z | severo |
1,197,461,728 | Simplify cache by dropping two collections | instead of keeping a large collection of rows and columns, then compute
the response on every endpoint call, possibly truncating the response,
we now pre-compute the response and store it in the cache. We lose the
ability to get the original data, but we don't need it. It fixes https://github.com/huggingface/datasets-preview-backend/issues/197.
See
https://github.com/huggingface/datasets-preview-backend/issues/197#issuecomment-1092700076.
BREAKING CHANGE: 🧨 the cache database structure has been modified. Run
20220408_cache_remove_dbrow_dbcolumn.py to migrate the database. | Simplify cache by dropping two collections: instead of keeping a large collection of rows and columns, then compute
the response on every endpoint call, possibly truncating the response,
we now pre-compute the response and store it in the cache. We lose the
ability to get the original data, but we don't need it. It fixes https://github.com/huggingface/datasets-preview-backend/issues/197.
See
https://github.com/huggingface/datasets-preview-backend/issues/197#issuecomment-1092700076.
BREAKING CHANGE: 🧨 the cache database structure has been modified. Run
20220408_cache_remove_dbrow_dbcolumn.py to migrate the database. | closed | 2022-04-08T15:21:23Z | 2022-04-12T08:15:55Z | 2022-04-12T08:15:54Z | severo |
1,194,824,190 | [BREAKING] fix: 🐛 quick fix to avoid mongodb errors with big rows | if a row is too big to be inserted in the cache database, we just store
the empty string for each of its cells, and mark it as erroneous. All
the cells are marked as truncated in the /rows endpoints. See
https://github.com/huggingface/datasets-preview-backend/issues/197.
This commit also contains the first migration script, with instructions
to apply and write them, fixing #200
BREAKING: the cache database structure has changed. Apply the migration
script: 20220406_cache_dbrow_status_and_since.py | [BREAKING] fix: 🐛 quick fix to avoid mongodb errors with big rows: if a row is too big to be inserted in the cache database, we just store
the empty string for each of its cells, and mark it as erroneous. All
the cells are marked as truncated in the /rows endpoints. See
https://github.com/huggingface/datasets-preview-backend/issues/197.
This commit also contains the first migration script, with instructions
to apply and write them, fixing #200
BREAKING: the cache database structure has changed. Apply the migration
script: 20220406_cache_dbrow_status_and_since.py | closed | 2022-04-06T16:09:56Z | 2022-04-07T08:11:14Z | 2022-04-07T08:11:14Z | severo |
1,194,268,131 | Setup a "migrations" mechanism to upgrade/downgrade the databases | It would allow migrating the data when the structure of the database (queue or cache) changes. Until now, we just emptied the data and recomputed it every time. | Setup a "migrations" mechanism to upgrade/downgrade the databases: It would allow migrating the data when the structure of the database (queue or cache) changes. Until now, we just emptied the data and recomputed it every time. | closed | 2022-04-06T08:35:54Z | 2022-04-07T08:11:51Z | 2022-04-07T08:11:50Z | severo |
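One way such a mechanism could look, assuming a dedicated `migrations` collection that records which scripts have already been applied (names and structure are illustrative):

```python
from datetime import datetime, timezone

from pymongo import MongoClient


def apply_pending_migrations(mongo_url, migrations):
    """Run each (name, migrate) pair exactly once, recording applied names in the database."""
    db = MongoClient(mongo_url).get_default_database()  # the URL must include a database name
    applied = {doc["name"] for doc in db.migrations.find({}, {"name": 1})}
    for name, migrate in migrations:
        if name in applied:
            continue
        migrate(db)  # e.g. the 20220406_cache_dbrow_status_and_since script
        db.migrations.insert_one({"name": name, "applied_at": datetime.now(timezone.utc)})
```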
1,193,524,491 | Replace real world datasets with fake in the tests | It would be better for unit tests to remove the dependency on external datasets, and use fake datasets instead.
For e2e tests, we could have both:
- fake datasets, but hosted on the hub
- real datasets | Replace real world datasets with fake in the tests: It would be better for unit tests to remove the dependency on external datasets, and use fake datasets instead.
For e2e tests, we could have both:
- fake datasets, but hosted on the hub
- real datasets | closed | 2022-04-05T17:47:15Z | 2022-08-24T16:26:28Z | 2022-08-24T16:26:28Z | severo |
1,192,897,333 | Fix detection of pending jobs | It allows showing a message like `The dataset is being processed. Retry later.` or `The split is being processed. Retry later.` | Fix detection of pending jobs: It allows showing a message like `The dataset is being processed. Retry later.` or `The split is being processed. Retry later.` | closed | 2022-04-05T09:49:47Z | 2022-04-05T17:17:28Z | 2022-04-05T17:17:27Z | severo |
1,192,862,488 | Row too big to be stored in cache | Seen with https://huggingface.co/datasets/elena-soare/crawled-ecommerce
at least one row is too big (~90MB) to be stored in the MongoDB cache:
```
pymongo.errors.DocumentTooLarge: BSON document too large (90954494 bytes) - the connected server supports BSON document sizes up to 16793598 bytes.
```
We should try/except the code here with the `pymongo.errors.DocumentTooLarge` exception:
https://github.com/huggingface/datasets-preview-backend/blob/a77eecca77e2f059ad246aad26fbcb5299ec0a2a/src/datasets_preview_backend/io/cache.py#L320-L326 | Row too big to be stored in cache: Seen with https://huggingface.co/datasets/elena-soare/crawled-ecommerce
at least one row is too big (~90MB) to be stored in the MongoDB cache:
```
pymongo.errors.DocumentTooLarge: BSON document too large (90954494 bytes) - the connected server supports BSON document sizes up to 16793598 bytes.
```
We should try/except the code here with the `pymongo.errors.DocumentTooLarge` exception:
https://github.com/huggingface/datasets-preview-backend/blob/a77eecca77e2f059ad246aad26fbcb5299ec0a2a/src/datasets_preview_backend/io/cache.py#L320-L326 | closed | 2022-04-05T09:19:36Z | 2022-04-12T09:24:07Z | 2022-04-12T08:15:54Z | severo |
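A sketch of the suggested try/except, with a hypothetical `save_fn` helper since the relevant `cache.py` code is only linked, not shown, here:

```python
from pymongo.errors import DocumentTooLarge


def save_row(save_fn, row):
    """Try to store a row; if it exceeds MongoDB's BSON size limit, store emptied cells instead."""
    try:
        save_fn(row, error=False)
    except DocumentTooLarge:
        # fall back to an empty string per cell, and mark the row as erroneous/truncated
        save_fn({key: "" for key in row}, error=True)
```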
1,192,726,002 | Error: "DbSplit matching query does not exist" | https://github.com/huggingface/datasets/issues/4093
https://huggingface.co/datasets/elena-soare/crawled-ecommerce
```
Server error
Status code: 400
Exception: DoesNotExist
Message: DbSplit matching query does not exist.
```
Introduced by #193? _edit:_ no | Error: "DbSplit matching query does not exist": https://github.com/huggingface/datasets/issues/4093
https://huggingface.co/datasets/elena-soare/crawled-ecommerce
```
Server error
Status code: 400
Exception: DoesNotExist
Message: DbSplit matching query does not exist.
```
Introduced by #193? _edit:_ no | closed | 2022-04-05T07:13:32Z | 2022-04-07T08:16:07Z | 2022-04-07T08:15:43Z | severo |
1,192,089,374 | feat: 🎸 install libsndfile 1.0.30 and support opus files | fixes https://github.com/huggingface/datasets-preview-backend/issues/194 | feat: 🎸 install libsndfile 1.0.30 and support opus files: fixes https://github.com/huggingface/datasets-preview-backend/issues/194 | closed | 2022-04-04T17:23:26Z | 2022-04-07T10:29:10Z | 2022-04-04T17:41:37Z | severo |
1,192,056,493 | support opus files | https://huggingface.co/datasets/polinaeterna/ml_spoken_words
```
Message: Decoding .opus files requires 'libsndfile'>=1.0.30, it can be installed via conda: `conda install -c conda-forge libsndfile>=1.0.30`
```
The current version of libsndfile on Ubuntu stable (20.04) is 1.0.28, and we need version 1.0.30.
Various ways to get it:
- build it
- add a repo to a further version of ubuntu, and use apt pinning to install it
- wait for https://releases.ubuntu.com/jammy/ and upgrade the whole OS to the next stable release
| support opus files: https://huggingface.co/datasets/polinaeterna/ml_spoken_words
```
Message: Decoding .opus files requires 'libsndfile'>=1.0.30, it can be installed via conda: `conda install -c conda-forge libsndfile>=1.0.30`
```
The current version of libsndfile on Ubuntu stable (20.04) is 1.0.28, and we need version 1.0.30.
Various ways to get it:
- build it
- add a repo to a further version of ubuntu, and use apt pinning to install it
- wait for https://releases.ubuntu.com/jammy/ and upgrade the whole OS to the next stable release
| closed | 2022-04-04T16:51:06Z | 2022-04-04T17:41:37Z | 2022-04-04T17:41:37Z | severo |
1,191,999,920 | give reason in error if dataset/split cache is refreshing | fixes #186 | give reason in error if dataset/split cache is refreshing: fixes #186 | closed | 2022-04-04T15:59:48Z | 2022-04-04T16:17:17Z | 2022-04-04T16:17:16Z | severo |
1,191,841,017 | test: 💍 re-enable tests for temporarily disabled datasets | And: disable tests on common_voice | test: 💍 re-enable tests for temporarily disabled datasets: And: disable tests on common_voice | closed | 2022-04-04T14:01:31Z | 2022-04-04T14:24:40Z | 2022-04-04T14:24:39Z | severo |
1,191,519,057 | Error with RGBA images | https://huggingface.co/datasets/huggan/few-shot-skulls
```
Status code: 500
Exception: Status500Error
Message: cannot write mode RGBA as JPEG
```
reported by @NielsRogge
| Error with RGBA images: https://huggingface.co/datasets/huggan/few-shot-skulls
```
Status code: 500
Exception: Status500Error
Message: cannot write mode RGBA as JPEG
```
reported by @NielsRogge
| closed | 2022-04-04T09:38:52Z | 2022-06-21T16:46:11Z | 2022-06-21T16:24:53Z | severo |
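A minimal sketch of the kind of conversion that avoids this Pillow error; the helper name and the choice of JPEG are illustrative, since RGBA images must either be flattened to RGB or saved in a format with alpha support such as PNG:
```
from io import BytesIO
from PIL import Image

def image_to_jpeg_bytes(image: Image.Image) -> bytes:
    # JPEG cannot store an alpha channel, so convert RGBA (and palette/LA) images first
    if image.mode not in ("RGB", "L"):
        image = image.convert("RGB")
    buffer = BytesIO()
    image.save(buffer, format="JPEG")
    return buffer.getvalue()
```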
1,187,825,497 | Cache is not refreshed when a dataset is moved (renamed)? | See https://huggingface.slack.com/archives/C01BWJU0YKW/p1648720173531679?thread_ts=1648719150.059249&cid=C01BWJU0YKW
The dataset https://huggingface.co/datasets/huggan/horse2zebra-aligned has been renamed https://huggingface.co/datasets/huggan/horse2zebra (on 2022/03/31). The [preview](https://huggingface.co/datasets/huggan/horse2zebra/viewer) shows the following error:
```
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
and the dataset is not in the refresh queue (https://huggingface.co/proxy-datasets-preview/queue-dump-waiting-started), which means that it will not be fixed on its own.
reported by @NielsRogge
| Cache is not refreshed when a dataset is moved (renamed)?: See https://huggingface.slack.com/archives/C01BWJU0YKW/p1648720173531679?thread_ts=1648719150.059249&cid=C01BWJU0YKW
The dataset https://huggingface.co/datasets/huggan/horse2zebra-aligned has been renamed https://huggingface.co/datasets/huggan/horse2zebra (on 2022/03/31). The [preview](https://huggingface.co/datasets/huggan/horse2zebra/viewer) shows the following error:
```
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
and the dataset is not in the refresh queue (https://huggingface.co/proxy-datasets-preview/queue-dump-waiting-started), which means that it will not be fixed on its own.
reported by @NielsRogge
| closed | 2022-03-31T09:56:01Z | 2023-09-25T12:11:35Z | 2022-09-19T09:38:40Z | severo |
1,186,577,966 | remove "gated datasets unlock" logic | the new tokens are not related to a user and have read access to the gated datasets (right, @SBrandeis?).
also: add two tests to ensure the gated datasets can be accessed | remove "gated datasets unlock" logic: the new tokens are not related to a user and have read access to the gated datasets (right, @SBrandeis?).
also: add two tests to ensure the gated datasets can be accessed | closed | 2022-03-30T14:48:39Z | 2022-04-01T16:39:24Z | 2022-04-01T16:39:23Z | severo |
1,224,197,146 | Access images through an IIIF Image API | > The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL:
>
> ```
> {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
> ```
>
> A concrete example of this:
>
> ```
> https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg
> ```
See https://github.com/huggingface/datasets/issues/4041 | Access images through an IIIF Image API: > The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL:
>
> ```
> {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
> ```
>
> A concrete example of this:
>
> ```
> https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg
> ```
See https://github.com/huggingface/datasets/issues/4041 | closed | 2022-03-30T12:59:02Z | 2022-09-16T20:03:54Z | 2022-09-16T20:03:54Z | severo |
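To make the URL template above concrete, here is a small illustrative helper (not part of any existing codebase) that rebuilds the Stanford example quoted in the issue:
```
def iiif_image_url(
    server: str,
    identifier: str,
    *,
    prefix: str = "",
    region: str = "full",
    size: str = "full",
    rotation: str = "0",
    quality: str = "default",
    fmt: str = "jpg",
    scheme: str = "https",
) -> str:
    # {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    prefix_part = f"/{prefix}" if prefix else ""
    return f"{scheme}://{server}{prefix_part}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# reproduces the concrete example quoted above
print(iiif_image_url("stacks.stanford.edu", "hg676jb4964%2F0380_796-44", prefix="image/iiif"))
# https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg
```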
1,180,823,959 | Upgrade datasets to fix the issue with common_voice | See https://github.com/huggingface/datasets-preview-backend/runs/5690421290?check_suite_focus=true
Upgrading to the next release (next week) should fix the CI and the common_voice dataset viewer | Upgrade datasets to fix the issue with common_voice: See https://github.com/huggingface/datasets-preview-backend/runs/5690421290?check_suite_focus=true
Upgrading to the next release (next week) should fix the CI and the common_voice dataset viewer | closed | 2022-03-25T13:54:24Z | 2022-04-14T13:21:56Z | 2022-04-14T13:21:55Z | severo |
1,180,613,607 | Update blocked datasets | null | Update blocked datasets: | closed | 2022-03-25T10:28:42Z | 2022-03-25T13:55:00Z | 2022-03-25T13:54:59Z | severo |
1,224,197,612 | Avoid being blocked by Google (and other providers) | See https://github.com/huggingface/datasets/issues/4005#issuecomment-1077897284 or https://github.com/huggingface/datasets/pull/3979#issuecomment-1077838956 | Avoid being blocked by Google (and other providers): See https://github.com/huggingface/datasets/issues/4005#issuecomment-1077897284 or https://github.com/huggingface/datasets/pull/3979#issuecomment-1077838956 | closed | 2022-03-25T10:17:30Z | 2022-06-17T12:55:22Z | 2022-06-17T12:55:22Z | severo |
1,179,775,656 | Show a better error message when the cache is refreshing | For example, in https://github.com/huggingface/datasets-preview-backend/issues/185, the message was "No data" which is misleading | Show a better error message when the cache is refreshing: For example, in https://github.com/huggingface/datasets-preview-backend/issues/185, the message was "No data" which is misleading | closed | 2022-03-24T16:49:38Z | 2022-04-12T09:43:28Z | 2022-04-12T09:43:28Z | severo |
1,175,617,103 | No data on nielsr/CelebA-faces | https://huggingface.co/datasets/nielsr/CelebA-faces
<img width="1056" alt="Capture d’écran 2022-03-21 à 17 12 55" src="https://user-images.githubusercontent.com/1676121/159303539-b3495de4-4308-477a-a1bd-fb65ee598933.png">
reported by @NielsRogge
Possibly because the dataset is still in the job queue (https://datasets-preview.huggingface.tech/queue - https://huggingface.co/proxy-datasets-preview/queue-dump-waiting-started).
We should improve the error message in that case | No data on nielsr/CelebA-faces: https://huggingface.co/datasets/nielsr/CelebA-faces
<img width="1056" alt="Capture d’écran 2022-03-21 à 17 12 55" src="https://user-images.githubusercontent.com/1676121/159303539-b3495de4-4308-477a-a1bd-fb65ee598933.png">
reported by @NielsRogge
Possibly because the dataset is still in the job queue (https://datasets-preview.huggingface.tech/queue - https://huggingface.co/proxy-datasets-preview/queue-dump-waiting-started).
We should improve the error message in that case | closed | 2022-03-21T16:14:16Z | 2022-03-24T16:48:38Z | 2022-03-24T16:48:37Z | severo |
1,175,518,304 | Show the images in cgarciae/cartoonset | https://huggingface.co/datasets/cgarciae/cartoonset
<img width="759" alt="Capture d’écran 2022-03-21 à 16 03 40" src="https://user-images.githubusercontent.com/1676121/159289733-5c8d0bd9-15a5-472e-8446-1f16f7d6eaf0.png">
reported by @thomwolf at https://huggingface.slack.com/archives/C02V51Q3800/p1647874958192989 | Show the images in cgarciae/cartoonset: https://huggingface.co/datasets/cgarciae/cartoonset
<img width="759" alt="Capture d’écran 2022-03-21 à 16 03 40" src="https://user-images.githubusercontent.com/1676121/159289733-5c8d0bd9-15a5-472e-8446-1f16f7d6eaf0.png">
reported by @thomwolf at https://huggingface.slack.com/archives/C02V51Q3800/p1647874958192989 | closed | 2022-03-21T15:04:08Z | 2022-03-24T16:47:21Z | 2022-03-24T16:47:20Z | severo |
1,172,248,584 | Issue with access to API for gated datasets? | https://github.com/huggingface/datasets/issues/3954
https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1
```
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1?full=true
``` | Issue with access to API for gated datasets?: https://github.com/huggingface/datasets/issues/3954
https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1
```
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1?full=true
``` | closed | 2022-03-17T11:18:52Z | 2022-04-04T16:24:29Z | 2022-04-04T16:24:29Z | severo |
1,170,866,049 | feat: 🎸 upgrade to datasets 2.0.0 | fixes #181 | feat: 🎸 upgrade to datasets 2.0.0: fixes #181 | closed | 2022-03-16T11:03:04Z | 2022-03-16T11:10:24Z | 2022-03-16T11:10:23Z | severo |
1,170,814,904 | Upgrade datasets to 2.0.0 | https://github.com/huggingface/datasets/releases/tag/2.0.0
In particular, it should fix up to 50 datasets hosted on Google Drive thanks to https://github.com/huggingface/datasets/pull/3843 | Upgrade datasets to 2.0.0: https://github.com/huggingface/datasets/releases/tag/2.0.0
In particular, it should fix up to 50 datasets hosted on Google Drive thanks to https://github.com/huggingface/datasets/pull/3843 | closed | 2022-03-16T10:20:49Z | 2022-03-16T11:10:23Z | 2022-03-16T11:10:23Z | severo |
1,170,804,202 | Issue when transferring a dataset from a user to an org? | https://huggingface.slack.com/archives/C01229B19EX/p1647372809022069
> When transferring a dataset from a user to an org, the datasets viewer no longer worked. What’s the issue here?
reported by @ktangri
| Issue when transferring a dataset from a user to an org?: https://huggingface.slack.com/archives/C01229B19EX/p1647372809022069
> When transferring a dataset from a user to an org, the datasets viewer no longer worked. What’s the issue here?
reported by @ktangri
| closed | 2022-03-16T10:11:28Z | 2022-03-21T08:41:34Z | 2022-03-21T08:41:34Z | severo |
1,168,468,181 | feat: 🎸 revert double limit on the rows size (reverts #162) | null | feat: 🎸 revert double limit on the rows size (reverts #162): | closed | 2022-03-14T14:32:50Z | 2022-03-14T14:32:56Z | 2022-03-14T14:32:55Z | severo |
1,168,433,474 | feat: 🎸 truncate cell contents instead of removing rows | Add a ROWS_MIN_NUMBER environment variable, which defines how many rows
should be returned as a minimum. If the size of these rows is greater
than the ROWS_MAX_BYTES limit, then the cells themselves are truncated
(transformed to strings, then truncated to 100 bytes, which is a
hardcoded limit). In that case, the new field "truncated_cells" contains
the list of cells (column names) that are truncated.
BREAKING CHANGE: 🧨 The /rows response format has changed | feat: 🎸 truncate cell contents instead of removing rows: Add a ROWS_MIN_NUMBER environment variable, which defines how many rows
should be returned as a minimum. If the size of these rows is greater
than the ROWS_MAX_BYTES limit, then the cells themselves are truncated
(transformed to strings, then truncated to 100 bytes, which is a
hardcoded limit). In that case, the new field "truncated_cells" contains
the list of cells (column names) that are truncated.
BREAKING CHANGE: 🧨 The /rows response format has changed | closed | 2022-03-14T14:09:33Z | 2022-03-14T14:13:38Z | 2022-03-14T14:13:37Z | severo |
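A hedged sketch of the truncation behaviour described in this changelog entry; the 100-byte constant and the `truncated_cells` field come from the description, while the helper name, the JSON serialization, and the per-cell logic are assumptions, not the actual code:
```
import json

CELL_MAX_BYTES = 100  # the hardcoded per-cell limit mentioned above

def truncate_row(row: dict) -> dict:
    # hypothetical helper, assuming JSON-serializable cells: oversized cells are
    # serialized to strings and cut to CELL_MAX_BYTES bytes; their column names
    # are listed in "truncated_cells" so clients know which values are partial
    truncated_cells = []
    cells = {}
    for column, cell in row.items():
        serialized = json.dumps(cell, default=str)
        if len(serialized.encode("utf-8")) > CELL_MAX_BYTES:
            cells[column] = serialized.encode("utf-8")[:CELL_MAX_BYTES].decode("utf-8", "ignore")
            truncated_cells.append(column)
        else:
            cells[column] = cell
    return {"row": cells, "truncated_cells": truncated_cells}
```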
1,166,626,189 | No data for cnn_dailymail | https://huggingface.co/datasets/cnn_dailymail returns "No data" | No data for cnn_dailymail: https://huggingface.co/datasets/cnn_dailymail returns "No data" | closed | 2022-03-11T16:36:16Z | 2022-04-25T11:41:49Z | 2022-04-25T11:41:49Z | severo |
1,165,265,890 | Truncate the row cells to a maximum size | As the purpose of this backend is to serve 100 rows of every dataset split to moonlanding, to show them inside a table, we can optimize a bit for that purpose.
In particular, when the 100 rows are really big, browsers (Safari especially) have a hard time rendering the table. Also: this can generate a lot of traffic, just to show a table that nobody will inspect in detail.
So: the proposal here is to limit the size of every cell returned in the `/rows` endpoint. The size can be specified as the number of bytes of the generated JSON. We will also add a flag to tell if a cell has been truncated.
To keep it simple, we will make it mandatory, but a further version might make it optional through a URL parameter or another endpoint.
- [ ] check if it fixes https://github.com/huggingface/datasets-preview-backend/issues/177 | Truncate the row cells to a maximum size: As the purpose of this backend is to serve 100 rows of every dataset split to moonlanding, to show them inside a table, we can optimize a bit for that purpose.
In particular, when the 100 rows are really big, browsers (Safari especially) have a hard time rendering the table. Also: this can generate a lot of traffic, just to show a table that nobody will inspect in detail.
So: the proposal here is to limit the size of every cell returned in the `/rows` endpoint. The size can be specified as the number of bytes of the generated JSON. We will also add a flag to tell if a cell has been truncated.
To keep it simple, we will make it mandatory, but a further version might make it optional through a URL parameter or another endpoint.
- [ ] check if it fixes https://github.com/huggingface/datasets-preview-backend/issues/177 | closed | 2022-03-10T14:01:53Z | 2022-03-14T15:33:42Z | 2022-03-14T15:33:41Z | severo |
1,161,755,421 | Fix ci | null | Fix ci: | closed | 2022-03-07T18:08:48Z | 2022-03-07T20:15:48Z | 2022-03-07T20:15:47Z | severo |
1,161,698,359 | feat: 🎸 upgrade datasets to 1.18.4 | see https://github.com/huggingface/datasets/releases/tag/1.18.4 | feat: 🎸 upgrade datasets to 1.18.4: see https://github.com/huggingface/datasets/releases/tag/1.18.4 | closed | 2022-03-07T17:16:26Z | 2022-03-07T17:26:20Z | 2022-03-07T17:16:30Z | severo |
1,159,490,859 | Issue with mongo | https://huggingface.co/datasets/circa
```
Message: Tried to save duplicate unique keys (E11000 duplicate key error collection: datasets_preview_cache.rows index: dataset_name_1_config_name_1_split_name_1_row_idx_1 dup key: { dataset_name: "circa", config_name: "default", split_name: "train", row_idx: 0 }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'dataset_name': 1, 'config_name': 1, 'split_name': 1, 'row_idx': 1}, 'keyValue': {'dataset_name': 'circa', 'config_name': 'default', 'split_name': 'train', 'row_idx': 0}, 'errmsg': 'E11000 duplicate key error collection: datasets_preview_cache.rows index: dataset_name_1_config_name_1_split_name_1_row_idx_1 dup key: { dataset_name: "circa", config_name: "default", split_name: "train", row_idx: 0 }'})
```
| Issue with mongo: https://huggingface.co/datasets/circa
```
Message: Tried to save duplicate unique keys (E11000 duplicate key error collection: datasets_preview_cache.rows index: dataset_name_1_config_name_1_split_name_1_row_idx_1 dup key: { dataset_name: "circa", config_name: "default", split_name: "train", row_idx: 0 }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'dataset_name': 1, 'config_name': 1, 'split_name': 1, 'row_idx': 1}, 'keyValue': {'dataset_name': 'circa', 'config_name': 'default', 'split_name': 'train', 'row_idx': 0}, 'errmsg': 'E11000 duplicate key error collection: datasets_preview_cache.rows index: dataset_name_1_config_name_1_split_name_1_row_idx_1 dup key: { dataset_name: "circa", config_name: "default", split_name: "train", row_idx: 0 }'})
```
| closed | 2022-03-04T10:29:44Z | 2022-03-07T09:23:55Z | 2022-03-07T09:23:55Z | severo |
1,159,457,524 | Support segmented images | Issue proposed by @NielsRogge
eg https://huggingface.co/datasets/scene_parse_150
<img width="785" alt="Capture d’écran 2022-03-04 à 10 56 05" src="https://user-images.githubusercontent.com/1676121/156741519-fbae6844-2606-4c28-837e-279d83d00865.png">
Every pixel in the images of the `annotation` column has a value that gives its class, and the dataset itself is associated with a color palette (eg https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class to a color.
So we might want to render the image as a colored image instead of a black and white one.
cc @mariosasko: how could I get the color palette information after `load_dataset()`? Is it managed by the `datasets` library?
| Support segmented images: Issue proposed by @NielsRogge
eg https://huggingface.co/datasets/scene_parse_150
<img width="785" alt="Capture d’écran 2022-03-04 à 10 56 05" src="https://user-images.githubusercontent.com/1676121/156741519-fbae6844-2606-4c28-837e-279d83d00865.png">
Every pixel in the images of the `annotation` column has a value that gives its class, and the dataset itself is associated with a color palette (eg https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class to a color.
So we might want to render the image as a colored image instead of a black and white one.
cc @mariosasko: how could I get the color palette information after `load_dataset()`? Is it managed by the `datasets` library?
| open | 2022-03-04T09:55:50Z | 2024-06-19T14:00:42Z | null | severo |
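One possible rendering approach, sketched under the assumption that the `annotation` column is a single-channel 8-bit class-index image and that a per-dataset palette (like the linked ADE20K one) is available; the colors below are placeholders, not the real palette:
```
from PIL import Image

# placeholder palette: class index -> RGB color (the real mapping would come from
# the dataset, e.g. the mmsegmentation ADE20K palette linked above)
PALETTE = [(0, 0, 0), (120, 120, 120), (180, 120, 120), (6, 230, 230), (80, 50, 50)]

def colorize_annotation(annotation: Image.Image) -> Image.Image:
    # attach the palette to the class-index image and expand it to RGB for display
    indexed = annotation.copy()
    flat = [channel for color in PALETTE for channel in color]
    indexed.putpalette(flat + [0] * (768 - len(flat)))
    return indexed.convert("RGB")
```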
1,154,130,333 | Warm the nginx cache after every dataset update? | Currently, when the list of splits, or the list of rows of a split, is updated, only the cache (MongoDB) is updated. But the nginx cache either a) still has the old value until it expires, or b) is still empty until the endpoint is requested for that dataset.
Case a) is not very important since the cache expiration is 2 minutes.
Case b) is more crucial because for large responses (https://datasets-preview.huggingface.tech/rows?dataset=edbeeching%2Fdecision_transformer_gym_replay&config=halfcheetah-expert-v2&split=train for example), the creation of the JSON response can take so long that the client times out (in moon-landing, the client times out at 1.5s, while the response might take a bit longer, 3-4s)
| Warm the nginx cache after every dataset update?: Currently, when the list of splits, or the list of rows of a split, is updated, only the cache (MongoDB) is updated. But the nginx cache either a) still has the old value until it expires, or b) is still empty until the endpoint is requested for that dataset.
Case a) is not very important since the cache expiration is 2 minutes.
Case b) is more crucial because for large responses (https://datasets-preview.huggingface.tech/rows?dataset=edbeeching%2Fdecision_transformer_gym_replay&config=halfcheetah-expert-v2&split=train for example), the creation of the JSON response can take so long that the client times out (in moon-landing, the client times out at 1.5s, while the response might take a bit longer, 3-4s)
| closed | 2022-02-28T14:03:08Z | 2022-06-17T12:52:15Z | 2022-06-17T12:52:15Z | severo |
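A hedged sketch of what a warming step could look like, run by a worker right after it refreshes the MongoDB cache so that nginx builds and caches the heavy JSON before a real client asks for it (the function name and timeout are assumptions; the endpoint is the one mentioned above):
```
import requests

def warm_nginx_cache(dataset: str, config: str, split: str,
                     endpoint: str = "https://datasets-preview.huggingface.tech") -> None:
    # one GET per refreshed split: the response body is discarded, the point is only
    # to have nginx build and cache the JSON ahead of the first real request
    params = {"dataset": dataset, "config": config, "split": split}
    requests.get(f"{endpoint}/rows", params=params, timeout=30)
```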
1,150,603,977 | feat: 🎸 hide expected errors from the worker logs | null | feat: 🎸 hide expected errors from the worker logs: | closed | 2022-02-25T15:52:25Z | 2022-02-25T15:52:31Z | 2022-02-25T15:52:31Z | severo |
1,150,593,087 | fix: 🐛 force job finishing in any case | null | fix: 🐛 force job finishing in any case: | closed | 2022-02-25T15:40:31Z | 2022-02-25T15:41:51Z | 2022-02-25T15:41:49Z | severo |
1,150,293,567 | STARTED jobs with a non empty "finished_at" field | Some jobs cannot be finished correctly and get stuck in the STARTED status (see https://github.com/huggingface/datasets-preview-backend/blob/9e360f9de0df91be28001587964c1af25f82d051/src/datasets_preview_backend/io/queue.py#L253)
```
36|worker-splits-A | DEBUG: 2022-02-25 10:32:44,579 - datasets_preview_backend.io.cache - split 'validation' from dataset 'katanaml/cord' in config 'cord' had error, cache updated
36|worker-splits-A | WARNING: 2022-02-25 10:32:44,593 - datasets_preview_backend.io.queue - started job SplitJob[katanaml/cord, cord, validation] has a non-empty finished_at field. Aborting.
```
<img width="387" alt="Capture d’écran 2022-02-25 à 11 33 40" src="https://user-images.githubusercontent.com/1676121/155700317-b6f6813f-9260-44b6-878c-20debbe5444b.png">
| STARTED jobs with a non empty "finished_at" field: Some jobs cannot be finished correctly and get stuck in the STARTED status (see https://github.com/huggingface/datasets-preview-backend/blob/9e360f9de0df91be28001587964c1af25f82d051/src/datasets_preview_backend/io/queue.py#L253)
```
36|worker-splits-A | DEBUG: 2022-02-25 10:32:44,579 - datasets_preview_backend.io.cache - split 'validation' from dataset 'katanaml/cord' in config 'cord' had error, cache updated
36|worker-splits-A | WARNING: 2022-02-25 10:32:44,593 - datasets_preview_backend.io.queue - started job SplitJob[katanaml/cord, cord, validation] has a non-empty finished_at field. Aborting.
```
<img width="387" alt="Capture d’écran 2022-02-25 à 11 33 40" src="https://user-images.githubusercontent.com/1676121/155700317-b6f6813f-9260-44b6-878c-20debbe5444b.png">
| closed | 2022-02-25T10:35:42Z | 2022-02-25T15:56:28Z | 2022-02-25T15:56:28Z | severo |
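To locate such stuck jobs, a hedged diagnostic query, assuming a plain pymongo collection and the field names visible in the logs and screenshot (`status`, `finished_at`) rather than the actual mongoengine models in `io/queue.py`:
```
def find_stuck_jobs(jobs_collection):
    # jobs still marked STARTED although a finished_at date has already been set
    return list(jobs_collection.find({"status": "STARTED", "finished_at": {"$ne": None}}))
```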
1,150,281,750 | fix: 🐛 fix CI | null | fix: 🐛 fix CI: | closed | 2022-02-25T10:23:27Z | 2022-02-25T10:23:34Z | 2022-02-25T10:23:33Z | severo |