id | title | body | description | state | created_at | updated_at | closed_at | user
---|---|---|---|---|---|---|---|---
1,317,182,232 | Take decisions before launching in public | ## Version
Should we integrate a version in the path or domain, to help with future breaking changes?
Three options:
1. domain based: https://v1.datasets-server.huggingface.co
2. path based: https://datasets-server.huggingface.co/v1/
3. no version (current): https://datasets-server.huggingface.co
I think 3 is OK. Not having a version means we have to try to make everything backward-compatible, which is not a bad idea. If it's really needed, we can switch to 1 or 2 afterward. Also: having a version means that if we make breaking changes, we would have to maintain at least two versions in parallel...
## Envelope
A common pattern is to always return a JSON object with `data` or `error`. This way, we know that we can always consume the API with:
```js
const { data, error } = await (await fetch(...)).json()
```
and test for the existence of `data` or `error`. Otherwise, every endpoint might have different behavior. Also: it's useful to have the envelope when looking at the response without knowing the HTTP status code (eg: in our cache)
Options:
1. no envelope (current): the client must rely on the HTTP status code to get the type of response (error or OK)
2. envelope: we need to migrate all the endpoints to add an intermediate "data" or "error" field.
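For illustration, a minimal sketch of what option 2 could look like on the client side (the exact field names and the dataset name are only assumptions):
```js
// hypothetical enveloped responses (option 2):
//   success: { "data": { ... } }
//   error:   { "error": { "message": "...", "cause_exception": "..." } }
const response = await fetch("https://datasets-server.huggingface.co/splits?dataset=some-dataset");
const { data, error } = await response.json();
if (error) {
  // same handling for every endpoint, whatever the HTTP status code
  console.error(error.message);
} else {
  console.log(data);
}
```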
## HTTP status codes
We currently only use 200, 400, and 500 for simplicity. We might want to return additional status codes such as 404 (not found), or 401/403 (once we protect some endpoints).
Options:
1. only use 200, 400, 500 (current)
2. add more status codes, like 404, 401, 403
I think it's OK to stay with 200, 400, and 500, and let the client use the details of the response to figure out what failed.
## Error codes
Currently, the errors have a "message" field, and optionally three more fields: "cause_exception", "cause_message" and "cause_traceback". We could add a "code" field, such as "NOT_STREAMABLE", to make it more reliable for the client to implement logic based on the type of error (indeed: the message is a long string that might be updated later. A short code should be more reliable). Also: having an error code could counterbalance the lack of detailed HTTP status codes (see the previous point).
Internally, having codes could help map the messages to a dictionary, and it would help to catalog all the possible types of errors in the same place.
Options:
1. no "code" field (current)
2. add a "code" field, such as "NOT_STREAMABLE"
I'm in favor of adding such a short code.
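A minimal sketch of the internal indirection (all code names and messages except "NOT_STREAMABLE" are hypothetical):
```js
// hypothetical catalog of error codes -> messages; only NOT_STREAMABLE comes from the text above
const ERROR_MESSAGES = {
  NOT_STREAMABLE: "The split cannot be loaded in streaming mode.",
  DATASET_NOT_FOUND: "The dataset does not exist on the Hub.",
};

// build an error payload from a code, keeping the optional "cause_*" fields
function buildError(code, cause) {
  return {
    code,
    message: ERROR_MESSAGES[code],
    ...(cause && {
      cause_exception: cause.name,
      cause_message: cause.message,
    }),
  };
}
```
The client can then switch on `code` without parsing the message.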
## Case
The endpoints with several words are currently using "spinal-case", eg "/first-rows". An alternative is to use "snake_case", eg "/first_rows". Nothing important here.
Options:
1. "/spinal-case" (current)
2. "/snake_case"
I think it's not important; we can stick with spinal-case, and it's consistent with the Hub API: https://huggingface.co/docs/hub/api
| Take decisions before launching in public: ## Version
Should we integrate a version in the path or domain, to help with future breaking changes?
Three options:
1. domain based: https://v1.datasets-server.huggingface.co
2. path based: https://datasets-server.huggingface.co/v1/
3. no version (current): https://datasets-server.huggingface.co
I think 3 is OK. Not having a version means we have to try to make everything backward-compatible, which is not a bad idea. If it's really needed, we can switch to 1 or 2 afterward. Also: having a version means that if we make breaking changes, we would have to maintain at least two versions in parallel...
## Envelope
A common pattern is to always return a JSON object with `data` or `error`. This way, we know that we can always consume the API with:
```js
const { data, error } = await (await fetch(...)).json()
```
and test for the existence of `data` or `error`. Otherwise, every endpoint might have different behavior. Also: it's useful to have the envelope when looking at the response without knowing the HTTP status code (eg: in our cache)
Options:
1. no envelope (current): the client must rely on the HTTP status code to get the type of response (error or OK)
2. envelope: we need to migrate all the endpoints to add an intermediate "data" or "error" field.
## HTTP status codes
We currently only use 200, 400, and 500 for simplicity. We might want to return additional status codes such as 404 (not found), or 401/403 (once we protect some endpoints).
Options:
1. only use 200, 400, 500 (current)
2. add more status codes, like 404, 401, 403
I think it's OK to stay with 200, 400, and 500, and let the client use the details of the response to figure out what failed.
## Error codes
Currently, the errors have a "message" field, and optionally three more fields: "cause_exception", "cause_message" and "cause_traceback". We could add a "code" field, such as "NOT_STREAMABLE", to make it more reliable for the client to implement logic based on the type of error (indeed: the message is a long string that might be updated later. A short code should be more reliable). Also: having an error code could counterbalance the lack of detailed HTTP status codes (see the previous point).
Internally, having codes could help map the messages to a dictionary, and it would help to catalog all the possible types of errors in the same place.
Options:
1. no "code" field (current)
2. add a "code" field, such as "NOT_STREAMABLE"
I'm in favor of adding such a short code.
## Case
The endpoints with several words are currently using "spinal-case", eg "/first-rows". An alternative is to use "snake_case", eg "/first_rows". Nothing important here.
Options:
1. "/spinal-case" (current)
2. "/snake_case"
I think it's not important; we can stick with spinal-case, and it's consistent with the Hub API: https://huggingface.co/docs/hub/api
| closed | 2022-07-25T18:04:59Z | 2022-07-26T14:39:46Z | 2022-07-26T14:39:46Z | severo |
1,317,150,177 | Implement continuous delivery? | https://stackoverflow.blog/2021/12/20/fulfilling-the-promise-of-ci-cd/
I think it would work well for this project. | Implement continuous delivery?: https://stackoverflow.blog/2021/12/20/fulfilling-the-promise-of-ci-cd/
I think it would work well for this project. | closed | 2022-07-25T17:31:45Z | 2022-09-19T09:26:05Z | 2022-09-19T09:26:05Z | severo |
1,317,006,455 | Use parallelism when streaming datasets | New in [2.4.0](https://github.com/huggingface/datasets/releases/tag/2.4.0)
https://huggingface.co/docs/datasets/v2.4.0/en/use_with_pytorch#use-multiple-workers
Related to https://github.com/huggingface/datasets-server/issues/416: we will need to adapt the number of cpus allocated to every pod to the number of workers assigned to the data loader. | Use parallelism when streaming datasets: New in [2.4.0](https://github.com/huggingface/datasets/releases/tag/2.4.0)
https://huggingface.co/docs/datasets/v2.4.0/en/use_with_pytorch#use-multiple-workers
Related to https://github.com/huggingface/datasets-server/issues/416: we will need to adapt the number of cpus allocated to every pod to the number of workers assigned to the data loader. | closed | 2022-07-25T15:25:14Z | 2022-09-19T09:27:00Z | 2022-09-19T09:26:59Z | severo |
1,316,986,294 | feat: 🎸 add a script to refresh the canonical datasets | it's useful to relaunch the jobs on the canonical datasets after
upgrading the datasets library in service/worker. Indeed, the canonical
datasets are versioned in the same repo as the datasets library, and are
thus expected to work with that version. Also: every new release of
datasets triggers an update (synchro) of every canonical dataset on the
hub, which triggers the datasets-server webhook and adds a refresh job
for every canonical dataset. As they are still run with the outdated
version of the datasets library, it's good to refresh them once the
library has been upgraded | feat: 🎸 add a script to refresh the canonical datasets: it's useful to relaunch the jobs on the canonical datasets after
upgrading the datasets library in service/worker. Indeed, the canonical
datasets are versioned in the same repo as the datasets library, and are
thus expected to work with that version. Also: every new release of
datasets triggers an update (synchro) of every canonical dataset on the
hub, which triggers the datasets-server webhook and adds a refresh job
for every canonical dataset. As they are still run with the outdated
version of the datasets library, it's good to refresh them once the
library has been upgraded | closed | 2022-07-25T15:13:30Z | 2022-07-25T15:32:29Z | 2022-07-25T15:19:16Z | severo |
1,315,431,304 | refactor: 💡 move ingress to the root in values | as ingress sends to admin and to reverse proxy, not only to reverse
proxy | refactor: 💡 move ingress to the root in values: as ingress sends to admin and to reverse proxy, not only to reverse
proxy | closed | 2022-07-22T21:30:20Z | 2022-07-22T21:43:44Z | 2022-07-22T21:30:39Z | severo |
1,315,429,027 | fix: 🐛 fix domains (we had to request them from Route53) | null | fix: 🐛 fix domains (we had to request them from Route53): | closed | 2022-07-22T21:26:04Z | 2022-07-22T21:38:58Z | 2022-07-22T21:26:17Z | severo |
1,315,426,481 | fix: 🐛 remove the conflict for the admin domain between dev and prod | null | fix: 🐛 remove the conflict for the admin domain between dev and prod: | closed | 2022-07-22T21:21:37Z | 2022-07-22T21:35:10Z | 2022-07-22T21:21:57Z | severo |
1,315,398,190 | Create a proper domain for the admin API | Currently we use admin-datasets-server.us.dev.moon.huggingface.tech, for example:
https://admin-datasets-server.us.dev.moon.huggingface.tech/pending-jobs
It would be better to have something more directly under huggingface.tech, eg datasets-server.huggingface.tech, or datasets-server-admin.huggingface.tech.
| Create a proper domain for the admin API: Currently we use admin-datasets-server.us.dev.moon.huggingface.tech, for example:
https://admin-datasets-server.us.dev.moon.huggingface.tech/pending-jobs
It would be better to have something more directly under huggingface.tech, eg datasets-server.huggingface.tech, or datasets-server-admin.huggingface.tech.
| closed | 2022-07-22T20:37:19Z | 2022-09-06T19:51:15Z | 2022-09-06T19:51:15Z | severo |
1,315,386,930 | Move /webhook to admin instead of api? | As we've done with the technical endpoints in https://github.com/huggingface/datasets-server/pull/457?
It might help to protect the endpoint (#95), even if it's not really dangerous to let people add jobs to refresh datasets IMHO for now. | Move /webhook to admin instead of api?: As we've done with the technical endpoints in https://github.com/huggingface/datasets-server/pull/457?
It might help to protect the endpoint (#95), even if it's not really dangerous to let people add jobs to refresh datasets IMHO for now. | closed | 2022-07-22T20:21:39Z | 2022-09-16T17:24:05Z | 2022-09-16T17:24:05Z | severo |
1,315,382,368 | feat: 🎸 move two technical endpoints from api to admin | Related to https://github.com/huggingface/datasets-server/issues/95 | feat: 🎸 move two technical endpoints from api to admin: Related to https://github.com/huggingface/datasets-server/issues/95 | closed | 2022-07-22T20:15:41Z | 2022-07-22T20:35:57Z | 2022-07-22T20:22:38Z | severo |
1,315,352,316 | feat: 🎸 update docker images | null | feat: 🎸 update docker images: | closed | 2022-07-22T19:33:52Z | 2022-07-22T19:49:13Z | 2022-07-22T19:36:26Z | severo |
1,315,347,278 | what to do with /is-valid? | Currently, the endpoint /is-valid is not documented in https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json (but it is in https://github.com/huggingface/datasets-server/blob/main/services/api/README.md).
It's not used in the dataset viewer in moonlanding, but https://github.com/huggingface/model-evaluator uses it (cc @lewtun).
I have the impression that we could change this endpoint to something more precise, since "valid" is a bit loose, and it will become less and less precise as other services are added to the datasets server (statistics, random access, parquet files, etc). Instead, maybe we could create a new endpoint with more details about which services are working for the dataset. Or do we consider a dataset valid if all the services are available?
What should we do?
- [ ] keep it this way
- [ ] create a new endpoint with details of the available services
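For the second option, a purely hypothetical response shape (endpoint and field names are assumptions, not a proposal of the actual API):
```js
// hypothetical per-service validity report for a dataset
const report = {
  dataset: "some-dataset",
  services: {
    splits: true,      // /splits-next is available
    first_rows: true,  // /first-rows is available for all the splits
    // future services (statistics, random access, parquet, ...) would add their own flags
  },
};
```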
also cc @lhoestq | what to do with /is-valid?: Currently, the endpoint /is-valid is not documented in https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json (but it is in https://github.com/huggingface/datasets-server/blob/main/services/api/README.md).
It's not used in the dataset viewer in moonlanding, but https://github.com/huggingface/model-evaluator uses it (cc @lewtun).
I have the impression that we could change this endpoint to something more precise, since "valid" is a bit loose, and it will become less and less precise as other services are added to the datasets server (statistics, random access, parquet files, etc). Instead, maybe we could create a new endpoint with more details about which services are working for the dataset. Or do we consider a dataset valid if all the services are available?
What should we do?
- [ ] keep it this way
- [ ] create a new endpoint with details of the available services
also cc @lhoestq | closed | 2022-07-22T19:29:08Z | 2022-08-02T14:16:24Z | 2022-08-02T14:16:24Z | severo |
1,315,342,771 | Improve technical routes response | null | Improve technical routes response: | closed | 2022-07-22T19:22:47Z | 2022-07-22T19:42:03Z | 2022-07-22T19:29:26Z | severo |
1,315,035,515 | fix: 🐛 increase cpu limit for split worker, and reduce per ds | null | fix: 🐛 increase cpu limit for split worker, and reduce per ds: | closed | 2022-07-22T13:49:32Z | 2022-07-22T14:03:04Z | 2022-07-22T13:49:37Z | severo |
1,314,994,614 | fix: 🐛 add cpu for the first-rows worker | we had a lot of alerts "CPUThrottlingHigh", eg "43.1% throttling of
CPU". | fix: 🐛 add cpu for the first-rows worker: we had a lot of alerts "CPUThrottlingHigh", eg "43.1% throttling of
CPU". | closed | 2022-07-22T13:14:29Z | 2022-07-22T13:28:49Z | 2022-07-22T13:15:49Z | severo |
1,313,790,139 | Update grafana dashboards to /splits-next and /first-rows | To see the content of the cache | Update grafana dashboards to /splits-next and /first-rows: To see the content of the cache | closed | 2022-07-21T20:54:15Z | 2022-08-01T20:55:49Z | 2022-08-01T20:55:49Z | severo |
1,313,788,329 | Update the client (moonlanding) to use /splits-next and /first-rows | Create a PR on moonlanding to:
- [x] use the new /splits-next and /first-rows endpoints instead of /splits and /rows -> https://github.com/huggingface/moon-landing/pull/3650
- [x] adapt the design to handle better the error cases (400 -> Dataset Error, open a discussion, 500 -> Server Error, retry or open an issue) + the format has changed a bit -> not handled here, see https://github.com/huggingface/moon-landing/issues/3721
- [x] discovery of the API: show the query + link to the doc (https://huggingface.co/docs/datasets-server/api_reference), so that the users can start using the API -> not handled here, see https://github.com/huggingface/moon-landing/issues/3722
| Update the client (moonlanding) to use /splits-next and /first-rows: Create a PR on moonlanding to:
- [x] use the new /splits-next and /first-rows endpoints instead of /splits and /rows -> https://github.com/huggingface/moon-landing/pull/3650
- [x] adapt the design to handle better the error cases (400 -> Dataset Error, open a discussion, 500 -> Server Error, retry or open an issue) + the format has changed a bit -> not handled here, see https://github.com/huggingface/moon-landing/issues/3721
- [x] discovery of the API: show the query + link to the doc (https://huggingface.co/docs/datasets-server/api_reference), so that the users can start using the API -> not handled here, see https://github.com/huggingface/moon-landing/issues/3722
| closed | 2022-07-21T20:52:27Z | 2022-09-07T08:45:15Z | 2022-09-07T08:45:15Z | severo |
1,313,750,032 | docs: ✏️ nit | null | docs: ✏️ nit: | closed | 2022-07-21T20:12:08Z | 2022-07-21T20:25:15Z | 2022-07-21T20:12:13Z | severo |
1,313,746,501 | docs: ✏️ multiple fixes on the openapi spec | null | docs: ✏️ multiple fixes on the openapi spec: | closed | 2022-07-21T20:08:40Z | 2022-07-21T20:22:03Z | 2022-07-21T20:08:57Z | severo |
1,313,738,855 | Add examples for every type of feature and cell in openapi spec | It will help the users to have an idea of the different responses
https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/listRows | Add examples for every type of feature and cell in openapi spec: It will help the users to have an idea of the different responses
https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/listRows | closed | 2022-07-21T20:01:02Z | 2022-09-19T09:28:59Z | 2022-09-19T09:28:59Z | severo |
1,313,731,897 | check that the openapi specification is valid | It should be triggered by `make quality` in the infra/charts/datasets-server directory and by the quality GitHub action | check that the openapi specification is valid: It should be triggered by `make quality` in the infra/charts/datasets-server directory and by the quality GitHub action | closed | 2022-07-21T19:54:02Z | 2023-08-11T18:35:08Z | 2023-08-11T18:34:25Z | severo |
1,313,727,638 | Add two endpoints to openapi | null | Add two endpoints to openapi: | closed | 2022-07-21T19:49:50Z | 2022-07-21T20:03:15Z | 2022-07-21T19:50:39Z | severo |
1,311,788,284 | 404 improve error messages | null | 404 improve error messages: | closed | 2022-07-20T19:45:00Z | 2022-07-21T14:52:31Z | 2022-07-21T14:39:52Z | severo |
1,311,612,113 | 442 500 error if not ready | null | 442 500 error if not ready: | closed | 2022-07-20T17:57:36Z | 2022-07-20T18:38:41Z | 2022-07-20T18:26:09Z | severo |
1,311,345,357 | Return 500 error when a resource is not ready | related to #404
If a dataset has been created, and the webhook has been triggered, the dataset should be in the queue. Before a worker has completed the creation of the response for the endpoints of this dataset, if a request is received on these endpoints, we currently return 400, telling that the resource does not exist.
It's better to check the content of the queue, and return a 500 error (the server has "failed" to create the resource in time). This allows us to separate this case from a request to a non-existent dataset. A 500 error means that the client can retry with the same request later.
Another option would have been to return 200, with a response that includes the state (in progress / done) and the data if any, but it would show internals of the server, and most importantly, would make the client more complicated for no reason.
If a dataset has been created, and the webhook has been triggered, the dataset should be in the queue. Before a worker has completed the creation of the response for the endpoints of this dataset, if a request is received on these endpoints, we currently return 400, telling that the resource does not exist.
It's better to check the content of the queue, and return a 500 error (the server has "failed" to create the resource in time). This allows us to separate this case from a request to a non-existent dataset. A 500 error means that the client can retry with the same request later.
Another option would have been to return 200, with a response that includes the state (in progress / done) and the data if any, but it would show internals of the server, and most importantly, would make the client more complicated for no reason. | closed | 2022-07-20T15:28:01Z | 2022-07-20T18:26:11Z | 2022-07-20T18:26:10Z | severo |
1,310,072,384 | Opensource the Hub dataset viewer | The dataset viewer should be embeddable as an iframe on other websites.
The code should be extracted from moonlanding.
It will be an example of an API client, and it will help foster contributions. | Opensource the Hub dataset viewer: The dataset viewer should be embeddable as an iframe on other websites.
The code should be extracted from moonlanding.
It will be an example of an API client, and it will help foster contributions. | closed | 2022-07-19T21:38:54Z | 2022-09-19T09:27:17Z | 2022-09-19T09:27:17Z | severo |
1,310,069,979 | Improve the developer experience to run the services | Currently, running the services from docker images is easy but running from the python code is not that simple: we have to go to the directory and launch `make run`, **after** having launched the required mongo db instance for example, and having specified all the environment variables in a `.env` file.
I think we could make it simpler and better documented. | Improve the developer experience to run the services: Currently, running the services from docker images is easy but running from the python code is not that simple: we have to go to the directory and launch `make run`, **after** having launched the required mongo db instance for example, and having specified all the environment variables in a `.env` file.
I think we could make it simpler and better documented. | closed | 2022-07-19T21:35:45Z | 2022-09-16T17:26:58Z | 2022-09-16T17:26:58Z | severo |
1,310,068,106 | Associate each request with an ID to help debug | The ID must be added to the logs and to the errors returned by the API
Include in the responses with the header `X-Request-Id`. See https://github.com/huggingface/datasets-server/issues/466#issuecomment-1195528239 | Associate each request with an ID to help debug: The ID must be added to the logs and to the errors returned by the API
Include in the responses with the header `X-Request-Id`. See https://github.com/huggingface/datasets-server/issues/466#issuecomment-1195528239 | closed | 2022-07-19T21:33:21Z | 2022-09-19T09:28:27Z | 2022-09-19T09:28:27Z | severo |
1,308,472,893 | Use main instead of master to load the datasets | The main branch in https://github.com/huggingface/datasets is now `main`, not `master` anymore. Note that it's backward compatible, so no need to hurry | Use main instead of master to load the datasets: The main branch in https://github.com/huggingface/datasets is now `main`, not `master` anymore. Note that it's backward compatible, so no need to hurry | closed | 2022-07-18T19:46:43Z | 2022-07-26T16:21:59Z | 2022-07-26T16:21:59Z | severo |
1,308,273,488 | 401 error "Unauthorized" when accessing a CSV file | See https://github.com/huggingface/datasets/issues/4707
```
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/TheNoob3131/mosquito-data/resolve/8aceebd6c4a359d216d10ef020868bd9e8c986dd/0_Africa_train.csv')
```
I don't understand why we have this kind of error from the hub. I have asked for details here: https://github.com/huggingface/datasets/issues/4707#issuecomment-1187819353, but I'm wondering what kind of state could have led to this error. If the dataset was private it would not have triggered the creation of the splits. If it was gated, it should have worked because the datasets-server can access the gated datasets. | 401 error "Unauthorized" when accessing a CSV file: See https://github.com/huggingface/datasets/issues/4707
```
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/TheNoob3131/mosquito-data/resolve/8aceebd6c4a359d216d10ef020868bd9e8c986dd/0_Africa_train.csv')
```
I don't understand why we have this kind of error from the hub. I have asked for details here: https://github.com/huggingface/datasets/issues/4707#issuecomment-1187819353, but I'm wondering what kind of state could have led to this error. If the dataset was private it would not have triggered the creation of the splits. If it was gated, it should have worked because the datasets-server can access the gated datasets. | closed | 2022-07-18T17:20:42Z | 2022-09-16T17:29:08Z | 2022-09-16T17:29:08Z | severo |
1,308,164,916 | Catch all the exceptions and return the expected error format | See https://github.com/huggingface/datasets/issues/4596.
The endpoint https://datasets-server.huggingface.co/splits?dataset=universal_dependencies returns:
```
Internal Server Error
```
instead of a JSON.
It results in moon-landing showing a weird and unrelated error (it expects a JSON, it gets a text content):
> invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
<img width="804" alt="Capture d’écran 2022-07-18 à 11 55 26" src="https://user-images.githubusercontent.com/1676121/179552382-2218bf9e-9a7e-4552-8440-428632f574fe.png">
| Catch all the exceptions and return the expected error format: See https://github.com/huggingface/datasets/issues/4596.
The endpoint https://datasets-server.huggingface.co/splits?dataset=universal_dependencies returns:
```
Internal Server Error
```
instead of a JSON.
It results in moon-landing showing a weird and unrelated error (it expects a JSON, it gets a text content):
> invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
<img width="804" alt="Capture d’écran 2022-07-18 à 11 55 26" src="https://user-images.githubusercontent.com/1676121/179552382-2218bf9e-9a7e-4552-8440-428632f574fe.png">
| closed | 2022-07-18T15:56:25Z | 2022-07-21T14:40:36Z | 2022-07-21T14:39:53Z | severo |
1,308,161,856 | Error on /splits endpoint due to Mongo memory | Error in the logs while accessing https://datasets-server.huggingface.co/splits?dataset=universal_dependencies
```
pymongo.errors.OperationFailure: error while multiplanner was selecting best plan :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting., full error: {'ok': 0.0, 'errmsg': 'error while multiplanner was selecting best plan :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.', 'code': 292, 'codeName': 'QueryExceededMemoryLimitNoDiskUseAllowed', '$clusterTime': {'clusterTime': Timestamp(1658129785, 1), 'signature': {'hash': b'="1\x9b\x8cw5{mE\xb1L#\xb6\x83iE\xc7ju', 'keyId': 7077944093346627589}}, 'operationTime': Timestamp(1658129785, 1)}
```
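One possible mitigation, assuming the failing query is a large sort on the cache collection (the collection and field names below are assumptions), is to opt in to external sorting, eg in the mongo shell:
```js
// let the server spill the sort to disk instead of failing at the 100MB in-memory limit
db.splits.find({ dataset_name: "universal_dependencies" })
  .sort({ split_idx: 1 })
  .allowDiskUse()
```
Adding an index that matches the sort would avoid the in-memory sort altogether.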
See https://github.com/huggingface/datasets/issues/4596 | Error on /splits endpoint due to Mongo memory: Error in the logs while accessing https://datasets-server.huggingface.co/splits?dataset=universal_dependencies
```
pymongo.errors.OperationFailure: error while multiplanner was selecting best plan :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting., full error: {'ok': 0.0, 'errmsg': 'error while multiplanner was selecting best plan :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.', 'code': 292, 'codeName': 'QueryExceededMemoryLimitNoDiskUseAllowed', '$clusterTime': {'clusterTime': Timestamp(1658129785, 1), 'signature': {'hash': b'="1\x9b\x8cw5{mE\xb1L#\xb6\x83iE\xc7ju', 'keyId': 7077944093346627589}}, 'operationTime': Timestamp(1658129785, 1)}
```
See https://github.com/huggingface/datasets/issues/4596 | closed | 2022-07-18T15:53:48Z | 2022-09-07T11:30:28Z | 2022-09-07T11:30:27Z | severo |
1,308,069,742 | upgrade datasets | Current issues that would be solved:
- https://github.com/huggingface/datasets/issues/4671
- https://github.com/huggingface/datasets/issues/4477
- https://huggingface.co/datasets/chrisjay/mnist-adversarial-dataset does not work
A new release should appear this week: https://huggingface.slack.com/archives/C031T8QME5N/p1658171737694089?thread_ts=1656963612.494119&cid=C031T8QME5N (promised by @mariosasko 😛 ) | upgrade datasets: Current issues that would be solved:
- https://github.com/huggingface/datasets/issues/4671
- https://github.com/huggingface/datasets/issues/4477
- https://huggingface.co/datasets/chrisjay/mnist-adversarial-dataset does not work
A new release should appear this week: https://huggingface.slack.com/archives/C031T8QME5N/p1658171737694089?thread_ts=1656963612.494119&cid=C031T8QME5N (promised by @mariosasko 😛 ) | closed | 2022-07-18T14:42:52Z | 2022-07-25T20:38:07Z | 2022-07-25T20:38:07Z | severo |
1,299,864,327 | wording tweak | null | wording tweak: | closed | 2022-07-10T08:40:53Z | 2022-07-19T20:40:00Z | 2022-07-19T20:26:41Z | julien-c |
1,295,972,932 | Dataset preview rounds numbers | When previewing a dataset using the dataset viewer, e.g. : https://huggingface.co/datasets/sasha/real_toxicity_prompts
Values close to 1 (e.g. 0.9999) are rounded up in the UI, which is a bit confusing.
@lhoestq proposed that maybe the function to use here would be `floor(value)`, not `round(value)`
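A quick sketch of the difference, assuming the UI truncates to 4 decimal places:
```js
// with round(), a value close to 1 is displayed as 1; with floor(), it stays below 1
const value = 0.99999;
const rounded = Math.round(value * 1e4) / 1e4; // 1
const floored = Math.floor(value * 1e4) / 1e4; // 0.9999
```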
For reference, the original dataset is:
![image](https://user-images.githubusercontent.com/14205986/177581736-84d28059-cb1d-4d8e-b572-842fc6689d4a.png)
| Dataset preview rounds numbers: When previewing a dataset using the dataset viewer, e.g. : https://huggingface.co/datasets/sasha/real_toxicity_prompts
Values close to 1 (e.g. 0.9999) are rounded up in the UI, which is a bit confusing.
@lhoestq proposed that maybe the function to use here would be `floor(value)`, not `round(value)`
For reference, the original dataset is:
![image](https://user-images.githubusercontent.com/14205986/177581736-84d28059-cb1d-4d8e-b572-842fc6689d4a.png)
| closed | 2022-07-06T15:01:47Z | 2022-07-07T14:41:38Z | 2022-07-07T08:58:01Z | sashavor |
1,290,333,873 | Add /first-rows endpoint | null | Add /first-rows endpoint: | closed | 2022-06-30T15:50:22Z | 2022-07-19T21:09:23Z | 2022-07-19T20:56:10Z | severo |
1,289,785,403 | Shuffle the rows? | see https://github.com/huggingface/moon-landing/issues/3375 | Shuffle the rows?: see https://github.com/huggingface/moon-landing/issues/3375 | closed | 2022-06-30T08:31:20Z | 2023-09-08T13:41:42Z | 2023-09-08T13:41:42Z | severo |
1,289,779,412 | Protect the gated datasets | https://github.com/huggingface/autonlp-backend/issues/598#issuecomment-1170917568
Currently, the gated datasets are freely available through the API, see: https://datasets-server.huggingface.co/splits?dataset=imagenet-1k and https://datasets-server.huggingface.co/rows?dataset=imagenet-1k&config=default&split=train
Should we protect them, requiring the call to provide a token or a cookie, and checking live that they have the right to access it, as it's done for the tensorboard tab of private models?
I have the intuition that the list of splits and the first 100 rows are more like metadata than data, and therefore can be treated like the dataset card and be public, but I might be wrong. | Protect the gated datasets: https://github.com/huggingface/autonlp-backend/issues/598#issuecomment-1170917568
Currently, the gated datasets are freely available through the API, see: https://datasets-server.huggingface.co/splits?dataset=imagenet-1k and https://datasets-server.huggingface.co/rows?dataset=imagenet-1k&config=default&split=train
Should we protect them, requiring the call to provide a token or a cookie, and checking live that they have the right to access it, as it's done for the tensorboard tab of private models?
I have the intuition that the list of splits and the first 100 rows are more like metadata than data, and therefore can be treated like the dataset card and be public, but I might be wrong. | closed | 2022-06-30T08:26:18Z | 2022-08-05T21:28:51Z | 2022-08-05T21:28:51Z | severo |
1,288,748,989 | in first-rows: change "columns" for "features" | In order to rely on the https://github.com/huggingface/datasets library instead of maintaining a layer in datasets-server, we will directly return the features.
This way, the vocabulary / types will be maintained in `datasets` and we will have a better consistency.
Also note that type inference, which wasn't implemented in `datasets`, is now available (for streaming, which is ok here): https://github.com/huggingface/datasets/blob/f5826eff9b06ab10dba1adfa52543341ef1e6009/src/datasets/iterable_dataset.py#L1255.
This means that we will have to adapt the client (https://github.com/huggingface/moon-landing/blob/main/server/lib/DatasetApiClient.ts and https://github.com/huggingface/moon-landing/blob/main/server/lib/DatasetViewer.ts) as well. | in first-rows: change "columns" for "features": In order to rely on the https://github.com/huggingface/datasets library instead of maintaining a layer in datasets-server, we will directly return the features.
This way, the vocabulary / types will be maintained in `datasets` and we will have a better consistency.
Also note that type inference, which wasn't implemented in `datasets`, is now available (for streaming, which is ok here): https://github.com/huggingface/datasets/blob/f5826eff9b06ab10dba1adfa52543341ef1e6009/src/datasets/iterable_dataset.py#L1255.
This means that we will have to adapt the client (https://github.com/huggingface/moon-landing/blob/main/server/lib/DatasetApiClient.ts and https://github.com/huggingface/moon-landing/blob/main/server/lib/DatasetViewer.ts) as well. | closed | 2022-06-29T13:42:40Z | 2022-07-19T21:23:10Z | 2022-07-19T21:23:10Z | severo |
1,288,739,699 | Deprecate /rows, and replace /splits with the current /splits-next | This will make it clearer that we only return the first rows. The idea is to keep this endpoint even when random access will be available (#13):
- /first-rows will use streaming to give access to the first rows very quickly after a dataset has been created or updated
- /rows will allow access to all the rows but will only be available once the dataset has been downloaded
| Deprecate /rows, and replace /splits with the current /splits-next: This will make it clearer that we only return the first rows. The idea is to keep this endpoint even when random access will be available (#13):
- /first-rows will use streaming to give access to the first rows very quickly after a dataset has been created or updated
- /rows will allow access to all the rows but will only be available once the dataset has been downloaded
| closed | 2022-06-29T13:36:35Z | 2022-09-07T11:58:49Z | 2022-09-07T11:58:49Z | severo |
1,288,410,440 | feat: 🎸 publish openapi.json from the reverse proxy | at the root: /openapi.json | feat: 🎸 publish openapi.json from the reverse proxy: at the root: /openapi.json | closed | 2022-06-29T09:14:07Z | 2022-06-29T09:29:17Z | 2022-06-29T09:18:13Z | severo |
1,278,054,622 | Explorer shouldn't fail if one of the dataset configs require manual download | As of now, if one of the configs for the dataset requires manual download but the rest work fine, the explorer fails with a 400 error for all of the configs. The case in consideration is https://huggingface.co/datasets/facebook/pmd where we have a subset "flickr30k" which is not downloaded by default. You have to specifically pass `use_flickr30k` to the main config to load `flickr30k`. I added a specific config for flickr30k with `use_flickr30k` already set to true and the explorer failed. See the commit reverting the change for the exact code of the config: https://huggingface.co/datasets/facebook/pmd/commit/f876c78543d548e59d60c77a363cc3d2138d1319 | Explorer shouldn't fail if one of the dataset configs require manual download: As of now, if one of the configs for the dataset requires manual download but the rest work fine, the explorer fails with a 400 error for all of the configs. The case in consideration is https://huggingface.co/datasets/facebook/pmd where we have a subset "flickr30k" which is not downloaded by default. You have to specifically pass `use_flickr30k` to the main config to load `flickr30k`. I added a specific config for flickr30k with `use_flickr30k` already set to true and the explorer failed. See the commit reverting the change for the exact code of the config: https://huggingface.co/datasets/facebook/pmd/commit/f876c78543d548e59d60c77a363cc3d2138d1319 | closed | 2022-06-29T01:06:01Z | 2022-09-16T20:01:13Z | 2022-09-16T20:01:08Z | apsdehal |
1,287,589,379 | Create the OpenAPI spec | null | Create the OpenAPI spec: | closed | 2022-06-28T16:18:28Z | 2022-06-28T16:30:06Z | 2022-06-28T16:19:22Z | severo |
1,287,199,028 | Add terms of service to the API? | See https://swagger.io/specification/#info-object
Maybe to mention a rate-limiter, if we implement one | Add terms of service to the API?: See https://swagger.io/specification/#info-object
Maybe to mention a rate-limiter, if we implement one | closed | 2022-06-28T11:27:16Z | 2022-09-16T17:30:38Z | 2022-09-16T17:30:38Z | severo |
1,285,791,289 | Create the Hugging Face doc for datasets-server | See https://huggingface.slack.com/archives/C02GLJ5S0E9/p1654164421919969
I. Creating a new library & docs for it
- [x] Create a docs/source folder under which you’ll have your docs and _toctree.yml. Example [here](https://github.com/huggingface/datasets/blob/master/docs/source/_toctree.yml).
- [x] give [HuggingFaceDocBuilderDev](https://github.com/HuggingFaceDocBuilderDev) read access to the repository in order to post comments to the PR (see https://github.com/huggingface/api-inference/pull/799)
- [x] Need to setup 3 GitHub Actions workflows, as you can see [here](https://github.com/huggingface/accelerate/tree/main/.github/workflows) for accelerate. These depend on the doc-builder workflow, and are the following. You should be able to copy/paste 95% of it, just updating the library name.
- `build_documentation.yml`
- `build_pr_documentation.yml`
- `delete_doc_comment.yml`
- [x] update https://moon-ci-docs.huggingface.co/ to support new lib (mystery step by @mishig25, see https://huggingface.slack.com/archives/C03969VA7NK/p1656337784008969?thread_ts=1656327306.784249&cid=C03969VA7NK)
- [x] Setup a `HUGGINGFACE_PUSH` secret which will allow the jobs to push to the doc-build repository. Tag @LysandreJik
- [x] Create a `{library}/_versions.yml` in [doc-build repo](https://github.com/huggingface/doc-build). Example [here](https://github.com/huggingface/doc-build/blob/main/transformers/_versions.yml) - see https://github.com/huggingface/doc-build/pull/27
- [x] add library crawler in docsearch. Example [here](https://github.com/huggingface/huggingface-meilisearch/pull/28) - see https://github.com/huggingface/huggingface-meilisearch/pull/30
- [x] Add `{library}` to the supported libraries for the backend. See example [PR](https://github.com/huggingface/moon-landing/pull/2443) for huggingface_hub - see https://github.com/huggingface/moon-landing/pull/3349
- [x] Add `{library}` to the documentation page. See example [PR](https://github.com/huggingface/moon-landing/pull/2518) for huggingface_hub - see https://github.com/huggingface/moon-landing/pull/3349
- [x] Make sure the thumbnail exists [here](https://github.com/huggingface/moon-landing/tree/main/front/thumbnails/docs) - see https://github.com/huggingface/moon-landing/pull/3349
II. Changing layout of a doc (or migration from Sphinx)
- [x] Make sure to redirect all old/broken doc links to new/updated doc links. See more [here](https://huggingface.slack.com/archives/C03969VA7NK/p1654163476565859)
III. Writing docs:
- [x] For syntax, read doc-builder README [here](https://github.com/huggingface/doc-builder#readme). We use a custom syntax that is: markdown + html + custom directives for autodoc (if you face any issue, tag @sgugger @mishig25)
- [x] Upload your assets/imgs to [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images/tree/main)
- [x] Read [documentation style guide](https://www.notion.so/huggingface2/Hugging-Face-documentation-style-guide-ef64d469b4df4bea9d217101e1de96d0) from @stevhliu
- [x] Use [doc-builder preview](https://github.com/huggingface/doc-builder#previewing) cmd while writing docs for faster feedback (tag @mishig25 for any issues) | Create the Hugging Face doc for datasets-server: See https://huggingface.slack.com/archives/C02GLJ5S0E9/p1654164421919969
I. Creating a new library & docs for it
- [x] Create a docs/source folder under which you’ll have your docs and _toctree.yml. Example [here](https://github.com/huggingface/datasets/blob/master/docs/source/_toctree.yml).
- [x] give [HuggingFaceDocBuilderDev](https://github.com/HuggingFaceDocBuilderDev) read access to the repository in order to post comments to the PR (see https://github.com/huggingface/api-inference/pull/799)
- [x] Need to setup 3 GitHub Actions workflows, as you can see [here](https://github.com/huggingface/accelerate/tree/main/.github/workflows) for accelerate. These depend on the doc-builder workflow, and are the following. You should be able to copy/paste 95% of it, just updating the library name.
- `build_documentation.yml`
- `build_pr_documentation.yml`
- `delete_doc_comment.yml`
- [x] update https://moon-ci-docs.huggingface.co/ to support new lib (mystery step by @mishig25, see https://huggingface.slack.com/archives/C03969VA7NK/p1656337784008969?thread_ts=1656327306.784249&cid=C03969VA7NK)
- [x] Setup a `HUGGINGFACE_PUSH` secret which will allow the jobs to push to the doc-build repository. Tag @LysandreJik
- [x] Create a `{library}/_versions.yml` in [doc-build repo](https://github.com/huggingface/doc-build). Example [here](https://github.com/huggingface/doc-build/blob/main/transformers/_versions.yml) - see https://github.com/huggingface/doc-build/pull/27
- [x] add library crawler in docsearch. Example [here](https://github.com/huggingface/huggingface-meilisearch/pull/28) - see https://github.com/huggingface/huggingface-meilisearch/pull/30
- [x] Add `{library}` to the supported libraries for the backend. See example [PR](https://github.com/huggingface/moon-landing/pull/2443) for huggingface_hub - see https://github.com/huggingface/moon-landing/pull/3349
- [x] Add `{library}` to the documentation page. See example [PR](https://github.com/huggingface/moon-landing/pull/2518) for huggingface_hub - see https://github.com/huggingface/moon-landing/pull/3349
- [x] Make sure the thumbnail exists [here](https://github.com/huggingface/moon-landing/tree/main/front/thumbnails/docs) - see https://github.com/huggingface/moon-landing/pull/3349
II. Changing layout of a doc (or migration from Sphinx)
- [x] Make sure to redirect all old/broken doc links to new/updated doc links. See more [here](https://huggingface.slack.com/archives/C03969VA7NK/p1654163476565859)
III. Writing docs:
- [x] For syntax, read doc-builder README [here](https://github.com/huggingface/doc-builder#readme). We use a custom syntax that is: markdown + html + custom directives for autodoc (if you face any issue, tag @sgugger @mishig25)
- [x] Upload your assets/imgs to [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images/tree/main)
- [x] Read [documentation style guide](https://www.notion.so/huggingface2/Hugging-Face-documentation-style-guide-ef64d469b4df4bea9d217101e1de96d0) from @stevhliu
- [x] Use [doc-builder preview](https://github.com/huggingface/doc-builder#previewing) cmd while writing docs for faster feedback (tag @mishig25 for any issues) | closed | 2022-06-27T13:13:54Z | 2022-06-28T09:42:37Z | 2022-06-28T09:38:08Z | severo |
1,285,612,584 | feat: 🎸 add basis for the docs | null | feat: 🎸 add basis for the docs: | closed | 2022-06-27T10:45:24Z | 2022-06-28T08:51:36Z | 2022-06-28T08:40:52Z | severo |
1,285,567,104 | Add a license (and copyright headers to the files)? | null | Add a license (and copyright headers to the files)?: | closed | 2022-06-27T10:07:56Z | 2022-09-19T09:24:20Z | 2022-09-19T09:24:19Z | severo |
1,285,530,183 | Add docstrings | See https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html (used by datasets and transformers) | Add docstrings: See https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html (used by datasets and transformers) | closed | 2022-06-27T09:40:40Z | 2022-09-19T09:21:21Z | 2022-09-19T09:21:21Z | severo |
1,282,569,978 | fix: 🐛 set the modules cache inside /tmp | by default, it was created in `/.cache` which cannot be written. | fix: 🐛 set the modules cache inside /tmp: by default, it was created in `/.cache` which cannot be written. | closed | 2022-06-23T15:14:05Z | 2022-06-23T15:14:12Z | 2022-06-23T15:14:11Z | severo |
1,282,523,236 | split the dependencies of the worker service to install less in the CI | Possibly the tests and code quality don't need to install all the dependencies in the worker service. If we install fewer dependencies, we will reduce the time.
idea by @lhoestq https://github.com/huggingface/datasets-server/issues/259#issuecomment-1164469775 | split the dependencies of the worker service to install less in the CI: Possibly the tests and code quality don't need to install all the dependencies in the worker service. If we install fewer dependencies, we will reduce the time.
idea by @lhoestq https://github.com/huggingface/datasets-server/issues/259#issuecomment-1164469775 | closed | 2022-06-23T14:41:50Z | 2022-09-19T09:27:53Z | 2022-09-19T09:27:53Z | severo |
1,282,323,235 | Remove the Kubernetes CPU "limits"? | https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-%28Prometheus-Alert%29#why-you-dont-need-cpu-limits
> ## Why you don't need CPU limits
>
> As long as your pod has a CPU request, [Kubernetes maintainers like Tim Hockin recommend not using limits at all](https://twitter.com/thockin/status/1134193838841401345). This way pods are free to use spare CPU instead of letting the CPU stay idle.
>
> Contrary to common belief, [even if you remove this pod's CPU limit, other pods are still guaranteed the CPU they requested](https://github.com/kubernetes/design-proposals-archive/blob/8da1442ea29adccea40693357d04727127e045ed/node/resource-qos.md#compressible-resource-guaranteess). The CPU limit only effects how spare CPU is distributed. | Remove the Kubernetes CPU "limits"?: https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-%28Prometheus-Alert%29#why-you-dont-need-cpu-limits
> ## Why you don't need CPU limits
>
> As long as your pod has a CPU request, [Kubernetes maintainers like Tim Hockin recommend not using limits at all](https://twitter.com/thockin/status/1134193838841401345). This way pods are free to use spare CPU instead of letting the CPU stay idle.
>
> Contrary to common belief, [even if you remove this pod's CPU limit, other pods are still guaranteed the CPU they requested](https://github.com/kubernetes/design-proposals-archive/blob/8da1442ea29adccea40693357d04727127e045ed/node/resource-qos.md#compressible-resource-guaranteess). The CPU limit only effects how spare CPU is distributed. | closed | 2022-06-23T12:26:39Z | 2022-07-22T13:15:41Z | 2022-07-22T13:15:41Z | severo |
1,282,190,914 | Expose an endpoint with the column types/modalities of each dataset? | It could be used on the Hub to find all the "images" or "audio" datasets.
By the way, the info is normally already in the datasets-info.json (.features) | Expose an endpoint with the column types/modalities of each dataset?: It could be used on the Hub to find all the "images" or "audio" datasets.
By the way, the info is normally already in the datasets-info.json (.features) | closed | 2022-06-23T10:36:01Z | 2022-09-16T17:32:45Z | 2022-09-16T17:32:45Z | severo |
1,282,139,489 | Don't share the cache for the datasets modules | Also: don't redownload every time | Don't share the cache for the datasets modules: Also: don't redownload every time | closed | 2022-06-23T09:57:11Z | 2022-06-23T10:27:49Z | 2022-06-23T10:18:56Z | severo |
1,279,670,404 | URL design | Currently, the API is available at the root, ie: https://datasets-server.huggingface.co/rows?...
This can lead to some issues:
- if we add other services, such as /doc or /search, the API will share the namespace with these other services. This means that we must take care of avoiding collisions between services and endpoints (I think it's OK), and that we cannot simply delegate a subroute to the `api` service (not really an issue either because we "just" have to treat all the other services first in the nginx config, then send the rest to the `api` service)
- version: if we break the API one day, we might want to serve two versions of the API, namely v1 and v2. Notes: 1. it's better not to break the API, 2. if we create a v2 API, we can still namespace it under /v2/, so: not really an issue
Which one do you prefer?
1. https://datasets-server.huggingface.co/ (current)
2. https://datasets-server.huggingface.co/api/
3. https://datasets-server.huggingface.co/api/v1/
| URL design: Currently, the API is available at the root, ie: https://datasets-server.huggingface.co/rows?...
This can lead to some issues:
- if we add other services, such as /doc or /search, the API will share the namespace with these other services. This means that we must take care of avoiding collisions between services and endpoints (I think it's OK), and that we cannot simply delegate a subroute to the `api` service (not really an issue either because we "just" have to treat all the other services first in the nginx config, then send the rest to the `api` service)
- version: if we break the API one day, we might want to serve two versions of the API, namely v1 and v2. Notes: 1. it's better not to break the API, 2. if we create a v2 API, we can still namespace it under /v2/, so: not really an issue
Which one do you prefer?
1. https://datasets-server.huggingface.co/ (current)
2. https://datasets-server.huggingface.co/api/
3. https://datasets-server.huggingface.co/api/v1/
| closed | 2022-06-22T07:13:24Z | 2022-06-28T08:48:02Z | 2022-06-28T08:48:02Z | severo |
1,279,636,405 | Fix stale | I erroneously closed #411 | Fix stale: I erroneously closed #411 | closed | 2022-06-22T06:46:59Z | 2022-06-22T06:48:29Z | 2022-06-22T06:47:39Z | severo |
1,279,577,258 | Fix stale | null | Fix stale: | closed | 2022-06-22T05:44:58Z | 2022-06-22T06:48:29Z | 2022-06-22T06:48:28Z | severo |
1,278,492,268 | Fallback to other image formats if JPEG generation fails | Fix #191 | Fallback to other image formats if JPEG generation fails: Fix #191 | closed | 2022-06-21T13:56:56Z | 2022-06-21T16:24:55Z | 2022-06-21T16:24:54Z | mariosasko |
1,278,491,506 | Revert two commits | null | Revert two commits: | closed | 2022-06-21T13:56:21Z | 2022-06-21T14:57:07Z | 2022-06-21T14:57:07Z | severo |
1,278,394,142 | feat: 🎸 revert docker images to previous state | until https://github.com/huggingface/datasets-server/issues/407 is
implemented. When it is implemented, the migration from stalled to
stale (https://github.com/huggingface/datasets-server/issues/368) will
be done automatically | feat: 🎸 revert docker images to previous state: until https://github.com/huggingface/datasets-server/issues/407 is
implemented. When it is implemented, the migration from stalled to
stale (https://github.com/huggingface/datasets-server/issues/368) will
be done automatically | closed | 2022-06-21T12:43:36Z | 2022-06-21T13:30:31Z | 2022-06-21T13:30:30Z | severo |
1,278,191,414 | Improve the management of database migrations | Ideally, we would keep track of all the migrations inside the database, and run all the pending migrations on every start of any of the services (in reality: when we connect to the database, before starting to use it). Beware: we have to use a lock to avoid multiple migrations running at the same time.
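A minimal sketch of the lock part, using a single document with a fixed `_id` (mongo shell syntax; collection and field names are assumptions):
```js
// only one instance can insert the fixed _id, so the insert acts as a lock
try {
  db.migrationsLock.insertOne({ _id: "lock", acquiredAt: new Date() });
  // we hold the lock: run the pending migrations recorded in the database here
  // ...
  db.migrationsLock.deleteOne({ _id: "lock" });
} catch (e) {
  // duplicate key error: another service instance is already migrating, skip
}
```
A real implementation would also need a way to expire a stale lock if a migration crashes.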
See https://github.com/huggingface/datasets-server/tree/main/libs/libcache/migrations | Improve the management of database migrations: Ideally, we would keep track of all the migrations inside the database, and run all the pending migrations on every start of any of the services (in reality: when we connect to the database, before starting to use it). Beware: we have to use a lock to avoid multiple migrations running at the same time.
See https://github.com/huggingface/datasets-server/tree/main/libs/libcache/migrations | closed | 2022-06-21T09:50:32Z | 2022-11-15T14:25:13Z | 2022-09-19T09:02:46Z | severo |
1,278,149,132 | fix: 🐛 rename "stalled" into "stale" | fixes https://github.com/huggingface/datasets-server/issues/368 | fix: 🐛 rename "stalled" into "stale": fixes https://github.com/huggingface/datasets-server/issues/368 | closed | 2022-06-21T09:17:34Z | 2022-06-21T11:53:07Z | 2022-06-21T11:53:07Z | severo |
1,278,128,914 | feat: 🎸 increase the log verbosity to help debug | To be used in https://kibana.elastic.huggingface.tech | feat: 🎸 increase the log verbosity to help debug: To be used in https://kibana.elastic.huggingface.tech | closed | 2022-06-21T09:01:13Z | 2022-06-21T09:01:32Z | 2022-06-21T09:01:31Z | severo |
1,278,042,095 | Improve the error messages | In a lot of cases, when the dataset viewer has an error, the error message is not clear at all, or exposes internals of the project which are not important for the user, etc.
We should aim at providing information tailored for the Hub user:
- the error comes from the repo: what can they do to fix the error?
- the error comes from the server:
- it's normal, just wait ... minutes before trying again
- it's not normal, report here: ...
| Improve the error messages: In a lot of cases, when the dataset viewer has an error, the error message is not clear at all, or exposes internals of the project which are not important for the user, etc.
We should aim at providing information tailored for the Hub user:
- the error comes from the repo: what can they do to fix the error?
- the error comes from the server:
- it's normal, just wait ... minutes before trying again
- it's not normal, report here: ...
| closed | 2022-06-21T07:54:10Z | 2022-07-21T14:39:54Z | 2022-07-21T14:39:53Z | severo |
1,277,152,700 | The logs are not shown in elastic search | See https://github.com/huggingface/datasets-server/issues/401#issuecomment-1160627942 | The logs are not shown in elastic search: See https://github.com/huggingface/datasets-server/issues/401#issuecomment-1160627942 | closed | 2022-06-20T16:21:52Z | 2022-09-16T17:33:16Z | 2022-09-16T17:33:15Z | severo |
1,276,957,285 | Create a doc | Should the datasets-server doc be a specific item in https://huggingface.co/docs? <strike>Or part of another doc, ie Hub or Datasets?</strike>
See https://github.com/huggingface/doc-builder for the doc builder. | Create a doc: Should the datasets-server doc be a specific item in https://huggingface.co/docs? <strike>Or part of another doc, ie Hub or Datasets?</strike>
See https://github.com/huggingface/doc-builder for the doc builder. | closed | 2022-06-20T13:46:30Z | 2022-06-29T09:21:21Z | 2022-06-29T09:21:21Z | severo |
1,276,672,948 | The dataset does not exist | See https://github.com/huggingface/datasets/issues/4527
https://huggingface.co/datasets/vadis/sv-ident
Obviously, it's wrong since the dataset exists. It surely does not exist in the cache database, which is a bug, maybe due to some failure with the webhook (from moonlanding, or from the datasets server?)
<img width="1113" alt="Capture d’écran 2022-06-20 à 11 57 13" src="https://user-images.githubusercontent.com/1676121/174577295-5b7f0428-3a28-4a89-9949-e86211160875.png">
thanks @albertvillanova for reporting | The dataset does not exist: See https://github.com/huggingface/datasets/issues/4527
https://huggingface.co/datasets/vadis/sv-ident
Obviously, it's wrong since the dataset exists. It surely does not exist in the cache database, which is a bug, maybe due to some failure with the webhook (from moonlanding, or from the datasets server?)
<img width="1113" alt="Capture d’écran 2022-06-20 à 11 57 13" src="https://user-images.githubusercontent.com/1676121/174577295-5b7f0428-3a28-4a89-9949-e86211160875.png">
thanks @albertvillanova for reporting | closed | 2022-06-20T09:58:18Z | 2022-06-21T08:18:08Z | 2022-06-21T08:18:08Z | severo |
1,276,583,890 | Remove secrets if any | null | Remove secrets if any: | closed | 2022-06-20T08:47:58Z | 2022-09-19T10:02:22Z | 2022-09-19T10:02:21Z | severo |
1,276,582,685 | Improve onboarding | - [ ] doc
- [ ] readme
- [ ] install
- [ ] contributing
- [ ] dev environment
- [x] vscode workspace
- [ ] github codespace - see #373
- [ ] build:
- [ ] locally
- [ ] CI
- [ ] run/deploy:
- [ ] locally during development
- [ ] with docker
- [ ] with kubernetes
| Improve onboarding: - [ ] doc
- [ ] readme
- [ ] install
- [ ] contributing
- [ ] dev environment
- [x] vscode workspace
- [ ] github codespace - see #373
- [ ] build:
- [ ] locally
- [ ] CI
- [ ] run/deploy:
- [ ] locally during development
- [ ] with docker
- [ ] with kubernetes
| closed | 2022-06-20T08:46:59Z | 2022-09-19T09:02:07Z | 2022-09-19T09:02:07Z | severo |
1,275,203,002 | Publish a parquet file for every dataset on the Hub | To be able to apply specific processes such as stats or random access, we first need to download the datasets to disk.
Possibly in the parquet format.
One part will be implemented in the `datasets` library, but we also have challenges in the datasets-server project: infrastructure, workers | Publish a parquet file for every dataset on the Hub: To be able to apply specific processes such as stats or random access, we first need to download the datasets to disk.
Possibly in the parquet format.
One part will be implemented in the `datasets` library, but we also have challenges in the datasets-server project: infrastructure, workers | closed | 2022-06-17T15:50:12Z | 2022-12-13T10:41:22Z | 2022-12-13T10:41:22Z | severo |
1,275,200,557 | Define and document the serialization format for the columns and the data | Related to `datasets`.
Currently, `datasets` defines the columns as "features" (in the dataset-info.json file), and the values are native objects, not necessarily dicts.
We need `to_dict` and `from_dict` methods, both for the columns and the data.
This is what's done in https://github.com/huggingface/datasets-server/tree/main/services/worker/src/worker/models/column, but possibly this part should move to the `datasets` library.
e.g. how do we serialize a Timestamp column and value? See https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/models/column/timestamp.py for the way it is currently implemented in datasets-server. | Define and document the serialization format for the columns and the data: Related to `datasets`.
Currently, `datasets` defines the columns as "features" (in the dataset-info.json file), and the values are native objects, not necessarily dicts.
We need `to_dict` and `from_dict` methods, both for the columns and the data.
This is what's done in https://github.com/huggingface/datasets-server/tree/main/services/worker/src/worker/models/column, but possibly this part should move to the `datasets` library.
e.g. how do we serialize a Timestamp column and value? See https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/models/column/timestamp.py for the way it is currently implemented in datasets-server. | closed | 2022-06-17T15:47:34Z | 2022-07-26T14:41:01Z | 2022-07-26T14:41:01Z | severo
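To make the serialization question above more concrete, here is a minimal sketch of what a `to_dict`/`from_dict` pair could look like for a Timestamp column, assuming ISO 8601 strings as the JSON-friendly representation. The function names and the wire format are illustrative assumptions, not the actual datasets-server or `datasets` implementation.

```python
from datetime import datetime
from typing import Any, Dict, Optional


def timestamp_column_to_dict(name: str, unit: str = "s", tz: Optional[str] = None) -> Dict[str, Any]:
    # Serialize the column description; "TIMESTAMP" is an illustrative type tag.
    return {"name": name, "type": "TIMESTAMP", "unit": unit, "tz": tz}


def timestamp_value_to_dict(value: datetime) -> str:
    # Serialize a cell value as an ISO 8601 string, which survives JSON round-trips.
    return value.isoformat()


def timestamp_value_from_dict(value: str) -> datetime:
    # Deserialize the ISO 8601 string back to a datetime.
    return datetime.fromisoformat(value)


print(timestamp_column_to_dict("publication_date"))
print(timestamp_value_to_dict(datetime(2022, 6, 17, 15, 47)))  # 2022-06-17T15:47:00
```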
1,275,190,481 | Compute metrics about datasets similarity | It would be useful to find, for a given dataset, the nearest datasets in terms of their content.
| Compute metrics about datasets similarity: It would be useful to find, for a given dataset, the nearest datasets in terms of their content.
| open | 2022-06-17T15:36:20Z | 2024-06-19T14:01:46Z | null | severo |
1,274,933,071 | Improve how we manage the datasets without rights to be redistributed | see https://github.com/huggingface/datasets-server/issues/12 for the ManualDownloadError.
More generally, all the endpoints should provide coherent information about the datasets that are explicitly not supported by the datasets server due to their license. | Improve how we manage the datasets without rights to be redistributed: see https://github.com/huggingface/datasets-server/issues/12 for the ManualDownloadError.
More generally, all the endpoints should provide coherent information about the datasets that are explicitly not supported by the datasets server due to their license. | closed | 2022-06-17T11:46:12Z | 2022-09-19T09:41:35Z | 2022-09-19T09:41:34Z | severo
1,274,764,534 | Implement API pagination? | Should we add API pagination right now? Maybe useful for the "technical" endpoints like https://datasets-server.huggingface.co/queue-dump-waiting-started or https://datasets-server.huggingface.co/cache-reports
https://simonwillison.net/2021/Jul/1/pagnis/
| Implement API pagination?: Should we add API pagination right now? Maybe useful for the "technical" endpoints like https://datasets-server.huggingface.co/queue-dump-waiting-started or https://datasets-server.huggingface.co/cache-reports
https://simonwillison.net/2021/Jul/1/pagnis/
| closed | 2022-06-17T08:54:41Z | 2022-08-01T19:02:00Z | 2022-08-01T19:02:00Z | severo |
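If pagination is added to these technical endpoints, keyset ("cursor") pagination is a common choice over offsets. A minimal pymongo sketch follows; the collection layout, connection string and page size are assumptions, not the actual cache schema.

```python
from typing import Any, Dict, Optional

from pymongo import ASCENDING, MongoClient


def get_page(collection, after_id: Optional[Any] = None, limit: int = 100) -> Dict[str, Any]:
    # Keyset pagination: filter on the last seen _id instead of skipping rows.
    query = {"_id": {"$gt": after_id}} if after_id is not None else {}
    items = list(collection.find(query).sort("_id", ASCENDING).limit(limit))
    # The cursor to pass back for the next page, or None when the listing is exhausted.
    next_cursor = items[-1]["_id"] if len(items) == limit else None
    return {"items": items, "next_cursor": next_cursor}


client = MongoClient("mongodb://localhost:27017")  # illustrative connection string
page = get_page(client["datasets_server_cache"]["splits"])
```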
1,274,750,159 | Examine the recommendations by Mongo Atlas | The prod database is managed at https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#clusters/detail/datasets-server-prod
It provides recommendations that might be useful to take into account. They are all listed in the Performance Advisor: https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#metrics/replicaSet/6239e8ba6c32bf5c2d888fb5/advisor
Currently:
<img width="1567" alt="Capture d’écran 2022-06-17 à 10 39 13" src="https://user-images.githubusercontent.com/1676121/174261254-7419c2ed-f31d-4f95-8d6a-3037408f9a4b.png">
It proposes to create one index and drop two (see #392).
It also remarks that the [datasets_server_cache.splits](https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#metrics/replicaSet/6239e8ba6c32bf5c2d888fb5/explorer/datasets_server_cache/splits) collection contains documents larger than 2MB, which "can result in excess cache pressure, especially if a small portion of them are being updated or queried.". For the latter, note that we had implemented more granular collections, but then stepped back: see https://github.com/huggingface/datasets-server/pull/202 | Examine the recommendations by Mongo Atlas: The prod database is managed at https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#clusters/detail/datasets-server-prod
It provides recommendations that might be useful to take into account. They are all listed in the Performance Advisor: https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#metrics/replicaSet/6239e8ba6c32bf5c2d888fb5/advisor
Currently:
<img width="1567" alt="Capture d’écran 2022-06-17 à 10 39 13" src="https://user-images.githubusercontent.com/1676121/174261254-7419c2ed-f31d-4f95-8d6a-3037408f9a4b.png">
It proposes to create one index and drop two (see #392).
It also remarks that the [datasets_server_cache.splits](https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#metrics/replicaSet/6239e8ba6c32bf5c2d888fb5/explorer/datasets_server_cache/splits) collection contains documents larger than 2MB, which "can result in excess cache pressure, especially if a small portion of them are being updated or queried.". For the latter, note that we had implemented more granular collections, but then stepped back: see https://github.com/huggingface/datasets-server/pull/202 | closed | 2022-06-17T08:42:45Z | 2022-09-19T09:01:02Z | 2022-09-19T09:01:02Z | severo |
1,274,739,083 | Create and drop indexes following mongodb atlas recommendations | https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#metrics/replicaSet/6239e8ba6c32bf5c2d888fb5/advisor/createIndexes
It recommends creating an index on datasets_server_queue.split_jobs:
```
status: 1
created_at: 1
```
It also recommends dropping three redundant indexes:
- datasets_server_queue.split_jobs
```
dataset_name: 1
config_name: 1
split_name: 1
```
- datasets_server_queue.dataset_jobs
```
dataset_name: 1
```
- datasets_server_cache.splits
```
status: 1
```
Note that I already manually deleted these three indexes through Mongo Atlas, and that they are not defined explicitly in the code:
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L91
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L120-L124
- https://github.com/huggingface/datasets-server/blob/main/libs/libcache/src/libcache/cache.py#L118-L122
But they were created again, which means that in some way `mongorestore` must be the one that creates them automatically. Possibly because they correspond to the primary key:
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L110-L111
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L147-L148
No idea why an index for "status" is created in datasets_server_cache.splits:
- https://github.com/huggingface/datasets-server/blob/main/libs/libcache/src/libcache/cache.py#L92) | Create and drop indexes following mongodb atlas recommendations: https://cloud.mongodb.com/v2/6239e2417155de3d798e9187#metrics/replicaSet/6239e8ba6c32bf5c2d888fb5/advisor/createIndexes
It recommends creating an index on datasets_server_queue.split_jobs:
```
status: 1
created_at: 1
```
It also recommends dropping three redundant indexes:
- datasets_server_queue.split_jobs
```
dataset_name: 1
config_name: 1
split_name: 1
```
- datasets_server_queue.dataset_jobs
```
dataset_name: 1
```
- datasets_server_cache.splits
```
status: 1
```
Note that I already manually deleted these three indexes through Mongo Atlas, and that they are not defined explicitly in the code:
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L91
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L120-L124
- https://github.com/huggingface/datasets-server/blob/main/libs/libcache/src/libcache/cache.py#L118-L122
But they were created again, which means that in some way `mongorestore` must be the one that creates them automatically. Possibly because they correspond to the primary key:
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L110-L111
- https://github.com/huggingface/datasets-server/blob/main/libs/libqueue/src/libqueue/queue.py#L147-L148
No idea why an index for "status" is created in datasets_server_cache.splits:
- https://github.com/huggingface/datasets-server/blob/main/libs/libcache/src/libcache/cache.py#L92) | closed | 2022-06-17T08:31:51Z | 2022-09-19T09:03:11Z | 2022-09-19T09:03:11Z | severo |
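For reference, here is a minimal sketch of how the recommended compound index could be declared explicitly with mongoengine, so that it no longer depends on what `mongorestore` happens to recreate. The field list is trimmed down to illustrate the `meta` syntax only; it is not the real `libqueue` model.

```python
from mongoengine import DateTimeField, Document, StringField


class SplitJob(Document):
    dataset_name = StringField(required=True)
    config_name = StringField(required=True)
    split_name = StringField(required=True)
    status = StringField(required=True)
    created_at = DateTimeField(required=True)

    meta = {
        "collection": "split_jobs",
        "indexes": [
            # compound index suggested by the Performance Advisor: status + created_at
            ("status", "created_at"),
        ],
    }
```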
1,274,721,215 | Create a script that refreshes entries with a specific error | The script https://github.com/huggingface/datasets-server/blob/main/services/admin/src/admin/scripts/refresh_cache.py refreshes all the public HF datasets. It might be useful if we suspect the webhooks to have failed in some way.
But the most common case is when we want to trigger a refresh on a subset of the datasets/splits. It might be: on all the EMPTY or STALLED or ERROR datasets/splits.
Even more, we might want to refresh only datasets/splits that have a specific error message, possibly after a bug fix in datasets-server or in the upstream datasets library.
Currently, I do it by:
- going to https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading,
- selecting the specific error
<img width="442" alt="Capture d’écran 2022-06-17 à 10 13 56" src="https://user-images.githubusercontent.com/1676121/174256844-bf4ba6c2-212f-4f63-bd5a-8596998ece14.png">
- copy/pasting the code to add the datasets to the queue
<img width="1188" alt="Capture d’écran 2022-06-17 à 10 14 06" src="https://user-images.githubusercontent.com/1676121/174256847-14fd23ca-33a7-4738-9291-b30fd0186a41.png">
| Create a script that refreshes entries with a specific error: The script https://github.com/huggingface/datasets-server/blob/main/services/admin/src/admin/scripts/refresh_cache.py refreshes all the public HF datasets. It might be useful if we suspect the webhooks to have failed in some way.
But the most common case is when we want to trigger a refresh on a subset of the datasets/splits. It might be: on all the EMPTY or STALLED or ERROR datasets/splits.
Even more, we might want to refresh only datasets/splits that have a specific error message, possibly after a bug fix in datasets-server or in the upstream datasets library.
Currently, I do it by:
- going to https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading,
- selecting the specific error
<img width="442" alt="Capture d’écran 2022-06-17 à 10 13 56" src="https://user-images.githubusercontent.com/1676121/174256844-bf4ba6c2-212f-4f63-bd5a-8596998ece14.png">
- copy/pasting the code to add the datasets to the queue
<img width="1188" alt="Capture d’écran 2022-06-17 à 10 14 06" src="https://user-images.githubusercontent.com/1676121/174256847-14fd23ca-33a7-4738-9291-b30fd0186a41.png">
| closed | 2022-06-17T08:15:08Z | 2022-09-16T17:35:01Z | 2022-09-16T17:35:01Z | severo |
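A rough sketch of what such an admin script could look like: select the cache entries whose error message matches a pattern, then re-add the corresponding splits to the queue. The database, collection and field names follow the layout mentioned in these issues, but the exact schema and the job-insertion call are assumptions; the real script should go through `libcache`/`libqueue` rather than raw pymongo.

```python
from pymongo import MongoClient

ERROR_PATTERN = "Couldn't find a dataset script"  # illustrative error message filter

client = MongoClient("mongodb://localhost:27017")  # illustrative connection string
cache = client["datasets_server_cache"]
queue = client["datasets_server_queue"]

# Find the split cache entries in error whose message matches the pattern.
stale_splits = cache["splits"].find(
    {"status": "error", "error.message": {"$regex": ERROR_PATTERN}},
    {"dataset_name": 1, "config_name": 1, "split_name": 1},
)

for entry in stale_splits:
    # Re-enqueue a waiting job for each matching split.
    queue["split_jobs"].insert_one(
        {
            "dataset_name": entry["dataset_name"],
            "config_name": entry["config_name"],
            "split_name": entry["split_name"],
            "status": "waiting",
        }
    )
```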
1,274,710,387 | How to best manage the datasets that we cannot process due to RAM? | The dataset worker pod is killed (OOMKilled) for:
```
bigscience/P3
Graphcore/gqa-lxmert
echarlaix/gqa-lxmert
```
and the split worker pod is killed (OOMKilled) for:
```
imthanhlv/binhvq_news21_raw / started / train
openclimatefix/nimrod-uk-1km / sample / train/test/validation
PolyAI/minds14 / zh-CN / train
```
With the current jobs management (https://github.com/huggingface/datasets-server/issues/264) the killed jobs remain marked as "STARTED" in the mongo db. If we "cancel" them with
```
kubectl exec datasets-server-prod-admin-79798989fb-scmjw -- make cancel-started-dataset-jobs
kubectl exec datasets-server-prod-admin-79798989fb-scmjw -- make cancel-started-split-jobs
```
they are re-enqueued with the status "WAITING" until they are processed and killed again.
Possibly we should allow up to 3 attempts, for example, maybe increasing the dedicated RAM (see https://github.com/huggingface/datasets-server/issues/264#issuecomment-1158596143). Even so, we cannot have more RAM than the underlying node (eg: 32 GiB on the current nodes) and some datasets will still fail.
In that case, we should mark them as ERROR with a proper error message. | How to best manage the datasets that we cannot process due to RAM?: The dataset worker pod is killed (OOMKilled) for:
```
bigscience/P3
Graphcore/gqa-lxmert
echarlaix/gqa-lxmert
```
and the split worker pod is killed (OOMKilled) for:
```
imthanhlv/binhvq_news21_raw / started / train
openclimatefix/nimrod-uk-1km / sample / train/test/validation
PolyAI/minds14 / zh-CN / train
```
With the current jobs management (https://github.com/huggingface/datasets-server/issues/264) the killed jobs remain marked as "STARTED" in the mongo db. If we "cancel" them with
```
kubectl exec datasets-server-prod-admin-79798989fb-scmjw -- make cancel-started-dataset-jobs
kubectl exec datasets-server-prod-admin-79798989fb-scmjw -- make cancel-started-split-jobs
```
they are re-enqueued with the status "WAITING" until they are processed and killed again.
Possibly we should allow up to 3 attempts, for example, maybe increasing the dedicated RAM (see https://github.com/huggingface/datasets-server/issues/264#issuecomment-1158596143). Even so, we cannot have more RAM than the underlying node (eg: 32 GiB on the current nodes) and some datasets will still fail.
In that case, we should mark them as ERROR with a proper error message. | closed | 2022-06-17T08:04:45Z | 2022-09-19T09:42:36Z | 2022-09-19T09:42:36Z | severo |
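A minimal sketch of the "up to 3 attempts, then error out" policy discussed above; the job shape and the error message are illustrative, and the real implementation would live in `libqueue` rather than in a standalone function.

```python
MAX_ATTEMPTS = 3


def handle_killed_job(job: dict) -> dict:
    # Called when a started job is detected as killed (e.g. the worker pod was OOMKilled).
    job["attempts"] = job.get("attempts", 0) + 1
    if job["attempts"] < MAX_ATTEMPTS:
        # Give it another chance, possibly on a worker with more dedicated RAM.
        job["status"] = "waiting"
    else:
        # Out of attempts: surface a proper error instead of looping forever.
        job["status"] = "error"
        job["error_message"] = "The job was killed several times (out of memory?); giving up."
    return job


print(handle_killed_job({"dataset_name": "bigscience/P3", "attempts": 2}))
```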
1,274,084,416 | A simple repository with a CSV generates an error | See https://huggingface.slack.com/archives/C0311GZ7R6K/p1655411775639289
https://huggingface.co/datasets/osanseviero/top-hits-spotify/tree/main
<img width="1535" alt="Capture d’écran 2022-06-16 à 23 09 10" src="https://user-images.githubusercontent.com/1676121/174163986-8e355d08-82f3-4dc5-9f62-602cb87f5a87.png">
<img width="423" alt="Capture d’écran 2022-06-16 à 23 09 00" src="https://user-images.githubusercontent.com/1676121/174163992-8904e2f6-05f8-4b93-ba98-31d1c87a9808.png">
```
Message: Couldn't find a dataset script at /src/services/worker/osanseviero/top-hits-spotify/top-hits-spotify.py or any data file in the same directory. Couldn't find 'osanseviero/top-hits-spotify' on the Hugging Face Hub either: FileNotFoundError: The dataset repository at 'osanseviero/top-hits-spotify' doesn't contain any data file
``` | A simple repository with a CSV generates an error: See https://huggingface.slack.com/archives/C0311GZ7R6K/p1655411775639289
https://huggingface.co/datasets/osanseviero/top-hits-spotify/tree/main
<img width="1535" alt="Capture d’écran 2022-06-16 à 23 09 10" src="https://user-images.githubusercontent.com/1676121/174163986-8e355d08-82f3-4dc5-9f62-602cb87f5a87.png">
<img width="423" alt="Capture d’écran 2022-06-16 à 23 09 00" src="https://user-images.githubusercontent.com/1676121/174163992-8904e2f6-05f8-4b93-ba98-31d1c87a9808.png">
```
Message: Couldn't find a dataset script at /src/services/worker/osanseviero/top-hits-spotify/top-hits-spotify.py or any data file in the same directory. Couldn't find 'osanseviero/top-hits-spotify' on the Hugging Face Hub either: FileNotFoundError: The dataset repository at 'osanseviero/top-hits-spotify' doesn't contain any data file
``` | closed | 2022-06-16T21:09:27Z | 2022-06-20T16:27:24Z | 2022-06-20T16:27:12Z | severo |
1,274,010,979 | what happened to the pods? | ```
$ k get pods -w
...
datasets-server-prod-datasets-worker-776b774978-g7mpk 1/1 Evicted 0 73m │DEBUG: 2022-06-16 18:42:46,966 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pending 0 1s │DEBUG: 2022-06-16 18:42:47,011 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e43 for split 'test' from dataset 'luozhou
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pending 0 1s │yang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 OutOfmemory 0 1s │INFO: 2022-06-16 18:42:47,012 - datasets_server.worker - compute split 'test' from dataset 'luozhouyang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.85MB/s]
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 Pending 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.43MB/s]
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 OutOfmemory 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 5.07MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 Pending 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.18MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.52MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 OutOfmemory 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.76MB/s]
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 Pending 0 0s │Downloading and preparing dataset dureader/robust (download: 19.57 MiB, generated: 57.84 MiB, post-processed: Unknown size, total: 77.4
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 Pending 0 0s │1 MiB) to /cache/datasets/luozhouyang___dureader/robust/1.0.0/bdab4855e88c197f2297db78cfc86259fb874c2b977134bbe80d3af8616f33b1...
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 OutOfmemory 0 0s │Downloading data: 1%| | 163k/20.5M [01:45<3:40:25, 1.54kB/s]
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 Pending 0 0s │DEBUG: 2022-06-16 18:44:44,235 - datasets_server.worker - job finished with error: 62ab6804a502851c834d7e43 for split 'test' from datas
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 Pending 0 0s │et 'luozhouyang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 OutOfmemory 0 0s │DEBUG: 2022-06-16 18:44:44,236 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 Pending 0 0s │DEBUG: 2022-06-16 18:44:44,281 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e45 for split 'test' from dataset 'opencli
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 Pending 0 0s │matefix/nimrod-uk-1km' with config 'sample'
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 OutOfmemory 0 0s │INFO: 2022-06-16 18:44:44,281 - datasets_server.worker - compute split 'test' from dataset 'openclimatefix/nimrod-uk-1km' with config '
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 Pending 0 0s │sample'
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:00, 6.04MB/s]
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 OutOfmemory 0 1s │Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:00, 7.65MB/s]
datasets-server-prod-datasets-worker-776b774978-x7xzb 0/1 Pending 0 0s │2022-06-16 18:44:46.305062: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication b
datasets-server-prod-datasets-worker-776b774978-x7xzb 0/1 Pending 0 0s │earer token failed, returning an empty token. Retrieving token from files failed with "NOT_FOUND: Could not locate the credentials file
datasets-server-prod-datasets-worker-776b774978-x7xzb 0/1 OutOfmemory 0 0s │.". Retrieving token from GCE failed with "FAILED_PRECONDITION: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resol
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Pending 0 0s │ve host name', error details: Could not resolve host: metadata".
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Pending 0 0s │2022-06-16 18:44:46.389820: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Init:0/3 0 0s │1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
datasets-server-prod-datasets-worker-776b774978-g7mpk 0/1 Error 0 73m │2022-06-16 18:44:46.389865: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Init:1/3 0 3s │2022-06-16 18:44:46.390005: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on t
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Init:2/3 0 4s
``` | what happened to the pods?: ```
$ k get pods -w
...
datasets-server-prod-datasets-worker-776b774978-g7mpk 1/1 Evicted 0 73m │DEBUG: 2022-06-16 18:42:46,966 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pending 0 1s │DEBUG: 2022-06-16 18:42:47,011 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e43 for split 'test' from dataset 'luozhou
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pending 0 1s │yang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 OutOfmemory 0 1s │INFO: 2022-06-16 18:42:47,012 - datasets_server.worker - compute split 'test' from dataset 'luozhouyang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.85MB/s]
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 Pending 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.43MB/s]
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 OutOfmemory 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 5.07MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 Pending 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.18MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.52MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 OutOfmemory 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.76MB/s]
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 Pending 0 0s │Downloading and preparing dataset dureader/robust (download: 19.57 MiB, generated: 57.84 MiB, post-processed: Unknown size, total: 77.4
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 Pending 0 0s │1 MiB) to /cache/datasets/luozhouyang___dureader/robust/1.0.0/bdab4855e88c197f2297db78cfc86259fb874c2b977134bbe80d3af8616f33b1...
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 OutOfmemory 0 0s │Downloading data: 1%| | 163k/20.5M [01:45<3:40:25, 1.54kB/s]
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 Pending 0 0s │DEBUG: 2022-06-16 18:44:44,235 - datasets_server.worker - job finished with error: 62ab6804a502851c834d7e43 for split 'test' from datas
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 Pending 0 0s │et 'luozhouyang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 OutOfmemory 0 0s │DEBUG: 2022-06-16 18:44:44,236 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 Pending 0 0s │DEBUG: 2022-06-16 18:44:44,281 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e45 for split 'test' from dataset 'opencli
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 Pending 0 0s │matefix/nimrod-uk-1km' with config 'sample'
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 OutOfmemory 0 0s │INFO: 2022-06-16 18:44:44,281 - datasets_server.worker - compute split 'test' from dataset 'openclimatefix/nimrod-uk-1km' with config '
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 Pending 0 0s │sample'
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:00, 6.04MB/s]
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 OutOfmemory 0 1s │Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:00, 7.65MB/s]
datasets-server-prod-datasets-worker-776b774978-x7xzb 0/1 Pending 0 0s │2022-06-16 18:44:46.305062: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication b
datasets-server-prod-datasets-worker-776b774978-x7xzb 0/1 Pending 0 0s │earer token failed, returning an empty token. Retrieving token from files failed with "NOT_FOUND: Could not locate the credentials file
datasets-server-prod-datasets-worker-776b774978-x7xzb 0/1 OutOfmemory 0 0s │.". Retrieving token from GCE failed with "FAILED_PRECONDITION: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resol
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Pending 0 0s │ve host name', error details: Could not resolve host: metadata".
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Pending 0 0s │2022-06-16 18:44:46.389820: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Init:0/3 0 0s │1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
datasets-server-prod-datasets-worker-776b774978-g7mpk 0/1 Error 0 73m │2022-06-16 18:44:46.389865: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Init:1/3 0 3s │2022-06-16 18:44:46.390005: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on t
datasets-server-prod-datasets-worker-776b774978-m5dqs 0/1 Init:2/3 0 4s
``` | closed | 2022-06-16T19:46:00Z | 2022-06-17T07:48:20Z | 2022-06-17T07:45:53Z | severo |
1,273,919,314 | two jobs picked at the same time? | ```
INFO: 2022-06-16 18:07:12,662 - datasets_server.worker - compute dataset 'classla/hr500k'
Downloading builder script: 100%|██████████| 13.4k/13.4k [00:00<00:00, 9.64MB/s]
started job DatasetJob[classla/hr500k] has a not the STARTED status (success). Force finishing anyway.
started job DatasetJob[classla/hr500k] has a non-empty finished_at field. Force finishing anyway.
CRITICAL: 2022-06-16 18:07:14,319 - datasets_server.worker - quit due to an uncaught error while processing the job: 2 or more items returned, instead of 1
Traceback (most recent call last):
File "/src/services/worker/src/worker/main.py", line 200, in <module>
loop()
File "/src/services/worker/src/worker/main.py", line 185, in loop
if has_resources() and process_next_job():
File "/src/services/worker/src/worker/main.py", line 137, in process_next_job
return process_next_dataset_job()
File "/src/services/worker/src/worker/main.py", line 63, in process_next_dataset_job
add_split_job(
File "/src/services/worker/.venv/lib/python3.9/site-packages/libqueue/queue.py", line 193, in add_split_job
add_job(
File "/src/services/worker/.venv/lib/python3.9/site-packages/libqueue/queue.py", line 179, in add_job
existing_jobs.filter(status__in=[Status.WAITING, Status.STARTED]).get()
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 281, in get
raise queryset._document.MultipleObjectsReturned(
libqueue.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1
make: *** [Makefile:19: run] Error 1
``` | two jobs picked at the same time?: ```
INFO: 2022-06-16 18:07:12,662 - datasets_server.worker - compute dataset 'classla/hr500k'
Downloading builder script: 100%|██████████| 13.4k/13.4k [00:00<00:00, 9.64MB/s]
started job DatasetJob[classla/hr500k] has a not the STARTED status (success). Force finishing anyway.
started job DatasetJob[classla/hr500k] has a non-empty finished_at field. Force finishing anyway.
CRITICAL: 2022-06-16 18:07:14,319 - datasets_server.worker - quit due to an uncaught error while processing the job: 2 or more items returned, instead of 1
Traceback (most recent call last):
File "/src/services/worker/src/worker/main.py", line 200, in <module>
loop()
File "/src/services/worker/src/worker/main.py", line 185, in loop
if has_resources() and process_next_job():
File "/src/services/worker/src/worker/main.py", line 137, in process_next_job
return process_next_dataset_job()
File "/src/services/worker/src/worker/main.py", line 63, in process_next_dataset_job
add_split_job(
File "/src/services/worker/.venv/lib/python3.9/site-packages/libqueue/queue.py", line 193, in add_split_job
add_job(
File "/src/services/worker/.venv/lib/python3.9/site-packages/libqueue/queue.py", line 179, in add_job
existing_jobs.filter(status__in=[Status.WAITING, Status.STARTED]).get()
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 281, in get
raise queryset._document.MultipleObjectsReturned(
libqueue.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1
make: *** [Makefile:19: run] Error 1
``` | closed | 2022-06-16T18:09:45Z | 2022-09-16T17:37:06Z | 2022-09-16T17:37:06Z | severo |
1,273,917,683 | duplicate keys in the mongo database | ```
mongoengine.errors.NotUniqueError: Tried to save duplicate unique keys (E11000 duplicate key error collection: datasets_server_cache.splits index: dataset_name_1_config_name_1_split_name_1 dup key: { dataset_name: "csebuetnlp/xlsum", config_name: "chinese_traditional", split_name: "test" }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'dataset_name': 1, 'config_name': 1, 'split_name': 1}, 'keyValue': {'dataset_name': 'csebuetnlp/xlsum', 'config_name': 'chinese_traditional', 'split_name': 'test'}, 'errmsg': 'E11000 duplicate key error collection: datasets_server_cache.splits index: dataset_name_1_config_name_1_split_name_1 dup key: { dataset_name: "csebuetnlp/xlsum", config_name: "chinese_traditional", split_name: "test" }'})
``` | duplicate keys in the mongo database: ```
mongoengine.errors.NotUniqueError: Tried to save duplicate unique keys (E11000 duplicate key error collection: datasets_server_cache.splits index: dataset_name_1_config_name_1_split_name_1 dup key: { dataset_name: "csebuetnlp/xlsum", config_name: "chinese_traditional", split_name: "test" }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'dataset_name': 1, 'config_name': 1, 'split_name': 1}, 'keyValue': {'dataset_name': 'csebuetnlp/xlsum', 'config_name': 'chinese_traditional', 'split_name': 'test'}, 'errmsg': 'E11000 duplicate key error collection: datasets_server_cache.splits index: dataset_name_1_config_name_1_split_name_1 dup key: { dataset_name: "csebuetnlp/xlsum", config_name: "chinese_traditional", split_name: "test" }'})
``` | closed | 2022-06-16T18:08:16Z | 2022-09-16T17:37:10Z | 2022-09-16T17:37:10Z | severo |
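The usual way to avoid this kind of duplicate-key failure is to write with an atomic upsert on the unique key instead of a create-then-save sequence. Below is a minimal mongoengine sketch with a trimmed-down model that only illustrates the pattern; it is not the real `libcache` document.

```python
from mongoengine import Document, StringField, connect


class Split(Document):
    dataset_name = StringField(required=True)
    config_name = StringField(required=True)
    split_name = StringField(required=True)
    status = StringField(default="valid")

    meta = {
        "collection": "splits",
        "indexes": [{"fields": ["dataset_name", "config_name", "split_name"], "unique": True}],
    }


connect("datasets_server_cache", host="mongodb://localhost:27017")  # illustrative connection

# Atomic upsert keyed on the unique index: no race between "check" and "insert".
Split.objects(
    dataset_name="csebuetnlp/xlsum", config_name="chinese_traditional", split_name="test"
).update_one(upsert=True, set__status="valid")
```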
1,273,881,870 | feat: use new cache locations (to have empty ones) | null | feat: use new cache locations (to have empty ones): | closed | 2022-06-16T17:35:23Z | 2022-06-16T17:39:47Z | 2022-06-16T17:39:46Z | severo |
1,273,758,971 | write rights on storage | I get errors with the split worker:
```
PermissionError: [Errno 13] Permission denied: '/assets/nateraw/huggingpics-data-2/--/nateraw--huggingpics-data-2/train/0/image/image.jpg'
```
| write rights on storage: I get errors with the split worker:
```
PermissionError: [Errno 13] Permission denied: '/assets/nateraw/huggingpics-data-2/--/nateraw--huggingpics-data-2/train/0/image/image.jpg'
```
| closed | 2022-06-16T15:52:32Z | 2022-09-16T17:37:24Z | 2022-09-16T17:37:23Z | severo |
1,273,701,235 | feat: 🎸 adjust the prod resources | see https://github.com/huggingface/infra/pull/239/files. | feat: 🎸 adjust the prod resources: see https://github.com/huggingface/infra/pull/239/files. | closed | 2022-06-16T15:12:56Z | 2022-06-16T15:41:11Z | 2022-06-16T15:41:11Z | severo |
1,273,550,477 | don't chmod the storage on every pod start | It takes too long and is not needed: we don't want to do it at every start.
https://github.com/huggingface/datasets-server/blob/main/infra/charts/datasets-server/templates/_initContainerCache.tpl
| don't chmod the storage on every pod start: It takes too long and is not needed: we don't want to do it at every start.
https://github.com/huggingface/datasets-server/blob/main/infra/charts/datasets-server/templates/_initContainerCache.tpl
| closed | 2022-06-16T13:14:45Z | 2022-09-16T17:37:29Z | 2022-09-16T17:37:29Z | severo |
1,273,317,155 | feat: 🎸 upgrade datasets (and dependencies) | see https://github.com/huggingface/datasets/releases/tag/2.3.2 | feat: 🎸 upgrade datasets (and dependencies): see https://github.com/huggingface/datasets/releases/tag/2.3.2 | closed | 2022-06-16T09:43:44Z | 2022-06-16T12:36:06Z | 2022-06-16T12:36:06Z | severo |
1,273,215,482 | Making repo private and public again makes dataset preview unavailable | I get a Server error "Unauthorized"
Slack discussion https://huggingface.slack.com/archives/C0311GZ7R6K/p1655366487928519 | Making repo private and public again makes dataset preview unavailable: I get a Server error "Unauthorized"
Slack discussion https://huggingface.slack.com/archives/C0311GZ7R6K/p1655366487928519 | closed | 2022-06-16T08:14:30Z | 2022-10-25T17:42:08Z | 2022-10-25T17:42:08Z | osanseviero |
1,272,398,413 | Ensure the code coverage is reported as expected to codecov | See https://codecov.io/github/huggingface/datasets-preview-backend/ | Ensure the code coverage is reported as expected to codecov: See https://codecov.io/github/huggingface/datasets-preview-backend/ | closed | 2022-06-15T15:23:41Z | 2022-09-19T09:04:08Z | 2022-09-19T09:04:08Z | severo |
1,272,395,266 | Use fixtures in unit tests | Don't use real HF datasets in the unit tests of the services anymore; use fixtures instead. Move them to the e2e tests when possible | Use fixtures in unit tests: Don't use real HF datasets in the unit tests of the services anymore; use fixtures instead. Move them to the e2e tests when possible | closed | 2022-06-15T15:21:13Z | 2022-08-24T16:26:43Z | 2022-08-24T16:26:42Z | severo
1,272,269,916 | fix: 🐛 fix the log name | null | fix: 🐛 fix the log name: | closed | 2022-06-15T13:50:15Z | 2022-06-15T13:50:22Z | 2022-06-15T13:50:21Z | severo |
1,272,203,037 | Provide statistics on a split column | - continuous:
- distribution / histogram
- mean
- median
- standard deviation
- range (min/max)
- discrete:
- list of values (with frequency) if not too many
- number of unique values | Provide statistics on a split column: - continuous:
- distribution / histogram
- mean
- median
- standard deviation
- range (min/max)
- discrete:
- list of values (with frequency) if not too many
- number of unique values | closed | 2022-06-15T13:01:54Z | 2023-08-11T12:26:16Z | 2023-08-11T12:26:15Z | severo |
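A minimal sketch (using pandas) of the per-column statistics listed above, for a single split loaded in memory. The continuous/discrete split, the number of histogram bins and the "not too many values" threshold are illustrative assumptions.

```python
from typing import Any, Dict

import pandas as pd


def column_statistics(series: pd.Series, max_unique: int = 20) -> Dict[str, Any]:
    if pd.api.types.is_numeric_dtype(series):
        values = series.dropna()
        return {
            "type": "continuous",
            "histogram": pd.cut(values, bins=10).value_counts(sort=False).tolist(),
            "mean": float(values.mean()),
            "median": float(values.median()),
            "std": float(values.std()),
            "min": float(values.min()),
            "max": float(values.max()),
        }
    counts = series.value_counts()
    return {
        "type": "discrete",
        "n_unique": int(series.nunique()),
        # Only list the values (with their frequency) if there are not too many of them.
        "values": counts.to_dict() if len(counts) <= max_unique else None,
    }


print(column_statistics(pd.Series([1, 2, 2, 3, 10])))
print(column_statistics(pd.Series(["cat", "dog", "cat"])))
```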
1,271,804,269 | feat: 🎸 upgrade datasets to 2.3.1 | null | feat: 🎸 upgrade datasets to 2.3.1: | closed | 2022-06-15T07:38:48Z | 2022-06-15T13:52:15Z | 2022-06-15T13:44:00Z | severo |
1,270,995,864 | Add timestamp type | Replaces #371, which I accidentally closed instead of merging | Add timestamp type: Replaces #371, which I accidentally closed instead of merging | closed | 2022-06-14T15:37:58Z | 2022-06-14T15:38:09Z | 2022-06-14T15:38:08Z | severo
1,270,912,382 | Add support for building GitHub Codespace dev environment | Add support for building a GitHub Codespace dev environment (as it was done for the [moon landing](https://github.com/huggingface/moon-landing/pull/3188) project) to make it easier to contribute to the project. | Add support for building GitHub Codespace dev environment: Add support for building a GitHub Codespace dev environment (as it was done for the [moon landing](https://github.com/huggingface/moon-landing/pull/3188) project) to make it easier to contribute to the project. | closed | 2022-06-14T14:37:58Z | 2022-09-19T09:05:26Z | 2022-09-19T09:05:25Z | mariosasko |
1,269,422,159 | Fix dockerfiles | null | Fix dockerfiles: | closed | 2022-06-13T13:15:03Z | 2022-06-13T16:26:16Z | 2022-06-13T16:26:15Z | severo |
1,267,408,267 | Add Timestamp to the list of supported types | See https://github.com/huggingface/datasets/issues/4413 and
https://github.com/huggingface/datasets-server/issues/86#issuecomment-1152253277 | Add Timestamp to the list of supported types: See https://github.com/huggingface/datasets/issues/4413 and
https://github.com/huggingface/datasets-server/issues/86#issuecomment-1152253277 | closed | 2022-06-10T11:17:24Z | 2022-06-14T15:38:36Z | 2022-06-14T15:37:09Z | severo |
1,267,330,362 | docs: ✏️ add doc about k8 | null | docs: ✏️ add doc about k8: | closed | 2022-06-10T10:01:00Z | 2022-06-10T10:01:42Z | 2022-06-10T10:01:05Z | severo |
1,267,318,913 | reverse-proxy is not reloaded if nginx template is modified | If the [nginx template config](https://github.com/huggingface/datasets-server/blob/e64150b2a8e5b21cc1c01dd04bb4e397ffb25ab3/infra/charts/datasets-server/nginx-templates/default.conf.template) is modified, it is correctly mounted in the pod; you can check with:
```
$ k exec -it datasets-server-prod-reverse-proxy-64478c4f6b-fhjsx -- sh
# more /etc/nginx/templates/default.conf.template
```
But *I don't think* the nginx process is reloaded, and I don't see a way to easily check if it is or not (https://serverfault.com/a/361465/363977 is too hardcore, and gdb is not installed on the pod) | reverse-proxy is not reloaded if nginx template is modified: If the [nginx template config](https://github.com/huggingface/datasets-server/blob/e64150b2a8e5b21cc1c01dd04bb4e397ffb25ab3/infra/charts/datasets-server/nginx-templates/default.conf.template) is modified, it is correctly mounted in the pod; you can check with:
```
$ k exec -it datasets-server-prod-reverse-proxy-64478c4f6b-fhjsx -- sh
# more /etc/nginx/templates/default.conf.template
```
But *I don't think* the nginx process is reloaded, and I don't see a way to easily check if it is or not (https://serverfault.com/a/361465/363977 is too hardcore, and gdb is not installed on the pod) | closed | 2022-06-10T09:50:03Z | 2022-09-19T09:06:24Z | 2022-09-19T09:06:23Z | severo |
1,267,293,209 | fix stalled / stale | I think I've used both stalled and stale. The correct term is "stale" cache entries. | fix stalled / stale: I think I've used both stalled and stale. The correct term is "stale" cache entries. | closed | 2022-06-10T09:26:20Z | 2022-06-22T06:53:49Z | 2022-06-22T06:48:29Z | severo |
1,267,278,585 | e2e tests: ensure the infrastructure is ready before launching the tests | See https://github.com/huggingface/datasets-server/pull/366#issuecomment-1152150929
> Possibly we have to improve the control of the startup with docker compose: https://docs.docker.com/compose/startup-order/.
> And we should not start the tests before all the infrastructure is ready (db, API, workers) | e2e tests: ensure the infrastructure is ready before launching the tests: See https://github.com/huggingface/datasets-server/pull/366#issuecomment-1152150929
> Possibly we have to improve the control of the startup with docker compose: https://docs.docker.com/compose/startup-order/.
> And we should not start the tests before all the infrastructure is ready (db, API, workers) | closed | 2022-06-10T09:13:04Z | 2022-07-28T17:41:06Z | 2022-07-28T17:41:06Z | severo |
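Besides docker compose's own startup-order controls, a small polling helper could gate the e2e tests until every service answers. The URLs, endpoint path and timeout below are assumptions about the local docker compose setup, not the actual configuration.

```python
import time

import requests

SERVICES = [
    "http://localhost:8000/healthcheck",  # api (assumed port and path)
    "http://localhost:8081/healthcheck",  # admin (assumed port and path)
]


def is_up(url: str) -> bool:
    try:
        return requests.get(url, timeout=2).status_code == 200
    except requests.RequestException:
        return False


def wait_for_stack(timeout: float = 60.0, interval: float = 1.0) -> None:
    # Poll every service until all respond, or fail loudly after the timeout.
    deadline = time.monotonic() + timeout
    pending = list(SERVICES)
    while pending and time.monotonic() < deadline:
        pending = [url for url in pending if not is_up(url)]
        if pending:
            time.sleep(interval)
    if pending:
        raise RuntimeError(f"services still not ready: {pending}")


if __name__ == "__main__":
    wait_for_stack()
```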