html_url (string, 47-49 chars) | title (string, 4-111 chars) | comments (string, 71-20.4k chars) | body (string, 0-12.9k chars, nullable) | comment_length_in_words (int64, 16-1.61k) | text (string, 100-20.5k chars) |
---|---|---|---|---|---|
https://github.com/huggingface/datasets/pull/2329 | Add cache dir for in-memory datasets | Hi,
I'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR. | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | 29 | text: Add cache dir for in-memory datasets
Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322
Hi,
I'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR. |
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | For optimal storage, it would be better to:
- store only the audio file path in the cache Arrow file
- perform decoding of the audio file (into audio array and sample rate) on the fly, while loading the dataset from cache (or by adding a convenient `load_audio` function) | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 49 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
For optimal storage, it would be better to:
- store only the audio file path in the cache Arrow file
- perform decoding of the audio file (into audio array and sample rate) on the fly, while loading the dataset from cache (or by adding a convenient `load_audio` function) |
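As a concrete illustration of the storage/decoding split described above, here is a minimal sketch. It assumes a hypothetical `audio_path` column and is not the library's actual implementation:

```python
# Minimal sketch: keep only the file path in the Arrow cache and decode the file
# into (array, sampling rate) lazily, when the example is accessed.
# `audio_path` is an assumed column name, not part of the real feature.
import soundfile as sf  # requires `pip install soundfile` (and libsndfile on Linux)

def load_audio(example):
    """Decode an audio file path into an array and its sampling rate."""
    array, sampling_rate = sf.read(example["audio_path"])
    return {"array": array, "sampling_rate": sampling_rate}
```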
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | Just one step before having a first running example to benchmark.
Decision to make: how to call the function `dataset.features.decode_example`:
- The usual approach until now in speech applications: call it in a subsequent `.map` function
- Pros: multiprocessing can be used out of the box
- Cons: large disk storage required for caching decoded audio files, although having it cached will enhance speed for further usage
- Approach suggested by @lhoestq (see above: https://github.com/huggingface/datasets/pull/2324#discussion_r660758683): doing it in formatting
- Pros: no large disk storage required, as it will be done on the fly while iterating on the dataset
- Cons: it is not cached; need to implement multiprocessing for this case
- Other pros/cons for the previous options?
- Other options?
cc: @lhoestq @patrickvonplaten @anton-l | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 126 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
Just one step before having a first running example to benchmark.
Decision to make: how to call the function `dataset.features.decode_example`:
- The usual approach until now in speech applications: call it in a subsequent `.map` function
- Pros: multiprocessing can be used out of the box
- Cons: large disk storage required for caching decoded audio files, although having it cached will enhance speed for further usage
- Approach suggested by @lhoestq (see above: https://github.com/huggingface/datasets/pull/2324#discussion_r660758683): doing it in formatting
- Pros: no large disk storage required, as it will be done on the fly while iterating on the dataset
- Cons: it is not cached; need to implement multiprocessing for this case
- Other pros/cons for the previous options?
- Other options?
cc: @lhoestq @patrickvonplaten @anton-l |
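For reference, the first option above (decoding in a subsequent `.map`) roughly looks like the sketch below. The dataset name and column names are placeholders, not the final API:

```python
# Sketch of option 1: decode with `.map`, which caches the result on disk and
# supports multiprocessing out of the box, at the cost of extra storage.
import soundfile as sf
from datasets import load_dataset

ds = load_dataset("some_speech_dataset", split="train")  # placeholder dataset name

def decode(example):
    array, sampling_rate = sf.read(example["path"])  # "path" column is assumed
    return {"speech": array, "sampling_rate": sampling_rate}

ds = ds.map(decode, num_proc=4)  # multiprocessing works out of the box
```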
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | @albertvillanova I'm in two minds about this, to be honest. For example, if we consider CommonVoice, which is encoded in lossy `mp3`:
- If we decompress `mp3` into raw `wav` arrays, loading a batch will speed up by about 40x.
- However, a 60 GB English mp3 dataset will blow up to about 600 GB raw (iirc), which is why loading on the fly (optionally?) could be very beneficial as well. | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 66 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
@albertvillanova I'm in two minds about this, to be honest. For example, if we consider CommonVoice, which is encoded in lossy `mp3`:
- If we decompress `mp3` into raw `wav` arrays, loading a batch will speed up by about 40x.
- However, a 60 GB English mp3 dataset will blow up to about 600 GB raw (iirc), which is why loading on the fly (optionally?) could be very beneficial as well. |
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | Users can do the conversion from mp3 to wav themselves if they want to, using `map`.
IMO it's better if we can keep the decoding part of the feature minimal, so that it is both easy to understand and flexible, i.e. just having the on-the-fly decoding of the audio data (with the sampling rate parameter).
Decompressing from mp3 to wav sounds like an optimization that depends on the problem the user wants to solve, the constraints of their environment (disk space, IO speed), and other parameters (optimal training speed, for example). Therefore I would leave it to the user to decide whether to do it or not.
Let me know what you think about this. | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 117 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
Users can do the conversion from mp3 to wav themselves if they want to, using `map`.
IMO it's better if we can keep the decoding part of the feature minimal, so that it is both easy to understand and flexible, i.e. just having the on-the-fly decoding of the audio data (with the sampling rate parameter).
Decompressing from mp3 to wav sounds like an optimization that depends on the problem the user wants to solve, the constraints of their environment (disk space, IO speed), and other parameters (optimal training speed, for example). Therefore I would leave it to the user to decide whether to do it or not.
Let me know what you think about this. |
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | @albertvillanova, in my opinion the pros strongly outweigh the cons in @lhoestq's suggestion, which is why I think we should go forward with it.
The cons:
- "the operation won't be cached" is not too important, as the user will most likely access just a couple of audio arrays to see what they look like, and for the "full" feature extraction they will make use of `.map(...)` anyway, which means that the result will be cached.
- Regarding multiprocessing: if I understand correctly, it follows the same logic here -> the user will only access some audio arrays for testing and playing around with the model, but will use `.map(...)` for larger operations, where multiprocessing would still work as before.
The advantages mostly solve the main pain points, namely:
- exploding disk space
- bad user experience, since the audio is not loaded on the fly
=> So I'm very much in favor of the "direct-access" feature | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 158 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
@albertvillanova, in my opinion the pros strongly outweigh the cons in @lhoestq's suggestion, which is why I think we should go forward with it.
The cons:
- "the operation won't be cached" is not too important, as the user will most likely access just a couple of audio arrays to see what they look like, and for the "full" feature extraction they will make use of `.map(...)` anyway, which means that the result will be cached.
- Regarding multiprocessing: if I understand correctly, it follows the same logic here -> the user will only access some audio arrays for testing and playing around with the model, but will use `.map(...)` for larger operations, where multiprocessing would still work as before.
The advantages mostly solve the main pain points, namely:
- exploding disk space
- bad user experience, since the audio is not loaded on the fly
=> So I'm very much in favor of the "direct-access" feature |
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | Update: I've taken up this issue again.
If the decoding logic is implemented when "examples are accessed", then if we afterwards use `.map`, it tries to apply the decoding twice (as `map` iterates over the examples, thus "accessing" them, before applying the map function)...
I'm thinking of some other approach... | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 51 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
Update: I've taken up this issue again.
If the decoding logic is implemented when "examples are accessed", then if we afterwards use `.map`, it tries to apply the decoding twice (as `map` iterates over the examples, thus "accessing" them, before applying the map function)...
I'm thinking of some other approach... |
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | I have reimplemented the previous approach, so that we can discuss it: examples are decoded when accessed. | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 18 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
I have reimplemented the previous approach, so that we can discuss it: examples are decoded when accessed. |
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | What about creating a new specific formatting, just for decoding? This would be only active within a context manager. | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 19 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
What about creating a new specific formatting, just for decoding? This would be only active within a context manager. |
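A rough sketch of how that context-manager idea could read from the user side; the `"audio"` formatting type is purely hypothetical here:

```python
# Hypothetical usage sketch: decoding would only be active inside the `with` block,
# so `.map` and other operations outside of it would not trigger it.
from datasets import load_dataset

ds = load_dataset("some_speech_dataset", split="train")  # placeholder dataset name
with ds.formatted_as(type="audio"):  # the "audio" format type is an assumption
    first_array = ds[0]["audio"]["array"]
```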
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | Hi @lhoestq, as we discussed, I've followed your suggestion of implementing the decoding step within the formatting logic: extract-decode-format. Feel free to tell me what you think.
@patrickvonplaten and @anton-l, could you have a look at the use case in the test (https://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R34-R50) and tell me if this is aligned with your needs? Thanks. | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 54 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
Hi @lhoestq, as we discussed, I've followed your suggestion of implementing the decoding step within the formatting logic: extract-decode-format. Feel free to tell me what you think.
@patrickvonplaten and @anton-l, could you have a look at the use case in the test (https://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R34-R50) and tell me if this is aligned with your needs? Thanks. |
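For readers without access to the linked diff, the use case being validated is roughly of the following shape; the feature name and signature are assumptions based on this thread, not necessarily the merged API:

```python
# Hedged reconstruction of the kind of use case exercised in the linked test:
# an Audio column whose value is decoded when the example is accessed/formatted.
from datasets import Dataset, Features
from datasets.features import Audio  # assumed import path at the time of this PR

features = Features({"audio": Audio(sampling_rate=16_000)})
ds = Dataset.from_dict({"audio": ["path/to/file.wav"]}, features=features)

sample = ds[0]["audio"]  # decoding happens here, in the formatting step
print(sample["array"].shape, sample["sampling_rate"])
```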
https://github.com/huggingface/datasets/pull/2324 | Create Audio feature | Hi @lhoestq, if you validate this approach, we could merge the Audio feature this (or early next) week. | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. | 18 | text: Create Audio feature
Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, since they require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contains: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
Hi @lhoestq, if you validate this approach, we could merge the Audio feature this (or early next) week. |
https://github.com/huggingface/datasets/pull/2315 | Datasets cli improvements | Additionally, I've deleted the points that are not very relevant for this repo (I guess the deleted points originate from the transformers repo). With this change, running `datasets-cli` is identical to copy-pasting the code from `bug_report.md`, but is more elegant because it doesn't require launching the REPL and copy-pasting the code. | This PR:
* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)
* removes the `download` command (copied from the transformers repo?)
* adds missing help messages to the cli commands
| 51 | text: Datasets cli improvements
This PR:
* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)
* removes the `download` command (copied from the transformers repo?)
* adds missing help messages to the cli commands
Additionally, I've deleted the points that are not very relevant for this repo (I guess the deleted points originate from the transformers repo). With this change, running `datasets-cli` is identical to copy-pasting the code from `bug_report.md`, but is more elegant because it doesn't require launching the REPL and copy-pasting the code. |
https://github.com/huggingface/datasets/pull/2314 | Minor refactor prepare_module | @lhoestq this is the PR that I mentioned to you, which can be considered as a first step in refactoring `prepare_module`. | Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings | 21 | text: Minor refactor prepare_module
Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings
@lhoestq this is the PR that I mentioned to you, which can be considered as a first step in refactoring `prepare_module`. |
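As an illustration of the extraction pattern described above, a simplified stand-in is shown below; the function name follows the PR description, but the body and URL template are assumptions, not the real code in `load.py`:

```python
# Simplified stand-in showing the shape of an extracted helper and the use of f-strings.
def _find_module_in_github_or_s3(name: str, revision: str = "master") -> str:
    """Build the remote location of a dataset script (illustrative sketch only)."""
    return f"https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{name}/{name}.py"
```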
https://github.com/huggingface/datasets/pull/2311 | Add SLR52, SLR53 and SLR54 to OpenSLR | Hi @lhoestq , I am not sure about the error message:
```
#!/bin/bash -eo pipefail
./scripts/datasets_metadata_validator.py
WARNING:root:❌ Failed to validate 'datasets/openslr/README.md':
__init__() got an unexpected keyword argument 'SLR32'
INFO:root:❌ Failed on 1 files.
Exited with code exit status 1
CircleCI received exit code 1
```
Could you have a look please? Thanks. | Add large speech datasets for Sinhala, Bengali and Nepali. | 52 | text: Add SLR52, SLR53 and SLR54 to OpenSLR
Add large speech datasets for Sinhala, Bengali and Nepali.
Hi @lhoestq , I am not sure about the error message:
```
#!/bin/bash -eo pipefail
./scripts/datasets_metadata_validator.py
WARNING:root:❌ Failed to validate 'datasets/openslr/README.md':
__init__() got an unexpected keyword argument 'SLR32'
INFO:root:❌ Failed on 1 files.
Exited with code exit status 1
CircleCI received exit code 1
```
Could you have a look please? Thanks. |
https://github.com/huggingface/datasets/pull/2311 | Add SLR52, SLR53 and SLR54 to OpenSLR | Hi ! The error is unrelated to your PR and has been fixed on master
Next time feel free to merge master into your branch to fix the CI error ;) | Add large speech datasets for Sinhala, Bengali and Nepali. | 31 | text: Add SLR52, SLR53 and SLR54 to OpenSLR
Add large speech datasets for Sinhala, Bengali and Nepali.
Hi ! The error is unrelated to your PR and has been fixed on master
Next time feel free to merge master into your branch to fix the CI error ;) |
https://github.com/huggingface/datasets/pull/2310 | Update README.md | Hi @cryoff, thanks for completing the dataset card.
Now there is an automatic validation tool to ensure that all dataset cards contain all the relevant information. This is the cause of the failing test on your Pull Request:
```
Found fields that are not non-empty list of strings: {'annotations_creators': [], 'language_creators': []}
``` | Provides description of data instances and dataset features | 53 | text: Update README.md
Provides description of data instances and dataset features
Hi @cryoff, thanks for completing the dataset card.
Now there is an automatic validation tool to ensure that all dataset cards contain all the relevant information. This is the cause of the failing test on your Pull Request:
```
Found fields that are not non-empty list of strings: {'annotations_creators': [], 'language_creators': []}
``` |
https://github.com/huggingface/datasets/pull/2302 | Add SubjQA dataset | I'm not sure why the windows test fails, but looking at the logs it looks like some caching issue on one of the metrics ... maybe re-run and 🤞 ? | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond. | 30 | text: Add SubjQA dataset
Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond.
I'm not sure why the windows test fails, but looking at the logs it looks like some caching issue on one of the metrics ... maybe re-run and 🤞 ? |
https://github.com/huggingface/datasets/pull/2302 | Add SubjQA dataset | Hi @lewtun, thanks for adding this dataset!
If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.
Here's a link to the [relevant section of the guide](https://github.com/huggingface/datasets/blob/master/templates/README_guide.md#dataset-creation), let me know if you have any questions! | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond. | 70 | text: Add SubjQA dataset
Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond.
Hi @lewtun, thanks for adding this dataset!
If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.
Here's a link to the [relevant section of the guide](https://github.com/huggingface/datasets/blob/master/templates/README_guide.md#dataset-creation), let me know if you have any questions! |
https://github.com/huggingface/datasets/pull/2302 | Add SubjQA dataset | > If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.
great idea @yjernite! i've added some extra information / moved things as you suggest and will wrap up the rest tomorrow :) | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond. | 68 | text: Add SubjQA dataset
Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond.
> If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.
great idea @yjernite! i've added some extra information / moved things as you suggest and will wrap up the rest tomorrow :) |
https://github.com/huggingface/datasets/pull/2302 | Add SubjQA dataset | hi @yjernite and @lhoestq, i've fleshed out the dataset card and think this is now ready for another round of review! | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond. | 21 | text: Add SubjQA dataset
Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond.
hi @yjernite and @lhoestq, i've fleshed out the dataset card and think this is now ready for another round of review! |
https://github.com/huggingface/datasets/pull/2295 | Create ExtractManager | Hi @lhoestq,
Once #2578 has been merged, I would like to ask you to have a look at this PR: it implements the same logic as the one in #2578, but for all the other file compression formats.
Thanks. | Perform refactoring to decouple extract functionality. | 40 | text: Create ExtractManager
Perform refactoring to decouple extract functionality.
Hi @lhoestq,
Once #2578 has been merged, I would like to ask you to have a look at this PR: it implements the same logic as the one in #2578, but for all the other file compression formats.
Thanks. |
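A very rough sketch of what "decoupling extract functionality" could look like; the class name comes from the PR title, but the interface below is an assumption:

```python
# Assumed interface sketch: a single entry point that dispatches on the compression format.
import gzip
import shutil

class ExtractManager:
    """Dispatch extraction by file extension (illustrative sketch only)."""

    def extract(self, input_path: str, output_path: str) -> str:
        if input_path.endswith(".gz"):
            with gzip.open(input_path, "rb") as src, open(output_path, "wb") as dst:
                shutil.copyfileobj(src, dst)
            return output_path
        raise NotImplementedError(f"No extractor implemented for {input_path!r} in this sketch")
```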
https://github.com/huggingface/datasets/pull/2290 | Bbaw egyptian | Hi @phiwi,
Thanks for contributing this nice dataset. If you have any blocking problem or question, do not hesitate to ask here. We are pleased to help you.
Could you please first synchronize with our master branch? From your branch `bbaw_egyptian`, type:
```
git fetch upstream master
git merge upstream/master
``` | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-) | 51 | text: Bbaw egyptian
This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it again now, so that it is in the state as used in my paper (seee documentation). I hope it satiesfies your requirements and wish every scientist out their loads of fun deciphering a 5.000 years old language :-)
Hi @phiwi,
Thanks for contributing this nice dataset. If you have any blocking problem or question, do not hesitate to ask here. We are pleased to help you.
Could you please first synchronize with our master branch? From your branch `bbaw_egyptian`, type:
```
git fetch upstream master
git merge upstream/master
``` |
https://github.com/huggingface/datasets/pull/2290 | Bbaw egyptian | Thanks ! Can you check that you have `black==21.4b0` and run `make style` again ? This should fix the "check_code_quality" CI issue | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-) | 22 | text: Bbaw egyptian
This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-)
Thanks ! Can you check that you have `black==21.4b0` and run `make style` again ? This should fix the "check_code_quality" CI issue |
https://github.com/huggingface/datasets/pull/2290 | Bbaw egyptian | Hi @phiwi, there are still some minor problems in relation with the tags you used in the dataset card (README.md).
Here you can find the output of the metadata validator:
```
WARNING:root:❌ Failed to validate 'datasets/bbaw_egyptian/README.md':
Could not validate the metada, found the following errors:
* field 'size_categories':
['100K<n<1000K'] are not registered tags for 'size_categories', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/size_categories.json
* field 'task_ids':
['machine translation'] are not registered tags for 'task_ids', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/tasks.json
* field 'languages':
['eg'] are not registered tags for 'languages', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/languages.json
``` | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-) | 86 | text: Bbaw egyptian
This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-)
Hi @phiwi, there are still some minor problems in relation with the tags you used in the dataset card (README.md).
Here you can find the output of the metadata validator:
```
WARNING:root:❌ Failed to validate 'datasets/bbaw_egyptian/README.md':
Could not validate the metada, found the following errors:
* field 'size_categories':
['100K<n<1000K'] are not registered tags for 'size_categories', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/size_categories.json
* field 'task_ids':
['machine translation'] are not registered tags for 'task_ids', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/tasks.json
* field 'languages':
['eg'] are not registered tags for 'languages', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/languages.json
``` |
https://github.com/huggingface/datasets/pull/2290 | Bbaw egyptian | Thanks, @phiwi. Now all tests should pass green.
However, I think there is still an issue with the language code:
- the code for the Ancient Egyptian is not `ar-EG`
- there is no ISO 639-1 code for the Ancient Egyptian
- there is an ISO 639-2 code: `egy`; but this code will not pass the validation test because it is not in the list of valid codes
I am not sure what to do in this case... Maybe @lhoestq has an idea? Maybe adding the code to the list? https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-) | 91 | text: Bbaw egyptian
This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-)
Thanks, @phiwi. Now all tests should pass green.
However, I think there is still an issue with the language code:
- the code for the Ancient Egyptian is not `ar-EG`
- there is no ISO 639-1 code for the Ancient Egyptian
- there is an ISO 639-2 code: `egy`; but this code will not pass the validation test because it is not in the list of valid codes
I am not sure what to do in this case... Maybe @lhoestq has an idea? Maybe adding the code to the list? https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json |
https://github.com/huggingface/datasets/pull/2290 | Bbaw egyptian | I have just checked that in the [list of valid codes](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json) there are already ISO 639-2 codes. Therefore, I would suggest you add it to the list:
```
"egy": "Egyptian (Ancient)",
```
and change it in the dataset card. | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-) | 40 | text: Bbaw egyptian
This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :-)
I have just checked that in the [list of valid codes](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json) there are already ISO 639-2 codes. Therefore, I would suggest you add it to the list:
```
"egy": "Egyptian (Ancient)",
```
and change it in the dataset card. |
https://github.com/huggingface/datasets/pull/2289 | Allow collaborators to self-assign issues | What do you think, @lhoestq? 😉
I think this could be another step to facilitate community contributions. | Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`. | 17 | text: Allow collaborators to self-assign issues
Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`.
What do you think, @lhoestq? 😉
I think this could be another step to facilitate community contributions. |
https://github.com/huggingface/datasets/pull/2289 | Allow collaborators to self-assign issues | @lhoestq, it doesn't exist in `transformers`... I picked the idea from `scikit-learn`, where I have previously collaborated...
And sure, this must be documented! I just wanted first to know your opinion... | Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`. | 31 | text: Allow collaborators to self-assign issues
Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`.
@lhoestq, it doesn't exist in `transformers`... I picked the idea from `scikit-learn`, where I have previously collaborated...
And sure, this must be documented! I just wanted first to know your opinion... |
https://github.com/huggingface/datasets/pull/2287 | Avoid copying table's record batches | Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests
I'm closing this one in favor of #2291 if you don't mind.
Thanks again ! | Fixes #2276 | 33 | text: Avoid copying table's record batches
Fixes #2276
Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests
I'm closing this one in favor of #2291 if you don't mind.
Thanks again ! |
https://github.com/huggingface/datasets/pull/2280 | Fixed typo seperate->separate | Hi ! Thanks for the fix :)
The CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.
We'll wait for #2286 to be merged to master first if you don't mind | 38 | text: Fixed typo seperate->separate
Hi ! Thanks for the fix :)
The CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.
We'll wait for #2286 to be merged to master first if you don't mind |
|
https://github.com/huggingface/datasets/pull/2280 | Fixed typo seperate->separate | The PR has been merged ! Feel free to merge master into your branch to fix the CI | 18 | text: Fixed typo seperate->separate
The PR has been merged ! Feel free to merge master into your branch to fix the CI |
|
https://github.com/huggingface/datasets/pull/2270 | Fix iterable interface expected by numpy | It's been fixed in this commit: https://github.com/huggingface/datasets/commit/549110e08238b3716a5904667095fb003acda54e
Basically #2246 broke querying an index with a simple iterable.
With the fix, it's again possible to use iterables and we can keep RandIter as it is.
Closing since the fix is already on master | Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`. | 42 | text: Fix iterable interface expected by numpy
Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`.
It's been fixed in this commit: https://github.com/huggingface/datasets/commit/549110e08238b3716a5904667095fb003acda54e
Basically #2246 broke querying an index with a simple iterable.
With the fix, it's again possible to use iterables and we can keep RandIter as it is.
Closing since the fix is already on master |
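For readers unfamiliar with the "old iterable interface" mentioned in the PR description, here is a minimal sketch. `RandIterLike` is a made-up name echoing the `RandIter` class discussed above, not the library's actual class.

```python
import numpy as np

class RandIterLike:
    """Legacy iteration protocol: no __iter__, only __getitem__/__len__."""

    def __init__(self, values):
        self._values = list(values)

    def __getitem__(self, index):
        return self._values[index]  # raises IndexError past the end

    def __len__(self):
        return len(self._values)

seq = RandIterLike([3, 1, 2])
print(list(iter(seq)))   # iteration works without __iter__: [3, 1, 2]
print(np.asarray(seq))   # numpy consumes it as a sequence: [3 1 2]
```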
https://github.com/huggingface/datasets/pull/2268 | Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers | @lhoestq the ticket you mentioned is now resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` does not install on AArch64 systems. | This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.
Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue | 27 | text: Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers
This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.
Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue
@lhoestq the ticket you mentioned is now resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` does not install on AArch64 systems. |
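To make the pinned bug concrete, the sketch below builds the kind of array the failing test exercises. It is illustrative only: on a fixed pyarrow version it simply prints the casted values, while a similar cast on a sliced ListArray crashed under pyarrow 4.0.0.

```python
import pyarrow as pa

# A ListArray of integers, sliced, then cast to a wider integer type.
arr = pa.array([[1, 2], [3, 4], [5]], type=pa.list_(pa.int32()))
sliced = arr.slice(1)                        # sliced ListArray
casted = sliced.cast(pa.list_(pa.int64()))   # the kind of cast that segfaulted on 4.0.0
print(casted)
```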
https://github.com/huggingface/datasets/pull/2266 | Make tests run faster | Sorry I didn't know you were also working on it ^^'
And yes I 100% agree with you on the points you mentioned. We should definitely improve the coverage. It would be nice to have a clearer separation to know which tests in the suite are unit tests and which ones are integration tests
| From 7min to 2min to run pytest.
Ideally we should keep the whole CI run time below 10min.
In this PR I removed the remote tests that were never used.
I also replaced nested parametrized tests with unit tests.
This makes me think that we could still add more high level tests to check for a few combinations of parameters (but not all of them since there are too many of them).
Let me know what you think
Finally in another PR we can also separate into two circleci jobs:
- the tests of the code of the lib
- the tests of all the dataset/metric scripts. | 54 | text: Make tests run faster
From 7min to 2min to run pytest.
Ideally we should keep the whole CI run time below 10min.
In this PR I removed the remote tests that were never used.
I also replaced nested parametrized tests with unit tests.
This makes me think that we could still add more high level tests to check for a few combinations of parameters (but not all of them since there are too many of them).
Let me know what you think
Finally in another PR we can also separate into two circleci jobs:
- the tests of the code of the lib
- the tests of all the dataset/metric scripts.
Sorry I didn't know you were also working on it ^^'
And yes I 100% agree with you on the points you mentioned. We should definitely improve the coverage. It would be nice to have a clearer separation to know which tests in the suite are unit tests and which ones are integration tests
|
https://github.com/huggingface/datasets/pull/2266 | Make tests run faster | Never mind: we both noticed tests can be improved. More PRs to come... 😉
According to the literature, unit tests are those that test a behavior unit, isolated from the other components and must be very fast: for me, this last requirement implies that they must be performed completely _in memory_.
In contrast, integration tests are those which also test interactions with _external_ components, like web services, databases, the file system, etc.
The problem I see is that our code is still too coupled and it is difficult to isolate components for testing. Therefore, I would suggest acting iteratively, by refactoring to decouple components and then implement unit tests for each component in isolation. | From 7min to 2min to run pytest.
Ideally we should keep the whole CI run time below 10min.
In this PR I removed the remote tests that were never used.
I also replaced nested parametrized tests with unit tests.
This makes me think that we could still add more high level tests to check for a few combinations of parameters (but not all of them since there are too many of them).
Let me know what you think
Finally in another PR we can also separate into two circleci jobs:
- the tests of the code of the lib
- the tests of all the dataset/metric scripts. | 113 | text: Make tests run faster
From 7min to 2min to run pytest.
Ideally we should keep the whole CI run time below 10min.
In this PR I removed the remote tests that were never used.
I also replaced nested parametrized tests with unit tests.
This makes me think that we could still add more high level tests to check for a few combinations of parameters (but not all of them since there are too many of them).
Let me know what you think
Finally in another PR we can also separate into two circleci jobs:
- the tests of the code of the lib
- the tests of all the dataset/metric scripts.
Never mind: we both noticed tests can be improved. More PRs to come... 😉
According to the literature, unit tests are those that test a behavior unit, isolated from the other components and must be very fast: for me, this last requirement implies that they must be performed completely _in memory_.
In contrast, integration tests are those which also test interactions with _external_ components, like web services, databases, the file system, etc.
The problem I see is that our code is still too coupled and it is difficult to isolate components for testing. Therefore, I would suggest acting iteratively, by refactoring to decouple components and then implement unit tests for each component in isolation. |
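One possible way to make the unit/integration separation discussed above explicit is a pytest marker per category, filtered per CI job. This is a sketch of the idea, not the repository's actual test configuration; the marker name is an assumption.

```python
import pytest

# The custom marker should be registered (e.g. under `markers =` in setup.cfg)
# so pytest does not warn about it.

@pytest.mark.integration
def test_load_dataset_from_remote_host():
    ...  # touches the network / external services

def test_features_equality():
    ...  # pure in-memory unit test, runs in the fast job

# Fast CI job:        pytest -m "not integration"
# Integration CI job: pytest -m integration
```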
https://github.com/huggingface/datasets/pull/2264 | Fix memory issue in multiprocessing: Don't pickle table index | The memory issue didn't come from `self.__dict__.copy()` but from the fact that this dict contains `_batches` which has all the batches of the table in it.
Therefore for a MemoryMappedTable all the data in `_batches` were copied in memory when pickling and this is the issue. | The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
Fix issue #2256
We'll do a patch release asap ! | 46 | text: Fix memory issue in multiprocessing: Don't pickle table index
The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
Fix issue #2256
We'll do a patch release asap !
The memory issue didn't come from `self.__dict__.copy()` but from the fact that this dict contains `_batches` which has all the batches of the table in it.
Therefore for a MemoryMappedTable all the data in `_batches` were copied in memory when pickling and this is the issue. |
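The fix described above follows a standard pickling pattern: drop the heavy attribute from the pickled state and rebuild it after unpickling. The class below is a hedged sketch of that pattern, not the library's actual table code; names are illustrative.

```python
import pickle

class TableWithIndex:
    def __init__(self, batches):
        self.path = "data.arrow"
        self._batches = batches  # potentially huge in-memory record batches

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("_batches", None)  # don't ship the batches to child processes
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._batches = None  # rebuilt on demand, e.g. from the memory-mapped file

table = TableWithIndex(batches=[b"..."])
clone = pickle.loads(pickle.dumps(table))
print(clone.path, clone._batches)  # data.arrow None
```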
https://github.com/huggingface/datasets/pull/2264 | Fix memory issue in multiprocessing: Don't pickle table index | I'm still investigating why we didn't catch this issue in the tests.
This test should have caught it but didn't:
https://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/tests/test_table.py#L350-L353 | The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
Fix issue #2256
We'll do a patch release asap ! | 21 | text: Fix memory issue in multiprocessing: Don't pickle table index
The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
Fix issue #2256
We'll do a patch release asap !
I'm still investigating why we didn't catch this issue in the tests.
This test should have caught it but didn't:
https://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/tests/test_table.py#L350-L353 |
https://github.com/huggingface/datasets/pull/2264 | Fix memory issue in multiprocessing: Don't pickle table index | I'll focus on the patch release and fix the test in another PR after the release | The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
Fix issue #2256
We'll do a patch release asap ! | 16 | text: Fix memory issue in multiprocessing: Don't pickle table index
The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
Fix issue #2256
We'll do a patch release asap !
I'll focus on the patch release and fix the test in another PR after the release |
https://github.com/huggingface/datasets/pull/2260 | GooAQ dataset added | Thanks for adding this one !
The download manager does support downloading files on git lfs via their github url. No need for a manual download option ;) | @lhoestq here the dataset is stored with Git LFS. Should I add option for manual downloading of dataset using `git lfs pull` post repo cloning or can we accommodate this in the current `download_and_extract`? | 28 | text: GooAQ dataset added
@lhoestq here the dataset is stored with Git LFS. Should I add option for manual downloading of dataset using `git lfs pull` post repo cloning or can we accommodate this in the current `download_and_extract`?
Thanks for adding this one !
The download manager does support downloading files on git lfs via their github url. No need for a manual download option ;) |
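Below is an abbreviated sketch of what the comment above implies for the loading script: the file can be fetched with the regular download manager, no manual step needed. The URL and class body are placeholders, and `_info()` / `_generate_examples()` are omitted.

```python
import datasets

_URL = "https://github.com/some-org/gooaq/raw/main/data/gooaq.jsonl"  # hypothetical location

class Gooaq(datasets.GeneratorBasedBuilder):
    # _info() and _generate_examples() omitted in this sketch

    def _split_generators(self, dl_manager):
        # The download manager follows redirects, so a github .../raw/... URL
        # pointing at a Git LFS file resolves to the actual content.
        data_path = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": data_path},
            )
        ]
```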
https://github.com/huggingface/datasets/pull/2259 | Add support for Split.ALL | Honestly, I think we should fix some other issues in the Split API before this change. E.g. currently the following will not work, even though it should:
```python
import datasets
datasets.load_dataset("sst", split=datasets.Split.TRAIN+datasets.Split.TEST) # AssertionError
```
EDIT:
Actually, I think it's OK to merge this PR because the fix will not touch this PR's code. | The title says it all. | 53 | text: Add support for Split.ALL
The title says it all.
Honestly, I think we should fix some other issues in the Split API before this change. E.g. currently the following will not work, even though it should:
```python
import datasets
datasets.load_dataset("sst", split=datasets.Split.TRAIN+datasets.Split.TEST) # AssertionError
```
EDIT:
Actually, I think it's OK to merge this PR because the fix will not touch this PR's code. |
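As a stop-gap while the `Split` arithmetic above is broken, the string form of the split argument already supports concatenation. A minimal sketch, reusing the dataset name from the snippet above:

```python
import datasets

# "train+test" concatenates the two splits into a single Dataset.
train_plus_test = datasets.load_dataset("sst", split="train+test")
print(train_plus_test)
```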
https://github.com/huggingface/datasets/pull/2258 | Fix incorrect update_metadata_with_features calls in ArrowDataset | @lhoestq Maybe a test that runs the functions that call `update_metadata_with_features` and checks if metadata was updated would be nice to prevent this from happening in the future. | Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151) | 28 | text: Fix incorrect update_metadata_with_features calls in ArrowDataset
Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151)
@lhoestq Maybe a test that runs the functions that call `update_metadata_with_features` and checks if metadata was updated would be nice to prevent this from happening in the future. |
https://github.com/huggingface/datasets/pull/2257 | added metrics for CUAD | > For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here
@bhavitvyamalik I guess the mentioned metrics are enough but it would be better if exact match is also added since the standard SQUAD dataset also has it. | For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here | 61 | text: added metrics for CUAD
For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here
> For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here
@bhavitvyamalik I guess the mentioned metrics are enough but it would be better if exact match is also added since the standard SQUAD dataset also has it. |
https://github.com/huggingface/datasets/pull/2257 | added metrics for CUAD | I would like to quote it from the website that I am following to learn these things.
Exact Match:
This metric is as simple as it sounds. For each question+answer pair, if the characters of the model's prediction exactly match the characters of *(one of) the True Answer(s)*, EM = 1, otherwise EM = 0. This is a strict all-or-nothing metric; being off by a single character results in a score of 0. When assessing against a negative example, if the model predicts any text at all, it automatically receives a 0 for that example.
So, I guess you need to ensure at least 1 predicted answer matches for EM to be 1.
Source: https://qa.fastforwardlabs.com/no%20answer/null%20threshold/bert/distilbert/exact%20match/f1/robust%20predictions/2020/06/09/Evaluating_BERT_on_SQuAD.html
You can go to their homepage and read the other links. They have detailed explanations on evaluation metrics. You can also have a look at the squad_v2 metric file for further clarification.
Regards,
Mohammed Rakib
On Sun, 25 Apr 2021 at 15:20, Bhavitvya Malik wrote:
> I'm a little confused when it comes to 2 ground truths which can be a possible answer. Like here for eg.
> predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
> references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
> Should I ensure at least 1 predicted answer matches or both predicted answers should match (like in this case) for EM to be 1?
>
| For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here | 291 | text: added metrics for CUAD
For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here
I would like to quote it from the website that I am following to learn these things.
Exact Match:
This metric is as simple as it sounds. For each question+answer pair, if the characters of the model's prediction exactly match the characters of *(one of) the True Answer(s)*, EM = 1, otherwise EM = 0. This is a strict all-or-nothing metric; being off by a single character results in a score of 0. When assessing against a negative example, if the model predicts any text at all, it automatically receives a 0 for that example.
So, I guess you need to ensure at least 1 predicted answer matches for EM to be 1.
Source: https://qa.fastforwardlabs.com/no%20answer/null%20threshold/bert/distilbert/exact%20match/f1/robust%20predictions/2020/06/09/Evaluating_BERT_on_SQuAD.html
You can go to their homepage and read the other links. They have detailed explanations on evaluation metrics. You can also have a look at the squad_v2 metric file for further clarification.
Regards,
Mohammed Rakib
On Sun, 25 Apr 2021 at 15:20, Bhavitvya Malik wrote:
> I'm a little confused when it comes to 2 ground truths which can be a possible answer. Like here for eg.
> predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
> references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
> Should I ensure at least 1 predicted answer matches or both predicted answers should match (like in this case) for EM to be 1?
>
|
https://github.com/huggingface/datasets/pull/2257 | added metrics for CUAD | Updated the same @MohammedRakib! Even if a single answer matches I'm returning 1 in that case for EM (not traversing all predictions once we have one `exact_match` from prediction) | For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here | 29 | text: added metrics for CUAD
For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here
Updated the same @MohammedRakib! Even if a single answer matches I'm returning 1 in that case for EM (not traversing all predictions once we have one `exact_match` from prediction) |
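For completeness, here is a simplified sketch of how exact match over multiple gold answers is usually computed (SQuAD-style normalization, EM = 1 if at least one gold answer matches). It is illustrative only and not necessarily the exact code added in this PR.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers) -> int:
    """EM is 1 if the prediction matches at least one gold answer."""
    return int(any(normalize(prediction) == normalize(answer) for answer in gold_answers))

print(exact_match("The Seller:", ["The seller:", "The buyer/End-User: ..."]))  # 1
```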
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | Looks really nice so far, thanks !
Maybe if a dataset doesn't have a template for a specific task we could try the default template of this task ? | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 29 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
Looks really nice so far, thanks !
Maybe if a dataset doesn't have a template for a specific task we could try the default template of this task ? |
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | hey @SBrandeis @lhoestq,
i now have a better idea about what you guys are trying to achieve with the task templates and have a few follow-up questions:
1. how did you envision using `DatasetInfo` for running evaluation? my understanding is that all `dataset_infos.json` files are stored in the `datasets` repo (unlike `transformers` where each model's weights etc are stored in a dedicated repo).
this suggests the following workflow:
```
- git clone datasets
- load target dataset to evaluate
- load `dataset_infos.json` for target dataset
- run eval for each task template in `task_templates`
- store metrics as evaluation cards (similar to what is done in `autonlp`)
```
2. assuming the above workflow, i see that the current `TaskTemplate` attributes of `task`, `input_schema`, and `label_schema` still require some wrangling from `dataset_infos.json` to reproduce additional mappings like `label2id` that we'd need for e.g. text classification. an alternative would be to instantiate the task template class directly from the JSON with something like
```python
from datasets.tasks import TextClassification
from transformers import AutoModelForSequenceClassification, AutoConfig
tc = TextClassification.from_json("path/to/dataset_infos.json")
# load a model with the desired config
model_ckpt = ...
config = AutoConfig.from_pretrained(model_ckpt, label2id=tc.label2id, id2label=tc.id2label)
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt, config=config)
# run eval ...
```
perhaps this is what @SBrandeis had in mind with the `TaskTemplate.from_dict` method?
3. i personally prefer using `task_templates` over `supervised_keys` because it encourages the contributor to think in terms of 1 or more tasks. my question here is do we currently use `supervised_keys` for anything important in the `datasets` library? | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 249 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
hey @SBrandeis @lhoestq,
i now have a better idea about what you guys are trying to achieve with the task templates and have a few follow-up questions:
1. how did you envision using `DatasetInfo` for running evaluation? my understanding is that all `dataset_infos.json` files are stored in the `datasets` repo (unlike `transformers` where each model's weights etc are stored in a dedicated repo).
this suggests the following workflow:
```
- git clone datasets
- load target dataset to evaluate
- load `dataset_infos.json` for target dataset
- run eval for each task template in `task_templates`
- store metrics as evaluation cards (similar to what is done in `autonlp`)
```
2. assuming the above workflow, i see that the current `TaskTemplate` attributes of `task`, `input_schema`, and `label_schema` still require some wrangling from `dataset_infos.json` to reproduce additional mappings like `label2id` that we'd need for e.g. text classification. an alternative would be to instantiate the task template class directly from the JSON with something like
```python
from datasets.tasks import TextClassification
from transformers import AutoModelForSequenceClassification, AutoConfig
tc = TextClassification.from_json("path/to/dataset_infos.json")
# load a model with the desired config
model_ckpt = ...
config = AutoConfig.from_pretrained(model_ckpt, label2id=tc.label2id, id2label=tc.id2label)
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt, config=config)
# run eval ...
```
perhaps this is what @SBrandeis had in mind with the `TaskTemplate.from_dict` method?
3. i personally prefer using `task_templates` over `supervised_keys` because it encourages the contributor to think in terms of 1 or more tasks. my question here is do we currently use `supervised_keys` for anything important in the `datasets` library? |
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | 1. How do you envision using DatasetInfo for running evaluation?
The initial idea was to be able to do something like this:
```python
from datasets import load_dataset
dset = load_dataset("name", task="binary_classification")
# OR
dset = load_dataset("name")
dset = dset.prepare_for_task("binary_classification")
```
2. I don't think that's needed if we proceed as mentioned above
3. `supervised_keys` are mostly a legacy compatibility thing with TF datasets, not sure it's used for anything right now. I'll let @lhoestq give more details on that
[Edit 1] Typo | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 82 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
1. How do you envision using DatasetInfo for running evaluation?
The initial idea was to be able to do something like this:
```python
from datasets import load_dataset
dset = load_dataset("name", task="binary_classification")
# OR
dset = load_dataset("name")
dset = dset.prepare_for_task("binary_classification")
```
2. I don't think that's needed if we proceed as mentioned above
3. `supervised_keys` are mostly a legacy compatibility thing with TF datasets, not sure it's used for anything right now. I'll let @lhoestq give more details on that
[Edit 1] Typo |
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | > The initial idea was to be able to do something like this:
>
> ```python
> from datasets import load_dataset
> dset = load_dataset("name", task="binary_classification")
> # OR
> dset = load_dataset("name")
> dset = dset.prepare_for_task("binary_classification")
> ```
ah that's very elegant! just so i've completely understood, the result would be that the relevant column names of `dset` would be mapped to e.g. `text` and `label` and thus we'd have a uniform schema for the evaluation of all `binary_classification` tasks? | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 81 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
> The initial idea was to be able to do something like this:
>
> ```python
> from datasets import load_dataset
> dset = load_dataset("name", task="binary_classification")
> # OR
> dset = load_dataset("name")
> dset = dset.prepare_for_task("binary_classification")
> ```
ah that's very elegant! just so i've completely understood, the result would be that the relevant column names of `dset` would be mapped to e.g. `text` and `label` and thus we'd have a uniform schema for the evaluation of all `binary_classification` tasks? |
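That mapping can already be done by hand with the existing API; a hedged sketch is below. The column names mirror the emotion example used elsewhere in this thread and are illustrative, which is why the sketch builds a tiny in-memory dataset instead of loading the real one.

```python
from datasets import Dataset

ds = Dataset.from_dict({"tweet": ["i feel great"], "emotion": [1]})
ds = ds.rename_column("tweet", "text")
ds = ds.rename_column("emotion", "label")
print(ds.column_names)  # ['text', 'label']
```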
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | That's correct! Also, the features need to be appropriately cast
For a classification task for example, we would need to cast the datasets features to something like this:
```python
datasets.Features({
"text": datasets.Value("string"),
"label": datasets.ClassLabel(names=[...]),
})
```
| This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 36 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
That's correct! Also, the features need to be appropriately cast
For a classification task for example, we would need to cast the datasets features to something like this:
```python
datasets.Features({
"text": datasets.Value("string"),
"label": datasets.ClassLabel(names=[...]),
})
```
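A small runnable sketch of such a feature schema in action is shown below; the label names and example rows are made up for illustration.

```python
import datasets

features = datasets.Features(
    {
        "text": datasets.Value("string"),
        "label": datasets.ClassLabel(names=["neg", "pos"]),
    }
)
ds = datasets.Dataset.from_dict(
    {"text": ["great movie", "boring"], "label": [1, 0]},
    features=features,
)
print(ds.features["label"].int2str(ds[0]["label"]))  # pos
```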
|
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | 3. We can ignore `supervised_keys` (it came from TFDS and we're not using it) and use `task_templates` | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 17 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
3. We can ignore `supervised_keys` (it came from TFDS and we're not using it) and use `task_templates` |
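To illustrate what replacing `supervised_keys` with `task_templates` could look like in a dataset script, here is a hedged sketch of a `DatasetInfo`. The column and label names are illustrative, and the `TextClassification` arguments follow the column-remapping idea discussed in this thread, so the exact signature may differ from the released API.

```python
import datasets
from datasets.tasks import TextClassification

info = datasets.DatasetInfo(
    features=datasets.Features(
        {
            "tweet": datasets.Value("string"),
            "emotion": datasets.ClassLabel(names=["sadness", "joy"]),
        }
    ),
    task_templates=[
        TextClassification(text_column="tweet", label_column="emotion")
    ],
)
print(info.task_templates)
```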
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | great, thanks a lot for your answers! now it's much clearer what i need to do next 😃 | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 18 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
great, thanks a lot for your answers! now it's much clearer what i need to do next 😃 |
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | hey @lhoestq @SBrandeis,
i've made some small tweaks to @SBrandeis's code so that `Dataset.prepare_for_task` is called in `DatasetBuilder`. using the `emotion` dataset as a test case, the following now works:
```python
# DatasetDict with default columns
ds = load_dataset("./datasets/emotion/")
# DatasetDict({
# train: Dataset({
# features: ['tweet', 'emotion'],
# num_rows: 16000
# })
# validation: Dataset({
# features: ['tweet', 'emotion'],
# num_rows: 2000
# })
# test: Dataset({
# features: ['tweet', 'emotion'],
# num_rows: 2000
# })
# })
# DatasetDict with remapped columns
ds = load_dataset("./datasets/emotion/", task="text_classification")
# DatasetDict({
# train: Dataset({
# features: ['text', 'label'],
# num_rows: 16000
# })
# validation: Dataset({
# features: ['text', 'label'],
# num_rows: 2000
# })
# test: Dataset({
# features: ['text', 'label'],
# num_rows: 2000
# })
# })
# Dataset with default columns
ds = load_dataset("./datasets/emotion/", split="train")
# Map/cast features
ds = ds.prepare_for_task("text_classification")
# Dataset({
# features: ['text', 'label'],
# num_rows: 16000
# })
```
i have a few follow-up questions / remarks:
1. i'm working under the assumption that contributors / users only provide a unique set of task types. in particular, the current implementation does not support something like:
```python
task_templates=[TextClassification(labels=class_names, text_column="tweet", label_column="emotion"), TextClassification(labels=class_names, text_column="some_other_column", label_column="some_other_column")]
```
since we use `TaskTemplate.task` and the filter for compatible templates in `Dataset.prepare_for_task`. should we support these scenarios? my hunch is that this is rare in practice, but please correct me if i'm wrong.
2. when we eventually run evaluation for `transformers` models, i expect we'll be using the `Trainer` for which we can pass the standard label names to `TrainingArguments.label_names`. if that's the case, it might be prudent to heed the warning from the [docs](https://huggingface.co/transformers/main_classes/trainer.html?highlight=trainer#trainer) and use `labels` instead of `label` in the schema:
> your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named "label".
3. i plan to forge ahead on the rest of the pipeline taxonomy. please let me know if you'd prefer smaller, self-contained pull requests (e.g. one per task) | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
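For illustration, here is a rough sketch of what pre-defining a template in a dataset script could look like; the `datasets.tasks` import path and the exact feature names are assumptions based on this PR, not a finalized API:
```python
import datasets
from datasets.tasks import TextClassification  # assumed import path

class Emotion(datasets.GeneratorBasedBuilder):
    def _info(self):
        class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "tweet": datasets.Value("string"),
                    "emotion": datasets.ClassLabel(names=class_names),
                }
            ),
            # The template records how the raw columns map onto the task schema
            task_templates=[
                TextClassification(
                    labels=class_names, text_column="tweet", label_column="emotion"
                )
            ],
        )
```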
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 339 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
hey @lhoestq @SBrandeis,
i've made some small tweaks to @SBrandeis's code so that `Dataset.prepare_for_task` is called in `DatasetBuilder`. using the `emotion` dataset as a test case, the following now works:
```python
# DatasetDict with default columns
ds = load_dataset("./datasets/emotion/")
# DatasetDict({
# train: Dataset({
# features: ['tweet', 'emotion'],
# num_rows: 16000
# })
# validation: Dataset({
# features: ['tweet', 'emotion'],
# num_rows: 2000
# })
# test: Dataset({
# features: ['tweet', 'emotion'],
# num_rows: 2000
# })
# })
# DatasetDict with remapped columns
ds = load_dataset("./datasets/emotion/", task="text_classification")
# DatasetDict({
# train: Dataset({
# features: ['text', 'label'],
# num_rows: 16000
# })
# validation: Dataset({
# features: ['text', 'label'],
# num_rows: 2000
# })
# test: Dataset({
# features: ['text', 'label'],
# num_rows: 2000
# })
# })
# Dataset with default columns
ds = load_dataset("./datasets/emotion/", split="train")
# Map/cast features
ds = ds.prepare_for_task("text_classification")
# Dataset({
# features: ['text', 'label'],
# num_rows: 16000
# })
```
i have a few follow-up questions / remarks:
1. i'm working under the assumption that contributors / users only provide a unique set of task types. in particular, the current implementation does not support something like:
```python
task_templates=[TextClassification(labels=class_names, text_column="tweet", label_column="emotion"), TextClassification(labels=class_names, text_column="some_other_column", label_column="some_other_column")]
```
since we use `TaskTemplate.task` as the filter for compatible templates in `Dataset.prepare_for_task`. should we support these scenarios? my hunch is that this is rare in practice, but please correct me if i'm wrong.
2. when we eventually run evaluation for `transformers` models, i expect we'll be using the `Trainer`, for which we can pass the standard label names to `TrainingArguments.label_names` (see the sketch after this list). if that's the case, it might be prudent to heed the warning from the [docs](https://huggingface.co/transformers/main_classes/trainer.html?highlight=trainer#trainer) and use `labels` instead of `label` in the schema:
> your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named "label".
3. i plan to forge ahead on the rest of the pipeline taxonomy. please let me know if you'd prefer smaller, self-contained pull requests (e.g. one per task) |
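Regarding point 2 above, a minimal sketch of how a non-default label column would be declared on the `Trainer` side (the output directory is a placeholder):
```python
from transformers import TrainingArguments

# Tell the Trainer explicitly which column(s) hold the labels,
# e.g. if the task schema ends up using "labels" rather than "label".
training_args = TrainingArguments(output_dir="output", label_names=["labels"])
# trainer = Trainer(model=model, args=training_args, train_dataset=ds["train"], ...)
```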
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | hey @lhoestq @SBrandeis, i think this is ready for another review 😃
in addition to a few comments / questions i've left in the pr, here's a few remarks:
1. after some experimentation, i decided against allowing the user to specify nested column names for question-answering. i couldn't find a simple solution with the current api and suspect that i'd have to touch many areas of `datasets` to "unflatten" columns in a generic fashion.
2. in the current implementation, the user can specify the outer column name for question-answering, but is expected to follow the inner schema, e.g. `answers.text` and `answers.answer_start`. we can decide later how much flexibility we want to give users
3. i added a few unit tests
4. as discussed, let's keep this pr focused on text classification / question answering and i'll add the other tasks in separate prs
5. i renamed the tasks e.g. `text_classification` -> `text-classification` for consistency with the `Trainer` model cards [here](https://github.com/huggingface/transformers/pull/11599#pullrequestreview-656371007). | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 161 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
hey @lhoestq @SBrandeis, i think this is ready for another review 😃
in addition to a few comments / questions i've left in the pr, here's a few remarks:
1. after some experimentation, i decided against allowing the user to specify nested column names for question-answering. i couldn't find a simple solution with the current api and suspect that i'd have to touch many areas of `datasets` to "unflatten" columns in a generic fashion.
2. in the current implementation, the user can specify the outer column name for question-answering, but is expected to follow the inner schema, e.g. `answers.text` and `answers.answer_start` (a sketch of this schema follows this list). we can decide later how much flexibility we want to give users
3. i added a few unit tests
4. as discussed, let's keep this pr focused on text classification / question answering and i'll add the other tasks in separate prs
5. i renamed the tasks e.g. `text_classification` -> `text-classification` for consistency with the `Trainer` model cards [here](https://github.com/huggingface/transformers/pull/11599#pullrequestreview-656371007). |
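For reference, a sketch of the SQuAD-style inner schema that point 2 above refers to; the exact value types are an assumption based on the usual SQuAD features, and only the outer column name (`answers` here) is meant to be configurable:
```python
from datasets import Features, Sequence, Value

qa_features = Features(
    {
        "question": Value("string"),
        "context": Value("string"),
        # The nested fields must follow the expected inner schema
        "answers": Sequence(
            {"text": Value("string"), "answer_start": Value("int32")}
        ),
    }
)
```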
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | > i'm not sure why the benchmarks are getting cancelled - is this expected?
Hmm I don't know. It's certainly unrelated to this PR though. Maybe GitHub has some issues. | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 30 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
> i'm not sure why the benchmarks are getting cancelled - is this expected?
Hmm I don't know. It's certainly unrelated to this PR though. Maybe GitHub has some issues.
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | hey @lhoestq and @SBrandeis, i've:
* extended the `prepare_for_task` API along the lines that @lhoestq suggested. i wasn't entirely sure what the `datasets` convention is for docstrings with mixed types, so please see if my proposal makes sense
* added a few new tests to check that we trigger the value errors on incorrect input
i think this is ready for another review :) | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 64 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
hey @lhoestq and @SBrandeis, i've:
* extended the `prepare_for_task` API along the lines that @lhoestq suggested. i wasn't entirely sure what the `datasets` convention is for docstrings with mixed types, so please see if my proposal makes sense
* added a few new tests to check that we trigger the value errors on incorrect input
i think this is ready for another review :) |
https://github.com/huggingface/datasets/pull/2255 | Task casting for text classification & question answering | > Looks all good thank you :)
>
> Can you also add `prepare_for_task` in the `main_classes.rst` file of the documentation ?
Done! I also remembered that I needed to do the same for `DatasetDict`, so included this as well :) | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | 41 | text: Task casting for text classification & question answering
This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
```
> Looks all good thank you :)
>
> Can you also add `prepare_for_task` in the `main_classes.rst` file of the documentation ?
Done! I also remembered that I needed to do the same for `DatasetDict`, so included this as well :) |
https://github.com/huggingface/datasets/pull/2253 | Perform minor refactoring: use config | @lhoestq is there a problem in the master branch? I got a segmentation fault...
```
tests/test_table.py::test_concatenation_table_cast[in_memory] Fatal Python error: Segmentation fault
``` | Perform minor refactoring related to `config`. | 22 | text: Perform minor refactoring: use config
Perform minor refactoring related to `config`.
@lhoestq is there a problem in the master branch? I got a segmentation fault...
```
tests/test_table.py::test_concatenation_table_cast[in_memory] Fatal Python error: Segmentation fault
``` |
https://github.com/huggingface/datasets/pull/2249 | Allow downloading/processing/caching only specific splits | > If you pass a dictionary like this:
>
> ```
> {"main_metadata": url_to_main_data,
> "secondary_metadata": url_to_sec_data,
> "train": url_train_data,
> "test": url_test_data}
> ```
>
> then only the train or test keys will be kept, which I feel not intuitive.
>
> For example if the users asks to load the "train" split, then the main and secondary metadata won't be downloaded.
> You can fix that by keeping all the keys except the splits to ignore
Hi @lhoestq, I have been thinking about this and I think it is worth discussing.
When I created this PR, my first idea was to create a "hack" inside the download manager that would be able to filter some split(s) without touching any dataset script. Of course, the download manager does not know about the splits logic, and thus this trick would only work for some very specific datasets: only the ones that pass a dict to the download manager containing only the keys "train", "validation", "test" (or the ones passed by advanced users who know they can do it), e.g. the `natural_questions` dataset (which was one of the targets).
The big inconvenience of this approach is that it is not applicable to many datasets (or worse, it would have to be constantly tweaked to cope with exceptional cases). One exceptional case is the one you pointed out. But I see others:
- the split keys can be different: train, test, dev, val, validation, eval,...
- in `hope_edi` dataset, the split keys are: TRAIN_DOWNLOAD_URL, VALIDATION_DOWNLOAD_URL
- in `few_rel` dataset, the split keys are: train_wiki, val_nyt, val_pubmed,..., pid2name
- in `curiosity_dialogs`, the split keys are: train, val, test, test_zero; this means that for every split we pass, we will also get test_zero
- in `deal_or_no_dialog`, each of the splits URL is passed separately to the download manager, so all splits would be always downloaded
- etc.
Then after discussing, another idea emerged: pass a `split` parameter to `_split_generators`, which knows about the splits logic, so that it can handle which splits are passed to the download manager. This approach is more accurate and can be tweaked so that it works with all the datasets we want. The only inconvenience is that then, for every target dataset, we must modify its corresponding `_split_generators` script method.
My point is that I don't think it is a good idea to implement both approaches. They could even interfere with each other!
If you agree, I would implement ONLY the second one, which is simpler, more consistent and stable and will avoid future problems. | Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`)
This PR makes several assumptions:
- `DownloadConfig` contains the configuration settings for downloading
- the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading | 431 | text: Allow downloading/processing/caching only specific splits
Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`)
This PR makes several assumptions:
- `DownloadConfig` contains the configuration settings for downloading
- the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading
> If you pass a dictionary like this:
>
> ```
> {"main_metadata": url_to_main_data,
> "secondary_metadata": url_to_sec_data,
> "train": url_train_data,
> "test": url_test_data}
> ```
>
> then only the train or test keys will be kept, which I feel not intuitive.
>
> For example if the users asks to load the "train" split, then the main and secondary metadata won't be downloaded.
> You can fix that by keeping all the keys except the splits to ignore
Hi @lhoestq, I have been thinking about this and I think it is worth discussing.
When I created this PR, my first idea was to create a "hack" inside the download manager that would be able to filter some split(s) without touching any dataset script. Of course, the download manager does not know about the splits logic, and thus this trick would only work for some very specific datasets: only the ones that pass a dict to the download manager containing only the keys "train", "validation", "test" (or the ones passed by advanced users who know they can do it), e.g. the `natural_questions` dataset (which was one of the targets).
The big inconvenience of this approach is that it is not applicable to many datasets (or worse, it would have to be constantly tweaked to cope with exceptional cases). One exceptional case is the one you pointed out. But I see others:
- the split keys can be different: train, test, dev, val, validation, eval,...
- in `hope_edi` dataset, the split keys are: TRAIN_DOWNLOAD_URL, VALIDATION_DOWNLOAD_URL
- in `few_rel` dataset, the split keys are: train_wiki, val_nyt, val_pubmed,..., pid2name
- in `curiosity_dialogs`, the split keys are: train, val, test, test_zero; this means that for every split we pass, we will also get test_zero
- in `deal_or_no_dialog`, each of the splits URL is passed separately to the download manager, so all splits would be always downloaded
- etc.
Then after discussing, another idea emerged: pass a `split` parameter to `_split_generators`, which knows about the splits logic, so that it can handle which splits are passed to the download manager. This approach is more accurate and can be tweaked so that it works with all the datasets we want. The only inconvenience is that then, for every target dataset, we must modify its corresponding `_split_generators` script method.
My point is that I don't think it is a good idea to implement both approaches. They could even interfere with each other!
If you agree, I would implement ONLY the second one, which is simpler, more consistent and stable and will avoid future problems. |
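A rough sketch of the second approach described above, where `_split_generators` receives the requested splits and only forwards the matching URLs to the download manager; the extra `splits` argument, the URL dict and the omitted `_info`/`_generate_examples` parts are assumptions for illustration, not the final API:
```python
import datasets

_URLS = {
    "train": "https://example.com/train.json",        # placeholder URLs
    "validation": "https://example.com/validation.json",
    "test": "https://example.com/test.json",
}

class MyDataset(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager, splits=None):
        # Keep only the requested splits (or all of them if none were requested)
        wanted = {name: url for name, url in _URLS.items() if splits is None or name in splits}
        downloaded = dl_manager.download_and_extract(wanted)
        return [
            datasets.SplitGenerator(name=name, gen_kwargs={"filepath": path})
            for name, path in downloaded.items()
        ]
```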
https://github.com/huggingface/datasets/pull/2249 | Allow downloading/processing/caching only specific splits | Hi @albertvillanova !
Yup I agree with you, implementing the 2nd approach seems to be the right solution | Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`)
This PR makes several assumptions:
- `DownloadConfig` contains the configuration settings for downloading
- the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading | 18 | text: Allow downloading/processing/caching only specific splits
Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`)
This PR makes several assumptions:
- `DownloadConfig` contains the configuration settings for downloading
- the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading
Hi @albertvillanova !
Yup I agree with you, implementing the 2nd approach seems to be the right solution |
https://github.com/huggingface/datasets/pull/2246 | Faster map w/ input_columns & faster slicing w/ Iterable keys | @lhoestq Just fixed the code style issues— I think it should be good to merge now :) | @lhoestq Fixes #2193
- `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set
- Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices.
Together these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing. | 17 | text: Faster map w/ input_columns & faster slicing w/ Iterable keys
@lhoestq Fixes #2193
- `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set
- Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices.
Together these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing.
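As a small illustration of the `np.searchsorted` idea described above (with made-up batch sizes), cumulative batch offsets let us locate the batch and the row within it for every requested index in one vectorized call:
```python
import numpy as np

batch_lengths = np.array([3, 5, 2, 4])        # lengths of 4 record batches
offsets = np.cumsum(batch_lengths)            # [3, 8, 10, 14] (end offset of each batch)

indices = np.array([0, 4, 9, 13])             # rows to gather from the table
batch_ids = np.searchsorted(offsets, indices, side="right")   # [0, 1, 2, 3]
starts = np.concatenate(([0], offsets))[batch_ids]            # [0, 3, 8, 10]
rows_in_batch = indices - starts                              # [0, 1, 1, 3]
```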
@lhoestq Just fixed the code style issues— I think it should be good to merge now :) |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | @lhoestq The tests for key type and duplicate keys have been added and verified successfully.
After generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:
```
Downloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\mnist\mnist\1.0.0\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...
0 examples [00:00, ? examples/s]2021-04-26 02:50:03.703836: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
FAILURE TO GENERATE DATASET: Invalid key type detected
Found Key [0, 0] of type <class 'list'>
Keys should be either str, int or bytes type
```
In the case of duplicate keys, it now gives:
```
Downloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\mnist\mnist\1.0.0\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...
0 examples [00:00, ? examples/s]2021-04-26 02:53:13.498579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "f:\datasets\datasets-1\src\datasets\load.py", line 746, in load_dataset
builder_instance.download_and_prepare(
File "f:\datasets\datasets-1\src\datasets\builder.py", line 587, in download_and_prepare
self._download_and_prepare(
File "f:\datasets\datasets-1\src\datasets\builder.py", line 665, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "f:\datasets\datasets-1\src\datasets\builder.py", line 1002, in _prepare_split
writer.write(example, key)
File "f:\datasets\datasets-1\src\datasets\arrow_writer.py", line 321, in write
self.check_duplicates()
File "f:\datasets\datasets-1\src\datasets\arrow_writer.py", line 331, in check_duplicates
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 234467
Keys should be unique and deterministic in nature
```
Please let me know if this is what we wanted to implement. Thanks! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` arrtibute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 221 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
@lhoestq The tests for key type and duplicate keys have been added and verified successfully.
After generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:
```
Downloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\mnist\mnist\1.0.0\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...
0 examples [00:00, ? examples/s]2021-04-26 02:50:03.703836: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
FAILURE TO GENERATE DATASET: Invalid key type detected
Found Key [0, 0] of type <class 'list'>
Keys should be either str, int or bytes type
```
In the case of duplicate keys, it now gives:
```
Downloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\mnist\mnist\1.0.0\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...
0 examples [00:00, ? examples/s]2021-04-26 02:53:13.498579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "f:\datasets\datasets-1\src\datasets\load.py", line 746, in load_dataset
builder_instance.download_and_prepare(
File "f:\datasets\datasets-1\src\datasets\builder.py", line 587, in download_and_prepare
self._download_and_prepare(
File "f:\datasets\datasets-1\src\datasets\builder.py", line 665, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "f:\datasets\datasets-1\src\datasets\builder.py", line 1002, in _prepare_split
writer.write(example, key)
File "f:\datasets\datasets-1\src\datasets\arrow_writer.py", line 321, in write
self.check_duplicates()
File "f:\datasets\datasets-1\src\datasets\arrow_writer.py", line 331, in check_duplicates
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 234467
Keys should be unique and deterministic in nature
```
Please let me know if this is what we wanted to implement. Thanks! |
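A minimal sketch of the salted, 128-bit `hashlib.md5` key hashing described in the PR checklist above; the class and method names are assumptions, not the actual implementation:
```python
import hashlib

class KeyHasher:
    """Toy key hasher: salted 128-bit MD5 over a str/int/bytes key."""

    def __init__(self, hash_salt: str):
        self._salt = hash_salt.encode("utf-8")

    def hash(self, key) -> int:
        if isinstance(key, str):
            data = key.encode("utf-8")
        elif isinstance(key, int):
            data = str(key).encode("utf-8")
        elif isinstance(key, bytes):
            data = key
        else:
            raise TypeError(f"Found key of type {type(key)}; keys should be str, int or bytes")
        # 16-byte (128-bit) digest, returned as an int for cheap set-membership checks
        return int.from_bytes(hashlib.md5(self._salt + data).digest(), "big")

# Different salts (e.g. one per split) give different hashes for the same key
assert KeyHasher("train").hash(42) != KeyHasher("test").hash(42)
```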
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | This looks pretty cool !
We can focus on the GeneratorBasedBuilder for now, yes.
Do you think we could make the ArrowWriter not look for duplicates by default ?
This way we can just enable duplicate detection when instantiating the writer in the GeneratorBasedBuilder for now. | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` arrtibute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 47 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
This looks pretty cool !
We can focus on the GeneratorBasedBuilder for now, yes.
Do you think we could make the ArrowWriter not look for duplicates by default ?
This way we can just enable duplicate detection when instantiating the writer in the GeneratorBasedBuilder for now.
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | Thank you @lhoestq
> Do you think we could make the ArrowWriter not look for duplicates by default ?
We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.
However, since only `GeneratorBasedBuilder` uses the `write()` function (which includes the detection code) and the others like `ArrowBasedBuilder` use `write_table()`, which remains as it was (without duplicate detection), I don't think it would be necessary.
Nonetheless, doing this would require just some small changes. Please let me know your thoughts on this. Thanks! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` arrtibute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 85 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` arrtibute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
Thank you @lhoestq
> Do you think we could make the ArrowWriter not look for duplicates by default ?
We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.
However, since only `GeneratorBasedBuilder` uses the `write()` function (which includes the detection code) and the others like `ArrowBasedBuilder` use `write_table()`, which remains as it was (without duplicate detection), I don't think it would be necessary.
Nonetheless, doing this would require just some small changes. Please let me know your thoughts on this. Thanks! |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | I like the idea of having the duplicate detection optional for other uses of the ArrowWriter.
This class is the main tool to write python data in arrow format so I'd expect it to be flexible.
That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.
An alternative would be to subclass the writer to include duplicates detection in another class.
Both options are fine for me, let me know what you think ! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 82 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
I like the idea of having the duplicate detection optional for other uses of the ArrowWriter.
This class is the main tool to write python data in arrow format so I'd expect it to be flexible.
That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.
An alternative would be to subclass the writer to include duplicates detection in another class.
Both options are fine for me, let me know what you think ! |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | > This class is the main tool to write python data in arrow format so I'd expect it to be flexible.
> That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.
Well, that makes sense as the writer can indeed be used for other purposes as well.
> We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.
I think that this would be the simplest and most efficient option for achieving this, as subclassing the writer only for this would lead to unnecessary complexity and code duplication (in case of `writer()`).
I will be adding the changes soon. Thanks for the feedback @lhoestq! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 117 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
> This class is the main tool to write python data in arrow format so I'd expect it to be flexible.
> That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.
Well, that makes sense as the writer can indeed be used for other purposes as well.
> We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.
I think that this would be the simplest and most efficient option for achieving this, as subclassing the writer only for this would lead to unnecessary complexity and code duplication (in case of `writer()`).
I will be adding the changes soon. Thanks for the feedback @lhoestq! |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | @lhoestq I have pushed the final changes just now.
Now, the keys and duplicate checking will be necessary only when the `ArrowWriter` is initialized with `check_duplicates=True` specifically (in this case, for `GeneratorBasedBuilders`)
Let me know if this is what was required. Thanks! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 42 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
@lhoestq I have pushed the final changes just now.
Now, the keys and duplicate checking will be necessary only when the `ArrowWriter` is initialized with `check_duplicates=True` specifically (in this case, for `GeneratorBasedBuilders`)
Let me know if this is what was required. Thanks! |
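A simplified sketch of the opt-in pattern settled on above, where duplicate detection only runs when the writer is created with `check_duplicates=True`; this toy class is illustrative only and not the real `ArrowWriter`:
```python
import hashlib

class ToyWriter:
    """Illustrative writer with optional duplicate-key detection."""

    def __init__(self, check_duplicates: bool = False, hash_salt: str = ""):
        self.check_duplicates = check_duplicates
        self._salt = hash_salt.encode("utf-8")
        self._seen_hashes = set()
        self.rows = []

    def _hash(self, key) -> int:
        data = key if isinstance(key, bytes) else str(key).encode("utf-8")
        return int.from_bytes(hashlib.md5(self._salt + data).digest(), "big")

    def write(self, example: dict, key=None):
        if self.check_duplicates:
            key_hash = self._hash(key)
            if key_hash in self._seen_hashes:
                raise ValueError(f"Found duplicate key: {key}")
            self._seen_hashes.add(key_hash)
        self.rows.append(example)  # the real writer buffers examples and writes Arrow batches

writer = ToyWriter(check_duplicates=True)  # a GeneratorBasedBuilder would opt in like this
writer.write({"x": 1}, key=0)
writer.write({"x": 2}, key=1)
```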
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | @lhoestq Thanks for the feedback! I will be adding the tests for the same very soon.
However, I'm not sure what exactly is causing the `segmentation fault` in the failing CI tests. It seems to be something in `test_concatenation_table_cast` from `test_table.py`, but I can't pin down what exactly. It would be great if you could help. Thanks!
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 59 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
@lhoestq Thanks for the feedback! I will be adding the tests for the same very soon.
However, I'm not sure as to what exactly is causing the `segmentation fault` in the failing CI tests. It seems to be something from `test_concatenation_table_cast` from `test_table.py`, but I'm not sure as to what exactly. Would be great if you could help. Thanks! |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | You can merge master into your branch to fix this issue.
Basically pyarrow 4.0.0 has a segfault issue (which has now been resolved on the master branch of pyarrow).
So until 4.0.1 comes out we changed to using `pyarrow<4.0.0` recently. | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 40 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
You can merge master into your branch to fix this issue.
Basically pyarrow 4.0.0 has a segfault issue (which has now been resolved on the master branch of pyarrow).
So until 4.0.1 comes out we changed to using `pyarrow<4.0.0` recently. |
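For illustration only, the pin described above could look like this in a packaging file; the exact lower bound here is an assumption, not the repository's actual specifier.

```python
# setup.py sketch: stay below pyarrow 4.0.0 until the segfault fix ships in 4.0.1
install_requires = [
    "pyarrow>=1.0.0,<4.0.0",
]
```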
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | @lhoestq Thanks for the help with the CI failures. Apologies for the multiple merge commits. My local repo got messy while merging which led to this.
Will be pushing the commit for the tests soon! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 35 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
@lhoestq Thanks for the help with the CI failures. Apologies for the multiple merge commits. My local repo got messy while merging which led to this.
Will be pushing the commit for the tests soon! |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | Hey @lhoestq, I've just added the required tests for checking key duplicates and invalid key data types.
I think we have caught a nice little issue as 27 datasets are currently using non-unique keys (hence, the failing tests: All these datasets are giving `DuplicateKeysError` during testing).
These datasets were not detected earlier as there was no key checking when `num_examples < writer_batch_size` due to which they passed the dummy data generation test. This bug was fixed by adding the test to `writer.finalize()` method as well for checking any leftover examples from batches.
I'd like to make changes to the faulty datasets' scripts. However, I was wondering if I should do that in this PR itself or open a new PR as this might get messy in the same PR. Let me know your thoughts on this. Thanks! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 137 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
Hey @lhoestq, I've just added the required tests for checking key duplicates and invalid key data types.
I think we have caught a nice little issue as 27 datasets are currently using non-unique keys (hence, the failing tests: All these datasets are giving `DuplicateKeysError` during testing).
These datasets were not detected earlier as there was no key checking when `num_examples < writer_batch_size` due to which they passed the dummy data generation test. This bug was fixed by adding the test to `writer.finalize()` method as well for checking any leftover examples from batches.
I'd like to make changes to the faulty datasets' scripts. However, I was wondering if I should do that in this PR itself or open a new PR as this might get messy in the same PR. Let me know your thoughts on this. Thanks! |
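The fix described above runs the duplicate check per written batch and again in `finalize()`, so that splits with fewer examples than `writer_batch_size` are still validated. Below is a toy sketch of that control flow under those assumptions; it is not the real `ArrowWriter`.

```python
class DuplicateKeysError(Exception):
    """Raised when two examples of a split share the same key (illustrative)."""

class TinyWriter:
    """Toy writer showing where the duplicate check has to run; not the real ArrowWriter."""

    def __init__(self, writer_batch_size=1000, check_duplicates=True):
        self.writer_batch_size = writer_batch_size
        self.check_duplicates = check_duplicates
        self.current_examples = []
        self.hkey_record = []  # (hashed_key, key) pairs of the current batch

    def write(self, example, key):
        self.current_examples.append(example)
        if self.check_duplicates:
            self.hkey_record.append((hash(key), key))  # the real code uses a salted 128-bit md5
        if len(self.current_examples) >= self.writer_batch_size:
            if self.check_duplicates:
                self._check_duplicate_keys()
            self._flush()

    def finalize(self):
        # Without this check, a split smaller than writer_batch_size would never be validated.
        if self.check_duplicates:
            self._check_duplicate_keys()
        self._flush()

    def _check_duplicate_keys(self):
        seen = {}
        for hashed, key in self.hkey_record:
            if hashed in seen:
                raise DuplicateKeysError(f"Found duplicate key: {key!r}")
            seen[hashed] = key

    def _flush(self):
        self.current_examples, self.hkey_record = [], []
```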
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | Hi ! Once https://github.com/huggingface/datasets/pull/2333 is merged, feel free to merge master into your branch to fix the CI :) | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 19 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
Hi ! Once https://github.com/huggingface/datasets/pull/2333 is merged, feel free to merge master into your branch to fix the CI :) |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | Thanks a lot for the help @lhoestq. Besides merging the new changes, I guess this PR is completed for now :) | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 21 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
Thanks a lot for the help @lhoestq. Besides merging the new changes, I guess this PR is completed for now :) |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | I just merged the PR, feel free to merge `master` into your branch. It should fix most of the CI issues. If there are some left we can fix them in this PR :) | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 35 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
I just merged the PR, feel free to merge `master` into your branch. It should fix most of the CI issues. If there are some left we can fix them in this PR :) |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | @lhoestq Looks like the PR is completed now. Thanks for helping me out so much in this :) | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 18 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
@lhoestq Looks like the PR is completed now. Thanks for helping me out so much in this :) |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | Hey @lhoestq, I've added the test and corrected the CI errors as well. Do let me know if this requires any change. Thanks! | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 23 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
Hey @lhoestq, I've added the test and corrected the CI errors as well. Do let me know if this requires any change. Thanks! |
https://github.com/huggingface/datasets/pull/2245 | Add `key` type and duplicates verification with hashing | Merging. I'll update the comment on the master branch (for some reason I can't edit files on this branch) | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | 19 | text: Add `key` type and duplicates verification with hashing
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
Merging. I'll update the comment on the master branch (for some reason I can't edit files on this branch) |
https://github.com/huggingface/datasets/pull/2244 | Set specific cache directories per test function call | It looks like the `comet` metric test fails because it tries to load a model in memory.
In the tests I think we have `patch_comet` that mocks the model download + inference. Not sure why it didn't work though.
I can take a look tomorrow (this afternoon is the pytorch ecosystem day) | Implement specific cache directories (datasets, metrics and modules) per test function call.
Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.
This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
| 52 | text: Set specific cache directories per test function call
Implement specific cache directories (datasets, metrics and modules) per test function call.
Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.
This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
It looks like the `comet` metric test fails because it tries to load a model in memory.
In the tests I think we have `patch_comet` that mocks the model download + inference. Not sure why it didn't work though.
I can take a look tomorrow (this afternoon is the pytorch ecosystem day) |
https://github.com/huggingface/datasets/pull/2244 | Set specific cache directories per test function call | @lhoestq finally I did not find out why the mock is not used... If you can give me some other hint tomorrow... | Implement specific cache directories (datasets, metrics and modules) per test function call.
Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.
This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
| 22 | text: Set specific cache directories per test function call
Implement specific cache directories (datasets, metrics and modules) per test function call.
Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.
This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
@lhoestq finally I did not find out why the mock is not used... If you can give me some other hint tomorrow... |
https://github.com/huggingface/datasets/pull/2241 | Add SLR32 to OpenSLR | > And yet another one ! Thanks a lot :)
I just hope you don’t get fed up with OpenSLR PRs 😊 there are still a few other datasets created by Google in OpenSLR that are not in the hf dataset yet
| I would like to add SLR32 to OpenSLR. It contains four South African languages: Afrikaans, Sesotho, Setswana and isiXhosa | 40 | text: Add SLR32 to OpenSLR
I would like to add SLR32 to OpenSLR. It contains four South African languages: Afrikaans, Sesotho, Setswana and isiXhosa
> And yet another one ! Thanks a lot :)
I just hope you don’t get fed up with OpenSLR PRs 😊 there are still a few other datasets created by Google in OpenSLR that are not in the hf dataset yet
|
https://github.com/huggingface/datasets/pull/2228 | [WIP] Add ArrayXD support for fixed size list. | Awesome thanks ! To fix the CI you just need to merge master into your branch.
The error is unrelated to your PR | Add support for fixed size list for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146
Since offsets are not stored anymore, the file size is now roughly equal to the actual data size. | 23 | text: [WIP] Add ArrayXD support for fixed size list.
Add support for fixed size list for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146
Since offsets are not stored anymore, the file size is now roughly equal to the actual data size.
Awesome thanks ! To fix the CI you just need to merge master into your branch.
The error is unrelated to your PR |
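A small pyarrow illustration of the fixed-size list idea mentioned above, assuming a pyarrow version that provides `FixedSizeListArray.from_arrays`; this is not the PR's code.

```python
import pyarrow as pa

# With a fixed-size list the length is part of the type, so no per-row offsets are stored.
values = pa.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], type=pa.float32())
fixed = pa.FixedSizeListArray.from_arrays(values, 3)  # two rows of length 3
print(fixed.type)         # fixed_size_list<item: float>[3]
print(fixed.to_pylist())  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```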
https://github.com/huggingface/datasets/pull/2225 | fixed one instance of 'train' to 'test' | Thanks ! good catch
Could you also update the metadata of this dataset ?
You can do so by running
```
datasets-cli test ./datasets/newsgroup --all_configs --save_infos --ignore_verifications
```
This should update the dataset_infos.json file that contains the size of all the splits for example. | I believe this should be 'test' instead of 'train' | 44 | text: fixed one instance of 'train' to 'test'
I believe this should be 'test' instead of 'train'
Thanks ! good catch
Could you also update the metadata of this dataset ?
You can do so by running
```
datasets-cli test ./datasets/newsgroup --all_configs --save_infos --ignore_verifications
```
This should update the dataset_infos.json file that contains the size of all the splits for example. |
https://github.com/huggingface/datasets/pull/2223 | Set test cache config | > why a cache dir per test function does not work?
Probably because we end up with multiple `datasets_module` in the python path. This breaks the import of all the datasets/metrics modules.
If you want to use one modules cache per test, you may need to remove the `datasets_module` that was added to the python path during the test.
Indeed if the module cache hasn't been initialized, then it's added to the python path by calling `init_dynamic_modules`:
https://github.com/huggingface/datasets/blob/ba76012a19193a35053b9e20243ff40c2b4204ab/src/datasets/load.py#L291-L291 | Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects. | 77 | text: Set test cache config
Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
> why a cache dir per test function does not work?
Probably because we end up with multiple `datasets_module` in the python path. This breaks the import of all the datasets/metrics modules.
If you want to use one modules cache per test, you may need to remove the `datasets_module` that was added to the python path during the test.
Indeed if the module cache hasn't been initialized, then it's added to the python path by calling `init_dynamic_modules`:
https://github.com/huggingface/datasets/blob/ba76012a19193a35053b9e20243ff40c2b4204ab/src/datasets/load.py#L291-L291 |
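Based on the explanation above, one hedged sketch of a per-test cache is a pytest fixture that points the config at a temporary directory and then cleans `sys.path` so stale `datasets_modules` entries do not accumulate; the config attribute names here are assumptions about library internals, not a documented API.

```python
import sys

import pytest

import datasets.config

@pytest.fixture
def isolated_cache(tmp_path, monkeypatch):
    """Per-test cache sketch; attribute names are assumptions, not a documented API."""
    monkeypatch.setattr(datasets.config, "HF_DATASETS_CACHE", str(tmp_path / "data"))
    monkeypatch.setattr(datasets.config, "HF_MODULES_CACHE", str(tmp_path / "modules"))
    yield tmp_path
    # Remove any dynamic-modules path added for this test, otherwise multiple
    # `datasets_modules` directories pile up on sys.path and break imports.
    sys.path[:] = [p for p in sys.path if not p.startswith(str(tmp_path))]
```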
https://github.com/huggingface/datasets/pull/2223 | Set test cache config | @lhoestq, for the moment, this PR avoids populating the `~/.cache` dir during testing, which is already an improvement, isn't it? | Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects. | 20 | text: Set test cache config
Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
@lhoestq, for the moment, this PR avoids populating the `~/.cache` dir during testing, which is already an improvement, isn't it? |
https://github.com/huggingface/datasets/pull/2223 | Set test cache config | Yes we can merge it this way if you're fine with it !
This is a good improvement | Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects. | 18 | text: Set test cache config
Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
Yes we can merge it this way if you're fine with it !
This is a good improvement |
https://github.com/huggingface/datasets/pull/2223 | Set test cache config | I will eventually try to implement a `cache_dir` per test function in another PR, but I think I should first fix some side effects in tests: each test function should be atomic and able to have its own `cache_dir` without being affected by the `cache_dir` set in other test functions. | Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects. | 50 | text: Set test cache config
Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
I will eventually try to implement a `cache_dir` per test function in another PR, but I think I should first fix some side effects in tests: each test function should be atomic and able to have its own `cache_dir` without being affected by the `cache_dir` set in other test functions. |
https://github.com/huggingface/datasets/pull/2222 | Fix too long WindowsFileLock name | Windows users should disable the max path length limit. It's a nightmare to handle it.
Also the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work. | Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename. | 45 | text: Fix too long WindowsFileLock name
Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.
Windows users should disable the max path length limit. It's a nightmare to handle it.
Also the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work. |
https://github.com/huggingface/datasets/pull/2222 | Fix too long WindowsFileLock name | Do you agree with handling the case where MAX_PATH is not disabled? If not, we can close this PR.
If so, would a deterministic lock path work instead of a random one? | Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename. | 31 | text: Fix too long WindowsFileLock name
Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.
Do you agree with handling the case where MAX_PATH is not disabled? If not, we can close this PR.
If so, would a deterministic lock path work instead of a random one? |
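One hedged sketch of the deterministic (non-random) shortening raised in the question above: hash the original path so every process derives the same lock file name. The helper and the length limit are assumptions, not the PR's actual code.

```python
import hashlib
import os

def shorten_lock_path(lock_path: str, max_length: int = 255) -> str:
    """Deterministic shortening sketch: the same input path always maps to the same lock name."""
    if len(lock_path) <= max_length:
        return lock_path
    digest = hashlib.sha256(lock_path.encode("utf-8")).hexdigest()[:16]
    return os.path.join(os.path.dirname(lock_path), f"{digest}.lock")
```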
https://github.com/huggingface/datasets/pull/2222 | Fix too long WindowsFileLock name | I'd rather not handle this at all, since there will be other places in the code where the limit will break things | Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename. | 22 | text: Fix too long WindowsFileLock name
Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.
I'd rather not handle this at all, since there will be other places in the code where the limit will break things |
https://github.com/huggingface/datasets/pull/2220 | Fix infinite loop in WindowsFileLock | How is it possible to get an infinite loop ? Can you add more details ? | Raise exception to avoid infinite loop. | 16 | text: Fix infinite loop in WindowsFileLock
Raise exception to avoid infinite loop.
How is it possible to get an infinite loop ? Can you add more details ? |
https://github.com/huggingface/datasets/pull/2220 | Fix infinite loop in WindowsFileLock | Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.
If another process has the file locked, then `PermissionError` is raised. In this case, `pass` is OK. | Raise exception to avoid infinite loop. | 45 | text: Fix infinite loop in WindowsFileLock
Raise exception to avoid infinite loop.
Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.
If another process has the file locked, then `PermissionError` is raised. In this case, `pass` is OK. |
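A rough sketch of the acquire loop being discussed (not the filelock package's actual code): `PermissionError` keeps polling because another process holds the lock, while `FileNotFoundError` is re-raised so a too-long Windows path cannot spin forever.

```python
import os
import time

def acquire(lock_path: str, poll_interval: float = 0.05) -> int:
    """Illustrative acquire loop; the platform-specific locking call is simplified to os.open."""
    while True:
        try:
            return os.open(lock_path, os.O_RDWR | os.O_CREAT | os.O_TRUNC)
        except PermissionError:
            # Another process holds the lock: waiting and retrying is fine.
            time.sleep(poll_interval)
        except FileNotFoundError:
            # On Windows this can mean the path exceeds MAX_PATH; `pass` here would
            # loop forever, so the exception must propagate.
            raise
```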
https://github.com/huggingface/datasets/pull/2220 | Fix infinite loop in WindowsFileLock | Note that the filelock module comes from this project that hasn't changed in years - while still being used by tens of thousands of projects:
https://github.com/benediktschmitt/py-filelock
Unless we have proper tests for this, I wouldn't recommend changing it | Raise exception to avoid infinite loop. | 39 | text: Fix infinite loop in WindowsFileLock
Raise exception to avoid infinite loop.
Note that the filelock module comes from this project that hasn't changed in years - while still being used by tens of thousands of projects:
https://github.com/benediktschmitt/py-filelock
Unless we have proper tests for this, I wouldn't recommend changing it |
https://github.com/huggingface/datasets/pull/2220 | Fix infinite loop in WindowsFileLock | I'm pretty sure many things from the library could break for windows users that haven't disabled the max path length limit.
Maybe it would be simpler to simply raise an error on startup. For example, for windows users the error could ask them to disable the limit if it's not been disabled yet ? | Raise exception to avoid infinite loop. | 54 | text: Fix infinite loop in WindowsFileLock
Raise exception to avoid infinite loop.
I'm pretty sure many things from the library could break for windows users that haven't disabled the max path length limit.
Maybe it would be simpler to simply raise an error on startup. For example, for windows users the error could ask them to disable the limit if it's not been disabled yet ? |
https://github.com/huggingface/datasets/pull/2219 | Added CUAD dataset | 1) Changed the language in a few places apart from those you mentioned in README
2) Reduced the size of dummy data folder by removing all other entries except the first
3) Updated YAML tags by using the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while | Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | 56 | text: Added CUAD dataset
Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1).
1) Changed the language in a few places apart from those you mentioned in README
2) Reduced the size of dummy data folder by removing all other entries except the first
3) Updated YAML tags by using the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while |
https://github.com/huggingface/datasets/pull/2219 | Added CUAD dataset | @bhavitvyamalik Thanks for adding the dataset on huggingface! Can you please add a metric also for the dataset using the squad_v2 metric file? | Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | 23 | text: Added CUAD dataset
Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1).
@bhavitvyamalik Thanks for adding the dataset on huggingface! Can you please add a metric also for the dataset using the squad_v2 metric file? |
https://github.com/huggingface/datasets/pull/2215 | Add datasets SLR35 and SLR36 to OpenSLR | Hi @lhoestq,
Could you please help me? I got this error message in all "ci/circleci: run_dataset_script_tests_pyarrow*" tests:
```
...
"""Wrapper classes for various types of tokenization."""
from bleurt.lib import bert_tokenization
import tensorflow.compat.v1 as tf
> import sentencepiece as spm
E ModuleNotFoundError: No module named 'sentencepiece'
...
```
I am not sure why I get it. Thanks.
| I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. | 57 | text: Add datasets SLR35 and SLR36 to OpenSLR
I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.
Hi @lhoestq,
Could you please help me? I got this error message in all "ci/circleci: run_dataset_script_tests_pyarrow*" tests:
```
...
"""Wrapper classes for various types of tokenization."""
from bleurt.lib import bert_tokenization
import tensorflow.compat.v1 as tf
> import sentencepiece as spm
E ModuleNotFoundError: No module named 'sentencepiece'
...
```
I am not sure why I get it. Thanks.
|
https://github.com/huggingface/datasets/pull/2215 | Add datasets SLR35 and SLR36 to OpenSLR | Hi ! This issue appeared on master since the last update of `BLEURT`.
I'm working on a fix. You can ignore this issue for this PR | I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. | 26 | text: Add datasets SLR35 and SLR36 to OpenSLR
I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.
Hi ! This issue appeared on master since the last update of `BLEURT`.
I'm working on a fix. You can ignore this issue for this PR |
https://github.com/huggingface/datasets/pull/2215 | Add datasets SLR35 and SLR36 to OpenSLR | > Hi ! This issue appeared on master since the last update of `BLEURT`.
> I'm working on a fix. You can ignore this issue for this PR
Thanks for the info | I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. | 32 | text: Add datasets SLR35 and SLR36 to OpenSLR
I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.
> Hi ! This issue appeared on master since the last update of `BLEURT`.
> I'm working on a fix. You can ignore this issue for this PR
Thanks for the info |
https://github.com/huggingface/datasets/pull/2203 | updated banking77 train and test data | Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ? | 31 | text: updated banking77 train and test data
Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ? |
|
https://github.com/huggingface/datasets/pull/2199 | Fix backward compatibility in Dataset.load_from_disk | What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?
This way future datasets can also be reloaded from older versions of the lib
`_indices_files` was introduced in a recent PR and was not released | Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files".
Related to #2195. | 36 | text: Fix backward compatibility in Dataset.load_from_disk
Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files".
Related to #2195.
What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?
This way future datasets can also be reloaded from older versions of the lib
`_indices_files` was introduced in a recent PR and was not released |
https://github.com/huggingface/datasets/pull/2199 | Fix backward compatibility in Dataset.load_from_disk | Yes, I have seen it is not released yet...
You are right! It was your awesome PR on Tables which renamed this. If there is no particular reason for this renaming, yes, we could switch it back to the previous `_indices_data_files`. ;) | Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files".
Related to #2195. | 42 | text: Fix backward compatibility in Dataset.load_from_disk
Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files".
Related to #2195.
Yes, I have seen it is not released yet...
You are right! It was your awesome PR on Tables which renamed this. If there is no particular reason for this renaming, yes, we could switch it back to the previous `_indices_data_files`. ;) |
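A minimal sketch of the backward-compatible read being discussed, assuming the indices metadata lives in a JSON state file; the helper itself is hypothetical and only the two key names come from the comments above.

```python
import json

def read_indices_files(state_path: str):
    """Accept both the old and the new key name when loading a dataset saved to disk."""
    with open(state_path, encoding="utf-8") as f:
        state = json.load(f)
    # Older saves used "_indices_data_files"; newer code may use "_indices_files".
    return state.get("_indices_data_files") or state.get("_indices_files") or []
```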