html_url | title | comments | body | comment_length_in_words | text
---|---|---|---|---|---
https://github.com/huggingface/datasets/pull/390 | Concatenate datasets | Thanks for the suggestion! `dset1.concatenate(dset2)` does feel more natural. This does seem to be a different "class" of transformation function than `map()` or `filter()`, though, since it acts on two datasets rather than on one. I would prefer that the function signature treat both datasets symmetrically.
Python lists have `list1 + list2` or `list1.extend(list2)`.
NumPy has `np.concatenate((arr1, arr2))`.
Pandas has `pd.concat((df1, df2))`.
PyTorch has `ConcatDataset((dset1, dset2))`.
Given the symmetrical treatment, and because it clearly communicates that this creates a new object rather than chaining onto the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work? | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | 105 | text: Concatenate datasets
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
```
Thanks for the suggestion! `dset1.concatenate(dset2)` does feel more natural. This does seem to be a different "class" of transformation function than `map()` or `filter()`, though, since it acts on two datasets rather than on one. I would prefer that the function signature treat both datasets symmetrically.
Python lists have `list1 + list2` or `list1.extend(list2)`.
NumPy has `np.concatenate((arr1, arr2))`.
Pandas has `pd.concat((df1, df2))`.
PyTorch has `ConcatDataset((dset1, dset2))`.
Given the symmetrical treatment, and because it clearly communicates that this creates a new object rather than chaining onto the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work? |
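To make the symmetric design concrete, here is a rough sketch of what such a module-level function could look like. The names (`concatenate`, `.schema`, `.data`) and the `nlp.Dataset` constructor call are assumptions for illustration, not the implementation that was eventually merged:
```python
import pyarrow as pa
import nlp

def concatenate(dsets):
    # Treat all inputs symmetrically: every schema must match...
    if len({str(d.schema) for d in dsets}) != 1:
        raise ValueError("All datasets must share the same schema")
    # ...then chain the underlying Arrow tables, which is cheap on the Arrow side
    return nlp.Dataset(pa.concat_tables([d.data for d in dsets]))
```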
https://github.com/huggingface/datasets/pull/390 | Concatenate datasets | The multi-task discussion is interesting, thanks for pointing me to that! I'll be focusing on T5 in a few weeks, so I'm sure I'll have many opinions then :). For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text. | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | 63 | text: Concatenate datasets
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
```
The multi-task discussion is interesting, thanks for pointing me to that! I'll be focusing on T5 in a few weeks, so I'm sure I'll have many opinions then :). For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text. |
https://github.com/huggingface/datasets/pull/390 | Concatenate datasets | > Given the symmetrical treatment, and because it clearly communicates that this creates a new object rather than chaining onto the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?
Yep, I like this idea. Maybe `nlp.concatenate_datasets()`?
> For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.
I agree :) | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | 90 | text: Concatenate datasets
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
```
> Given the symmetrical treatment, and because it clearly communicates that this creates a new object rather than chaining onto the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?
Yep, I like this idea. Maybe `nlp.concatenate_datasets()`?
> For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.
I agree :) |
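For reference, here is how the agreed-upon name reads in practice. This sketch predates the merged function, so the exact signature is an assumption; it mirrors the `Dataset.from_concat` example above:
```python
import nlp

dset1 = nlp.Dataset.from_dict({"id": [0, 1, 2]})
dset2 = nlp.Dataset.from_dict({"id": [3, 4, 5]})
dset_concat = nlp.concatenate_datasets([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
```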
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | By the way, the reason this is an issue for me is that I want to be able to "save" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on it later without having to re-process the data.
Is pickling/unpickling the Dataset object the "sanctioned" way of doing this? Or is there a better way that I'm missing? | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 76 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
By the way, the reason this is an issue for me is that I want to be able to "save" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on it later without having to re-process the data.
Is pickling/unpickling the Dataset object the "sanctioned" way of doing this? Or is there a better way that I'm missing? |
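To make the `dict`-subclass pitfall and the fix concrete, here is a minimal, self-contained sketch of the kind of explicit pickle interface the PR body describes. The class is a simplified stand-in for the real `SplitDict`, not the actual patch:
```python
import pickle

class SplitDictSketch(dict):
    """Simplified stand-in: a dict subclass that forbids __setitem__."""

    def __setitem__(self, key, value):
        raise ValueError("Cannot add elem. Use .add() instead.")

    def add(self, key, value):
        super().__setitem__(key, value)

    def __reduce__(self):
        # (callable, args, state): unpickling calls SplitDictSketch() and then
        # __setstate__, bypassing the default per-item dict.__setitem__ path
        # that raises above
        return (self.__class__, (), dict(self))

    def __setstate__(self, state):
        for key, value in state.items():
            self.add(key, value)

sd = SplitDictSketch()
sd.add("train", 100)
assert pickle.loads(pickle.dumps(sd)) == sd  # round-trips without error
```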
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | I've had success with saving datasets to disk via:
```python
cache_file = "/my/dset.cache"
dset = dset.map(whatever, cache_file_name=cache_file)
# then, later
dset = nlp.Dataset.from_file(cache_file)
```
This restores the dataset with all the attributes I need. | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 34 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
I've had success with saving datasets to disk via:
```python
cache_file = "/my/dset.cache"
dset = dset.map(whatever, cache_file_name=cache_file)
# then, later
dset = nlp.Dataset.from_file(cache_file)
```
This restores the dataset with all the attributes I need. |
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | Thanks @jarednielsen, that makes sense. I'm a little wary of messing with the cache files, since I still don't really understand what's going on under the hood with Apache Arrow.
Related question: I'd like to do parallel pre-processing of the dataset. I know how to break the dataset up via sharding, but is there any way to combine the shards back together again once the processing is done? Right now I'm probably just going to iterate over each shard, write the contexts to a txt file, and then cat the txt files, but it feels like there ought to be a nicer way to concatenate datasets. | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 106 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
Thanks @jarednielsen, that makes sense. I'm a little wary of messing with the cache files, since I still don't really understand what's going on under the hood with Apache Arrow.
Related question: I'd like to do parallel pre-processing of the dataset. I know how to break the dataset up via sharding, but is there any way to combine the shards back together again once the processing is done? Right now I'm probably just going to iterate over each shard, write the contexts to a txt file, and then cat the txt files, but it feels like there ought to be a nicer way to concatenate datasets. |
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | Haha, opened a PR for that functionality about an hour ago: https://github.com/huggingface/nlp/pull/390. Glad we're on the same page :) | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 19 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
Haha, opened a PR for that functionality about an hour ago: https://github.com/huggingface/nlp/pull/390. Glad we're on the same page :) |
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | Datasets are not supposed to be pickled, as pickle tries to load the whole dataset into memory, if I'm not wrong (and writes all the data to disk).
The concatenate method, however, is a very cool feature; looking forward to having it merged :) | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 44 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
Datasets are not supposed to be pickled, as pickle tries to load the whole dataset into memory, if I'm not wrong (and writes all the data to disk).
The concatenate method, however, is a very cool feature; looking forward to having it merged :) |
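A quick way to check that from the Python side (a sketch; the dataset choice is arbitrary, and `pickle.dumps` worked at the time even though `loads` failed): the payload scales with the data itself, not with small cache-file references.
```python
import pickle
import nlp

dset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
payload = pickle.dumps(dset)
print(f"pickled size: {len(payload) / 1e6:.1f} MB")  # roughly the corpus size
```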
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | Ah, yes, you are correct. The pickle file contains the whole dataset, not just the cache names, which is not quite what I expected.
I tried adding a warning when pickling a Dataset, to prevent others like me from trying it. Interestingly, however, the warning is raised whenever any function on the dataset is called (select, shard, etc.).
```
import nlp
wiki = nlp.load_dataset('wikipedia', split='train')
wiki = wiki.shard(16, 0) # Triggers pickling of dataset
```
I believe this is because [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which gets the function signature, is actually pickling the whole dataset (and thereby serializing all the data to text). I checked by printing that string, and sure enough it was full of Wikipedia articles.
I don't think the whole pickling thing is worth the effort, so I'll close the PR. But I did want to mention this serialization behavior in case it's not intended. | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 146 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
Ah, yes, you are correct. The pickle file contains the whole dataset, not just the cache names, which is not quite what I expected.
I tried adding a warning when pickling a Dataset, to prevent others like me from trying it. Interestingly, however, the warning is raised whenever any function on the dataset is called (select, shard, etc.).
```
import nlp
wiki = nlp.load_dataset('wikipedia', split='train')
wiki = wiki.shard(16, 0) # Triggers pickling of dataset
```
I believe this is because [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which gets the function signature, is actually pickling the whole dataset (and thereby serializing all the data to text). I checked by printing that string, and sure enough it was full of Wikipedia articles.
I don't think the whole pickling thing is worth the effort, so I'll close the PR. But I did want to mention this serialization behavior in case it's not intended. |
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | Keeping this open because I would like to keep brainstorming a bit on this.
One note on this is that we should have a clean serialization workflow, probably one that could serialize to a few formats (arrow, parquet and tfrecords come to mind). | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 43 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
Keeping this open because I would like to keep brainstorming a bit on this.
One note on this is that we should have a clean serialization workflow, probably one that could serialize to a few formats (arrow, parquet and tfrecords come to mind). |
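As a sketch of what one of those export paths could look like (no such `nlp` API existed at the time, and `dset.data` assumes the dataset exposes its underlying Arrow table):
```python
import nlp
import pyarrow.parquet as pq

dset = nlp.load_dataset("squad", split="train")
pq.write_table(dset.data, "squad_train.parquet")  # Parquet export
# An Arrow IPC writer or a TFRecord writer would cover the other two formats.
```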
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | This PR could be useful. My specific use case is `multiprocessing.Pool` for parallel preprocessing (because of the Python tokenization bottleneck at https://github.com/huggingface/transformers/issues/5729). I shard a large dataset, run map on each shard within a multiprocessing pool, and then concatenate them back together. This is only possible if a dataset can be pickled, otherwise the logic is much more complex. There's no reason to make it un-picklable, even if it's not the recommended usage.
```python
import nlp
import multiprocessing
def func(ex):
    return {"text": "Prefix: " + ex["text"]}

def map_helper(dset):
    return dset.map(func)

n_shards = 16
dset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
with multiprocessing.Pool(processes=n_shards) as pool:
    shards = pool.map(map_helper, [dset.shard(n_shards, i, contiguous=True) for i in range(n_shards)])
dset = nlp.concatenate_datasets(shards)
```
| It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 115 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
This PR could be useful. My specific use case is `multiprocessing.Pool` for parallel preprocessing (because of the Python tokenization bottleneck at https://github.com/huggingface/transformers/issues/5729). I shard a large dataset, run map on each shard within a multiprocessing pool, and then concatenate them back together. This is only possible if a dataset can be pickled, otherwise the logic is much more complex. There's no reason to make it un-picklable, even if it's not the recommended usage.
```python
import nlp
import multiprocessing
def func(ex):
    return {"text": "Prefix: " + ex["text"]}

def map_helper(dset):
    return dset.map(func)

n_shards = 16
dset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
with multiprocessing.Pool(processes=n_shards) as pool:
    shards = pool.map(map_helper, [dset.shard(n_shards, i, contiguous=True) for i in range(n_shards)])
dset = nlp.concatenate_datasets(shards)
```
|
https://github.com/huggingface/datasets/pull/389 | Fix pickling of SplitDict | Yes, I agree.
#423 just got merged and should allow serialization of `SplitDict`. Could you try it and see if it's OK on your side now? | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | 26 | text: Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The failure occurs on line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492): `SplitDict` subclasses `dict`, and pickle treats dicts specially, expecting access to `dict.__setitem__`, which the class disallows.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
Yes, I agree.
#423 just got merged and should allow serialization of `SplitDict`. Could you try it and see if it's OK on your side now? |
https://github.com/huggingface/datasets/pull/386 | Update dataset loading and features - Add TREC dataset | I just copied the files that are on Google Storage to follow the new `_relative_data_dir` format. It should be good to merge now :)
Well, actually, it seems there are some merge conflicts to fix first. | This PR:
- add a template for a new dataset script
- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script, the data will be automatically regenerated instead of falling back to the previous version (which is usually outdated). This in particular makes it easier to iterate when writing a new dataset loading script.
- fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept lists, NumPy arrays, and PyTorch/TensorFlow tensors.
- add the TREC-6 dataset | 37 | text: Update dataset loading and features - Add TREC dataset
This PR:
- add a template for a new dataset script
- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script, the data will be automatically regenerated instead of falling back to the previous version (which is usually outdated). This in particular makes it easier to iterate when writing a new dataset loading script (a sketch of the idea follows below).
- fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept lists, NumPy arrays, and PyTorch/TensorFlow tensors.
- add the TREC-6 dataset
I just copied the files that are on Google Storage to follow the new `_relative_data_dir` format. It should be good to merge now :)
Well, actually, it seems there are some merge conflicts to fix first. |
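As a sketch of the caching idea in the bullet above (the helper name and directory layout are hypothetical, not the actual `nlp` internals): the cached-data path embeds a hash of the loading script, so editing the script changes the path and transparently invalidates stale caches.
```python
import hashlib
import os

def relative_data_dir(dataset_name: str, script_path: str, version: str) -> str:
    # Hash the loading script's bytes: a new script gives a new cache directory
    with open(script_path, "rb") as f:
        script_hash = hashlib.sha256(f.read()).hexdigest()[:16]
    return os.path.join(dataset_name, version, script_hash)

# e.g. "trec/1.0.0/3b1fa9c0d2e4f516" changes whenever the trec.py script is edited
```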
https://github.com/huggingface/datasets/pull/385 | Remove unnecessary nested dict | We can probably scan the dataset scripts with a regex to try to identify this pattern. cc @patrickvonplaten maybe | This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | 19 | text: Remove unnecessary nested dict
This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378
We can probably scan the dataset scripts with a regex to try to identify this pattern. cc @patrickvonplaten maybe |
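A hedged sketch of what that regex scan could look like (the pattern here is a rough heuristic invented for illustration; the feature-based scanner in the next entry proved more reliable):
```python
import re

# Flag scripts where a Sequence(...) wraps a dict literal, the shape that
# often hides an unnecessary nesting
NESTED_DICT_RE = re.compile(r"Sequence\s*\(\s*\{")

def script_may_have_nested_dict(path: str) -> bool:
    with open(path, encoding="utf-8") as f:
        return bool(NESTED_DICT_RE.search(f.read()))
```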
https://github.com/huggingface/datasets/pull/385 | Remove unnecessary nested dict | @mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.
```python
#!/usr/bin/env python3
from nlp import prepare_module, DownloadConfig, import_main_class, hf_api
import tempfile
def scan_for_nested_unnecessary_dict(dataset_name):

    def load_builder_class(dataset_name):
        module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))
        return import_main_class(module_path)

    def load_configs(dataset_name):
        builder_cls = load_builder_class(dataset_name)
        if len(builder_cls.BUILDER_CONFIGS) == 0:
            return [None]
        return builder_cls.BUILDER_CONFIGS

    def scan_features_for_nested_dict(features):
        is_sequence = False
        if hasattr(features, "_type"):
            if features._type != 'Sequence':
                return False
            else:
                is_sequence = True
                features = features.feature

        if isinstance(features, list):
            for value in features:
                if scan_features_for_nested_dict(value):
                    return True
            return False
        elif isinstance(features, dict):
            for key, value in features.items():
                if is_sequence and len(features.keys()) == 1 and hasattr(features[key], "_type") and features[key]._type != "Sequence":
                    return True
                if scan_features_for_nested_dict(value):
                    return True
            return False
        elif hasattr(features, "_type"):
            return False
        else:
            raise ValueError(f"{features} should be either a list, a dict or a feature")

    configs = load_configs(dataset_name)

    for config in configs:
        with tempfile.TemporaryDirectory() as processed_temp_dir:
            # create config and dataset
            dataset_builder_cls = load_builder_class(dataset_name)
            name = config.name if config is not None else None
            dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)

            is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)
            if is_nested_dict_in_dataset:
                print(f"{dataset_name} with {name} needs refactoring")


if __name__ == "__main__":
    scan_for_nested_unnecessary_dict("race")  # prints True
    scan_for_nested_unnecessary_dict("mlqa")  # prints True
    scan_for_nested_unnecessary_dict("squad")  # prints Nothing

    # ran the following lines for 1min and seems to work -> didn't check for all datasets though
    # api = hf_api.HfApi()
    # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]
    # for dataset in all_datasets:
    #     scan_for_nested_unnecessary_dict(dataset)
``` | This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | 239 | text: Remove unnecessary nested dict
This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378
@mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.
```python
#!/usr/bin/env python3
from nlp import prepare_module, DownloadConfig, import_main_class, hf_api
import tempfile
def scan_for_nested_unnecessary_dict(dataset_name):

    def load_builder_class(dataset_name):
        module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))
        return import_main_class(module_path)

    def load_configs(dataset_name):
        builder_cls = load_builder_class(dataset_name)
        if len(builder_cls.BUILDER_CONFIGS) == 0:
            return [None]
        return builder_cls.BUILDER_CONFIGS

    def scan_features_for_nested_dict(features):
        is_sequence = False
        if hasattr(features, "_type"):
            if features._type != 'Sequence':
                return False
            else:
                is_sequence = True
                features = features.feature

        if isinstance(features, list):
            for value in features:
                if scan_features_for_nested_dict(value):
                    return True
            return False
        elif isinstance(features, dict):
            for key, value in features.items():
                if is_sequence and len(features.keys()) == 1 and hasattr(features[key], "_type") and features[key]._type != "Sequence":
                    return True
                if scan_features_for_nested_dict(value):
                    return True
            return False
        elif hasattr(features, "_type"):
            return False
        else:
            raise ValueError(f"{features} should be either a list, a dict or a feature")

    configs = load_configs(dataset_name)

    for config in configs:
        with tempfile.TemporaryDirectory() as processed_temp_dir:
            # create config and dataset
            dataset_builder_cls = load_builder_class(dataset_name)
            name = config.name if config is not None else None
            dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)

            is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)
            if is_nested_dict_in_dataset:
                print(f"{dataset_name} with {name} needs refactoring")


if __name__ == "__main__":
    scan_for_nested_unnecessary_dict("race")  # prints True
    scan_for_nested_unnecessary_dict("mlqa")  # prints True
    scan_for_nested_unnecessary_dict("squad")  # prints Nothing

    # ran the following lines for 1min and seems to work -> didn't check for all datasets though
    # api = hf_api.HfApi()
    # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]
    # for dataset in all_datasets:
    #     scan_for_nested_unnecessary_dict(dataset)
``` |
https://github.com/huggingface/datasets/pull/385 | Remove unnecessary nested dict | > @mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.
>
> ```python
> #!/usr/bin/env python3
>
> from nlp import prepare_module, DownloadConfig, import_main_class, hf_api
> import tempfile
>
>
> def scan_for_nested_unnecessary_dict(dataset_name):
>
>     def load_builder_class(dataset_name):
>         module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))
>         return import_main_class(module_path)
>
>     def load_configs(dataset_name):
>         builder_cls = load_builder_class(dataset_name)
>         if len(builder_cls.BUILDER_CONFIGS) == 0:
>             return [None]
>         return builder_cls.BUILDER_CONFIGS
>
>     def scan_features_for_nested_dict(features):
>         is_sequence = False
>         if hasattr(features, "_type"):
>             if features._type != 'Sequence':
>                 return False
>             else:
>                 is_sequence = True
>                 features = features.feature
>
>         if isinstance(features, list):
>             for value in features:
>                 if scan_features_for_nested_dict(value):
>                     return True
>             return False
>
>         elif isinstance(features, dict):
>             for key, value in features.items():
>                 if is_sequence and len(features.keys()) == 1 and hasattr(features[key], "_type") and features[key]._type != "Sequence":
>                     return True
>                 if scan_features_for_nested_dict(value):
>                     return True
>             return False
>         else:
>             raise ValueError(f"{features} should be either a list of a dict")
>
>     configs = load_configs(dataset_name)
>
>     for config in configs:
>         with tempfile.TemporaryDirectory() as processed_temp_dir:
>             # create config and dataset
>             dataset_builder_cls = load_builder_class(dataset_name)
>             name = config.name if config is not None else None
>             dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)
>
>             is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)
>             if is_nested_dict_in_dataset:
>                 print(f"{dataset_name} with {name} needs refactoring")
>
>
> if __name__ == "__main__":
>     scan_for_nested_unnecessary_dict("race")  # prints True
>     scan_for_nested_unnecessary_dict("mlqa")  # prints True
>     scan_for_nested_unnecessary_dict("squad")  # prints Nothing
>
>     # ran the following lines for 1min and seems to work -> didn't check for all datasets though
>     # api = hf_api.HfApi()
>     # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]
>     # for dataset in all_datasets:
>     #     scan_for_nested_unnecessary_dict(dataset)
> ```
Great, I will try it! | This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | 308 | text: Remove unnecessary nested dict
This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378
> @mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.
>
> ```python
> #!/usr/bin/env python3
>
> from nlp import prepare_module, DownloadConfig, import_main_class, hf_api
> import tempfile
>
>
> def scan_for_nested_unnecessary_dict(dataset_name):
>
>     def load_builder_class(dataset_name):
>         module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))
>         return import_main_class(module_path)
>
>     def load_configs(dataset_name):
>         builder_cls = load_builder_class(dataset_name)
>         if len(builder_cls.BUILDER_CONFIGS) == 0:
>             return [None]
>         return builder_cls.BUILDER_CONFIGS
>
>     def scan_features_for_nested_dict(features):
>         is_sequence = False
>         if hasattr(features, "_type"):
>             if features._type != 'Sequence':
>                 return False
>             else:
>                 is_sequence = True
>                 features = features.feature
>
>         if isinstance(features, list):
>             for value in features:
>                 if scan_features_for_nested_dict(value):
>                     return True
>             return False
>
>         elif isinstance(features, dict):
>             for key, value in features.items():
>                 if is_sequence and len(features.keys()) == 1 and hasattr(features[key], "_type") and features[key]._type != "Sequence":
>                     return True
>                 if scan_features_for_nested_dict(value):
>                     return True
>             return False
>         else:
>             raise ValueError(f"{features} should be either a list of a dict")
>
>     configs = load_configs(dataset_name)
>
>     for config in configs:
>         with tempfile.TemporaryDirectory() as processed_temp_dir:
>             # create config and dataset
>             dataset_builder_cls = load_builder_class(dataset_name)
>             name = config.name if config is not None else None
>             dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)
>
>             is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)
>             if is_nested_dict_in_dataset:
>                 print(f"{dataset_name} with {name} needs refactoring")
>
>
> if __name__ == "__main__":
>     scan_for_nested_unnecessary_dict("race")  # prints True
>     scan_for_nested_unnecessary_dict("mlqa")  # prints True
>     scan_for_nested_unnecessary_dict("squad")  # prints Nothing
>
>     # ran the following lines for 1min and seems to work -> didn't check for all datasets though
>     # api = hf_api.HfApi()
>     # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]
>     # for dataset in all_datasets:
>     #     scan_for_nested_unnecessary_dict(dataset)
> ```
Great, I will try it! |
https://github.com/huggingface/datasets/pull/385 | Remove unnecessary nested dict | Sorry for that; apparently there are other datasets that could have unnecessary nested dicts.
We can have another PR to scan and fix the other datasets.
| This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | 26 | text: Remove unnecessary nested dict
This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378
Sorry for that; apparently there are other datasets that could have unnecessary nested dicts.
We can have another PR to scan and fix the other datasets.
|
https://github.com/huggingface/datasets/pull/383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different from `LinceConfig`, and it crashes when accessing `self.config.data_files` because `self.config` is None. I would appreciate it if someone could help me find out where I could have messed things up :)
Also, the real and dummy data tests passed before committing and pushing my changes.
Thanks a lot in advance!
```
=================================== FAILURES ===================================
____________________ AWSDatasetTest.test_load_dataset_text _____________________
self = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>
dataset_name = 'text'

    def test_load_dataset(self, dataset_name):
        configs = self.dataset_tester.load_all_configs(dataset_name)[:1]
>       self.dataset_tester.check_load_dataset(dataset_name, configs)

tests/test_dataset_common.py:243:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:137: in check_load_dataset
    try_from_hf_gcs=False,
../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7efa744ffb70>
dl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7efb304c52b0>

    def _split_generators(self, dl_manager):
        """ The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].
        If str or List[str], then the dataset returns only the 'train' split.
        If dict, then keys should be from the `nlp.Split` enum.
        """
        if isinstance(self.config.data_files, (str, list, tuple)):
            # Handle case with only one split
            files = self.config.data_files
            if isinstance(files, str):
                files = [files]
            return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"files": files})]
        else:
            # Handle case with several splits and a dict mapping
            splits = []
            for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:
>               if split_name in self.config.data_files:
E               TypeError: argument of type 'NoneType' is not iterable

../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError
=============================== warnings summary ===============================
...
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text
====== 1 failed, 963 passed, 532 skipped, 5 warnings in 166.33s (0:02:46) ======
Exited with code exit status 1
``` | Hi,
First of all, this library is really cool! Thanks for putting all of this together!
This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ):
> 1. Why do we need LinCE?
>LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details).
>Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark.
The data comes from social media and here's the summary table of tasks per language pair:
| Language Pairs | LID | POS | NER | SA |
|----------------------------------------|-----|-----|-----|----|
| Spanish-English | ✅ | ✅ | ✅ | ✅ |
| Hindi-English | ✅ | ✅ | ✅ | |
| Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | |
| Nepali-English | ✅ | | | |
The tasks are as follows:
* LID: token-level language identification
* POS: part-of-speech tagging
* NER: named entity recognition
* SA: sentiment analysis
With the exception of MSA-EA, the rest of the datasets contain token-level LID labels.
## Usage
For Spanish-English LID, we can load the data as follows:
```
import nlp
data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng')
for split in data:
    print(data[split])
```
Here's the output:
```
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289)
```
Here's the list of shortcut names for every dataset available in LinCE:
* `lid_spaeng`
* `lid_hineng`
* `lid_nepeng`
* `lid_msaea`
* `pos_spaeng`
* `pos_hineng`
* `ner_spaeng`
* `ner_hineng`
* `ner_msaea`
* `sa_spaeng`
All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script.
## Features
Here is how the features look in the case of language identification (LID) tasks:
| LID Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
For part-of-speech (POS) tagging:
| POS Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `pos` | `list<str>` | List of POS tags (string) of a sentence |
For named entity recognition (NER):
| NER Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `ner` | `list<str>` | List of NER labels (string) of a sentence |
**NOTE**: the MSA-EA NER dataset does not contain the `lid` feature.
For sentiment analysis (SA):
| SA Feature | Type | Description |
|---------------------|-------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `sa` | `str` | Sentiment label (string) of a sentence |
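To make these schemas concrete, a single SA example would look roughly like the record below; the values and label strings are invented for illustration and are not taken from the corpus:
```python
# Hypothetical record: all field values are made up.
example = {
    "idx": 0,
    "tokens": ["Esta", "movie", "was", "increíble"],
    "lid": ["lang1", "lang2", "lang2", "lang1"],
    "sa": "positive",
}
```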
| 352 | text: Adding the Linguistic Code-switching Evaluation (LinCE) benchmark |
https://github.com/huggingface/datasets/pull/383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | @lhoestq Hi Quentin, I was wondering if you could give some feedback on this error from the `run_dataset_script_tests` script. It seems to be coming from a different config builder than the one I added, so I am not sure why this error occurs. Thanks in advance! |
| 46 | text: Adding the Linguistic Code-switching Evaluation (LinCE) benchmark |
https://github.com/huggingface/datasets/pull/383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | Awesome! Thank you for all your comments! 👌 I will update the PR in a bit with all the required changes 🙂
Let me just provide a bit of context for my changes:
I was referring to the GLUE, XTREME and WNUT_17 dataset scripts to build mine (not sure if the new documentation was available last week). This is where I took the naming convention for the citation and description variables from. Also, these scripts didn't have the `BUILDER_CONFIG_CLASS = LinceConfig` line, so I commented it out thinking I didn't need it; I did try this line in my attempts to make the real and dummy data tests pass, but it was not helping.
The problem I was facing was that the tests were passing a default `BuilderConfig` (i.e., the `self.config.name` property was set to `'default'` and my custom properties were not available). This means, for example, that within the `def _info(...)` method, I was not able to access the specific fields of my `LinceConfig` class (which is why I now have a global variable `_LINCE_CITATIONS` to detach the individual citations from the corresponding LinceConfig objects, and why I am constructing the feature infos manually). This default `BuilderConfig` is why I added the `if not isinstance(self.config, LinceConfig): return []` statement. Otherwise, accessing custom properties like `self.config.colnames` was failing the test because such properties did not exist in the default config (i.e., it was not a `LinceConfig`).
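In miniature, the failure mode described above is just an attribute lookup on the wrong class (a sketch assuming the stock config class):
```python
import nlp

# A plain BuilderConfig has none of the custom LinceConfig fields:
config = nlp.BuilderConfig(name="default")
config.colnames  # AttributeError, hence the isinstance(self.config, LinceConfig) guard
```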
I will update the PR and see if these problems happen in the CI tests.
Thanks again for the follow-up! @lhoestq |
| 255 | text: Adding the Linguistic Code-switching Evaluation (LinCE) benchmark |
https://github.com/huggingface/datasets/pull/383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | Ok I see !
To give you more details: the line `BUILDER_CONFIG_CLASS = LinceConfig` tells the tests how to instantiate a config for this dataset. Therefore if you have this line you should have all the fields of your config available.
To fix the errors you get you'll have to, first, have the `BUILDER_CONFIG_CLASS = LinceConfig` line, and second, add default values for the parameters of your config (or the tests functions will be unable to instantiate it by calling `LinceConfig()`.
An example of dataset with a custom config with additional filed like this one is [biomrc](https://github.com/huggingface/nlp/blob/master/datasets/biomrc/biomrc.py).
Feel free to take a look at it if you want. |
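For instance, a config along these lines would satisfy both requirements (a sketch only: `colnames` and `data_url` are placeholder field names, not necessarily the ones used in this PR):
```python
import nlp

class LinceConfig(nlp.BuilderConfig):
    # Every extra parameter gets a default so the tests can call LinceConfig().
    def __init__(self, colnames=None, data_url="", citation="", **kwargs):
        super(LinceConfig, self).__init__(**kwargs)
        self.colnames = colnames
        self.data_url = data_url
        self.citation = citation

class Lince(nlp.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = LinceConfig  # lets the tests build a default config
    BUILDER_CONFIGS = [
        LinceConfig(name="lid_spaeng", colnames=("idx", "tokens", "lid")),
    ]
```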
| 108 | text: Adding the Linguistic Code-switching Evaluation (LinCE) benchmark |
https://github.com/huggingface/datasets/pull/383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | Thanks for the reference!
I just updated the PR with the suggested changes. It seems the CI failed on the same test you said we could ignore, so I guess it's okay :)
Please let me know if there is something else I may need to change. |
| 47 | text: Adding the Linguistic Code-switching Evaluation (LinCE) benchmark |
https://github.com/huggingface/datasets/pull/374 | Add dataset post processing for faiss indexes | I changed the `wiki_dpr` script to ignore the last 24 examples for now. Hopefully we'll have the full version soon.
The datasets_infos.json and the data on GCS are updated.
And I also added a check to make sure we don't have post processing resources in sub-directories. | # Post processing of datasets for faiss indexes
Now that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest-neighbor queries.
## Implementation proposition
- Faiss indexes have to be added to the `nlp.Dataset` object, and therefore this happens in a different scope than what the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder` do. Therefore I added a new method for post processing of the `nlp.Dataset` object, called `_post_process` (name could change); see the sketch after this list
- The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (which is focused on arrow file creation), so the post processing is run inside the `as_dataset` method.
- `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources`
- as we know what the post processing resources are, we can download them automatically from Google Storage instead of computing them if they're available (as we do for arrow files)
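To illustrate the mechanism, a builder hook could look roughly like the sketch below; the method names come from the description above, but the signatures, the resource name, and the index calls are assumptions rather than the PR's actual code:
```python
import os
import nlp

class WikiDpr(nlp.GeneratorBasedBuilder):
    def _post_processing_resources(self, split):
        # Map each resource name to the file it is serialized to (file name is hypothetical).
        return {"embeddings_index": "psgs_w100.nq.IVFPQ.faiss"}

    def _post_process(self, dataset, resources_paths):
        index_file = resources_paths["embeddings_index"]
        if os.path.exists(index_file):
            # Already computed (or fetched from GCS): just load it.
            dataset.load_faiss_index("embeddings", index_file)
        else:
            # First run: build the index from the embeddings column and cache it.
            dataset.add_faiss_index(column="embeddings")
            dataset.save_faiss_index("embeddings", index_file)
        return dataset
```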
I'd be happy to discuss these choices!
## The `wiki_dpr` index
It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory.
This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768.
I couldn't directly use the Faiss `index_factory` as I needed to set the metric to inner product.
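For reference, a hand-built index along those lines might look like the following; this is a sketch where the `nlist`/`M`/`nbits` values are illustrative guesses rather than the settings used for this index, and passing the metric to the constructor requires a reasonably recent Faiss version:
```python
import faiss

d = 768  # DPR embedding dimension
quantizer = faiss.IndexFlatIP(d)  # coarse quantizer using inner product
index = faiss.IndexIVFPQ(quantizer, d, 4096, 64, 8, faiss.METRIC_INNER_PRODUCT)
# index.train(embeddings); index.add(embeddings)  # float32 vectors of shape (n, d)
```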
## Example of usage
```python
import nlp
dset = nlp.load_dataset(
"wiki_dpr",
"psgs_w100_with_nq_embeddings",
split="train",
with_index=True
)
print(len(dset), dset.list_indexes()) # (21015300, ['embeddings'])
```
(it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too)
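Once loaded, queries go through the dataset object directly; in this sketch, `question_embedding` is assumed to be a float32 NumPy vector produced by the matching DPR question encoder:
```python
scores, passages = dset.get_nearest_examples("embeddings", question_embedding, k=5)
print(scores)            # inner-product scores of the retrieved passages
print(passages["text"])  # the 5 retrieved passage texts
```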
## Demo
You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers:
https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
| 46 | text: Add dataset post processing for faiss indexes |
https://github.com/huggingface/datasets/pull/374 | Add dataset post processing for faiss indexes | I added a dummy config that can be loaded with:
```python
wiki = load_dataset("wiki_dpr", "dummy_psgs_w100_no_embeddings", with_index=True, split="train")
```
It's only 6MB of arrow files and 30MB of index |
| 28 | text: Add dataset post processing for faiss indexes
# Post processing of datasets for faiss indexes
Now that we can have datasets with embeddings (see `wiki_pr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries.
## Implementation proposition
- Faiss indexes have to be added to the `nlp.Dataset` object, and therefore it's in a different scope that what are doing the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder`. Therefore I added a new method for post processing of the `nlp.Dataset` object called `_post_process` (name could change)
- The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (that is focused on arrow files creation) so the post processing is run inside the `as_dataset` method.
- `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources`
- as we know what are the post processing resources, we can download them automatically from google storage instead of computing them if they're available (as we do for arrow files)
I'd happy to discuss these choices !
## The `wiki_dpr` index
It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory.
This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768.
I couldn't use the Faiss `index_factory` directly, as I needed to set the metric to inner product.
## Example of usage
```python
import nlp
dset = nlp.load_dataset(
"wiki_dpr",
"psgs_w100_with_nq_embeddings",
split="train",
with_index=True
)
print(len(dset), dset.list_indexes()) # (21015300, ['embeddings'])
```
(it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too)
## Demo
You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers:
https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
I added a dummy config that can be loaded with:
```python
wiki = load_dataset("wiki_dpr", "dummy_psgs_w100_no_embeddings", with_index=True, split="train")
```
It's only 6MB of arrow files and 30MB of index |
https://github.com/huggingface/datasets/pull/366 | Add quora dataset | Kaggle needs authentication to download datasets. We don't have a way to handle that in the lib for now | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the Quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split, but I can't find an easy way to download it.
- I've made the questions into a list:
```python
{
"questions": [
{"id":0, "text": "Is this an example question?"},
{"id":1, "text": "Is this a sample question?"},
],
...
}
```
rather than:
```python
{
"question1": "Is this an example question?",
"question2": "Is this a sample question?",
"qid0": 0,
"qid1": 1,
...
}
```
Not sure if this was the right call.
- Can't find a good citation for this dataset | 19 | text: Add quora dataset
Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the Quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split, but I can't find an easy way to download it.
- I've made the questions into a list:
```python
{
"questions": [
{"id":0, "text": "Is this an example question?"},
{"id":1, "text": "Is this a sample question?"},
],
...
}
```
rather than:
```python
{
"question1": "Is this an example question?",
"question2": "Is this a sample question?",
"qid0": 0,
"qid1": 1,
...
}
```
Not sure if this was the right call.
- Can't find a good citation for this dataset
Kaggle needs authentication to download datasets. We don't have a way to handle that in the lib for now
https://github.com/huggingface/datasets/pull/364 | add MS MARCO dataset | The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. | This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
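For reference, once this is merged, loading one of the two versions should look something like this (`v2.1` matches the version discussed in the comments; the exact config names may still change):
```python
import nlp

ms_marco = nlp.load_dataset("ms_marco", "v2.1", split="train")
```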
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | 24 | text: add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq
The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. |
https://github.com/huggingface/datasets/pull/364 | add MS MARCO dataset | Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy. | This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | 24 | text: add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq
Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy. |
https://github.com/huggingface/datasets/pull/364 | add MS MARCO dataset | The fact that the dummy data for v2.1 is missing shouldn't make the test fail, I think. But as you mentioned, the dummy data structure of v1.1 is wrong. I tried to rename files but it does not solve the issue. | This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | 41 | text: add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq
The fact that the dummy data for v2.1 is missing shouldn't make the test fail, I think. But as you mentioned, the dummy data structure of v1.1 is wrong. I tried to rename files but it does not solve the issue.
https://github.com/huggingface/datasets/pull/364 | add MS MARCO dataset | > Is MS MARCO added to the nlp library? I am not able to view it?
Hi @parthplc, the PR is not merged yet. The dummy data structure is still failing. Maybe @patrickvonplaten can help with it. | This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | 35 | text: add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq
> Is MS MARCO added to the nlp library? I am not able to view it?
Hi @parthplc, the PR is not merged yet. The dummy data structure is still failing. Maybe @patrickvonplaten can help with it.
https://github.com/huggingface/datasets/pull/364 | add MS MARCO dataset | Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever! | This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | 16 | text: add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq
Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever! |
https://github.com/huggingface/datasets/pull/364 | add MS MARCO dataset | > Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!
thanks | This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | 18 | text: add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq
> Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!
thanks |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now; it should simplify the code a lot too, so I'll update it as such. Also, I was meaning to reply earlier, but I wanted to thank you for the testing script you sent me earlier, since it ended up being tremendously helpful. | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
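To give an idea of what such a generated extension type boils down to, here is a stripped-down, hypothetical sketch (using `pa.PyExtensionType`, the way python-defined extension types were declared in pyarrow at the time; the real classes in the branch are generated dynamically):
```python
import pyarrow as pa

class Array2DExtensionType(pa.PyExtensionType):
    """Hypothetical fixed-shape 2D array type backed by list<list<float32>> storage."""

    def __init__(self):
        pa.PyExtensionType.__init__(self, pa.list_(pa.list_(pa.float32())))

    def __reduce__(self):
        # required so the type survives pickling / IPC round-trips
        return Array2DExtensionType, ()
```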
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 73 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now; it should simplify the code a lot too, so I'll update it as such. Also, I was meaning to reply earlier, but I wanted to thank you for the testing script you sent me earlier, since it ended up being tremendously helpful.
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | Okay, I just converted the MultiArray class to Array2D, and got rid of all those "globals()"!
The main issues I had were that when including a "pa.ExtensionType" as a column, the ordinary methods to batch the data would not work and it would throw me some mysterious error, so I first cleaned up my code to order the row to match the schema (because when including extension types the row is disordered) and then made each row a pa.Table and then concatenated all the tables. Also, each n-dimensional vector class we implement will be size invariant, which is some good news. | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 102 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Okay, I just converted the MultiArray class to Array2D, and got rid of all those "globals()"!
The main issues I had were that when including a "pa.ExtensionType" as a column, the ordinary methods to batch the data would not work and it would throw me some mysterious error, so I first cleaned up my code to order the row to match the schema (because when including extension types the row is disordered) and then made each row a pa.Table and then concatenated all the tables. Also, each n-dimensional vector class we implement will be size invariant, which is some good news.
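A rough illustration of that row-wise workaround (not the exact code from the branch; it reuses the hypothetical `Array2DExtensionType` sketched earlier):
```python
import pyarrow as pa

ext_type = Array2DExtensionType()
tables = []
for matrix in [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]:
    # build the storage array first, wrap it as an extension array,
    # and turn each example into its own one-row table
    storage = pa.array([matrix], type=ext_type.storage_type)
    ext_arr = pa.ExtensionArray.from_storage(ext_type, storage)
    tables.append(pa.Table.from_arrays([ext_arr], names=["image"]))
table = pa.concat_tables(tables)  # concatenate all the one-row tables
```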
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | Okay awesome! I just added your suggestions and changed up my recursive functions.
Here is the traceback for when I use the original code in the write_on_file method:
```
Traceback (most recent call last):
File "<stdin>", line 33, in <module>
File "/home/eltoto/nlp/src/nlp/arrow_writer.py", line 214, in finalize
self.write_on_file()
File "/home/eltoto/nlp/src/nlp/arrow_writer.py", line 134, in write_on_file
pa_array = pa.array(self.current_rows, type=self._type)
File "pyarrow/array.pxi", line 269, in pyarrow.lib.array
File "pyarrow/array.pxi", line 38, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 106, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
shell returned 1
```
I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.
In the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the `pa.Table.to_batches(batch_size)` method, which will return a list of batches. Perhaps we can check that the batch size is not too large by converting the table to batches after X many rows are appended to it, following the batch_size check below. | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 203 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Okay awesome! I just added your suggestions and changed up my recursive functions.
Here is the traceback for when I use the original code in the write_on_file method:
```
Traceback (most recent call last):
File "<stdin>", line 33, in <module>
File "/home/eltoto/nlp/src/nlp/arrow_writer.py", line 214, in finalize
self.write_on_file()
File "/home/eltoto/nlp/src/nlp/arrow_writer.py", line 134, in write_on_file
pa_array = pa.array(self.current_rows, type=self._type)
File "pyarrow/array.pxi", line 269, in pyarrow.lib.array
File "pyarrow/array.pxi", line 38, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 106, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
shell returned 1
```
I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.
In the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the `pa.Table.to_batches(batch_size)` method, which will return a list of batches. Perhaps we can check that the batch size is not too large by converting the table to batches after X many rows are appended to it, following the batch_size check below.
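The failure can be reproduced in isolation: `pa.array` has no builder for python-defined extension types, so the storage array has to be built first and wrapped afterwards (a hedged sketch, again assuming the hypothetical `Array2DExtensionType` from above):
```python
import pyarrow as pa

ext_type = Array2DExtensionType()
# pa.array([...], type=ext_type) raises the ArrowNotImplementedError
# ("MakeBuilder") shown in the traceback; the working path is:
storage = pa.array([[[1.0, 2.0], [3.0, 4.0]]], type=ext_type.storage_type)
ext_arr = pa.ExtensionArray.from_storage(ext_type, storage)
```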
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | > I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.
Indeed that's weird.
> In the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(batch_size) method, which will return a list of batches. Perhaps we can check that the batch size is not too large by converting the table to batches after X many rows are appended to it, following the batch_size check below.
The argument of `pa.Table.to_batches` is not `batch_size` but `max_chunksize`, which means that right now it would have no effects (each chunk is of length 1).
We can fix that just by doing `entries.combine_chunks().to_batches(batch_size)`. In that case it would write by chunk of 1000 which is what we want. I don't think it will slow down the writing by much, but we may have to do a benchmark just to make sure. If speed is ok we could even replace the original code to always write chunks this way.
Do you still have errors that need to be fixed ? | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 219 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
> I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.
Indeed that's weird.
> In the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(batch_size) method, which will return a list of batches. Perhaps we can check that the batch size is not too large by converting the table to batches after X many rows are appended to it, following the batch_size check below.
The argument of `pa.Table.to_batches` is not `batch_size` but `max_chunksize`, which means that right now it would have no effects (each chunk is of length 1).
We can fix that just by doing `entries.combine_chunks().to_batches(batch_size)`. In that case it would write by chunk of 1000 which is what we want. I don't think it will slow down the writing by much, but we may have to do a benchmark just to make sure. If speed is ok we could even replace the original code to always write chunks this way.
Do you still have errors that need to be fixed ? |
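For what it's worth, the effect of `combine_chunks` can be checked on a toy table standing in for one built from many 1-row chunks:
```python
import pyarrow as pa

# a table made of 5000 one-row chunks, as produced by the per-row writing path
entries = pa.concat_tables(pa.table({"x": [i]}) for i in range(5000))
batches = entries.combine_chunks().to_batches(max_chunksize=1000)
print([b.num_rows for b in batches])  # -> [1000, 1000, 1000, 1000, 1000]
```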
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | @lhoestq Nope all should be good!
Would you like me to add the entries.combine_chunks().to_batches(batch_size) code + benchmark? | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 17 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
@lhoestq Nope all should be good!
Would you like me to add the entries.combine_chunks().to_batches(batch_size) code + benchmark?
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | > @lhoestq Nope all should be good!
Awesome :)
I think it would be good to start to add some tests then.
You already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?
> Would you like me to add the entries.combine_chunks().to_batches(batch_size) code + benchmark?
That would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:
- write speed + read speed a dataset with `nlp.Array2D` features
- write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value("float32")))` features
- write speed + read speed a dataset with `nlp.Sequence(nlp.Value("float32"))` features (same data but flattened)
It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.
What do you think ? | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 146 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
> @lhoestq Nope all should be good!
Awesome :)
I think it would be good to start to add some tests then.
You already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?
> Would you like me to add the entries.combine_chunks().to_batches(batch_size) code + benchmark?
That would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:
- write speed + read speed a dataset with `nlp.Array2D` features
- write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value("float32")))` features
- write speed + read speed a dataset with `nlp.Sequence(nlp.Value("float32"))` features (same data but flattened)
It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.
What do you think ? |
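A minimal shape for that benchmark could be the following (the helper name is entirely hypothetical; only the timing scaffold is meant seriously):
```python
import timeit

for features_name in ["array2d", "nested_sequence", "flat_sequence"]:
    # write_and_read_dataset is a hypothetical helper that writes a dataset
    # with the given feature type to arrow files and reads it back
    seconds = timeit.timeit(lambda: write_and_read_dataset(features_name), number=3) / 3
    print(f"{features_name}: {seconds:.3f}s per run")
```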
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | I just tested your code to try to understand better.
- First thing you must know is that we've switched from `dataset._data.to_pandas` to `dataset._data.to_pydict` by default when we call `dataset[0]` in #423. Right now it raises an error, but it can be fixed by adding this method to `ExtensionArray2D`:
```python
def to_pylist(self):
return self.to_numpy().tolist()
```
- Second, I noticed that `ExtensionArray2D.to_numpy()` always returns a (5, 5) shape in your example. I thought `ExtensionArray` was for possibly multiple examples and so I was expecting a shape like (1, 5, 5) for example. Did I miss something ?
Therefore when I apply the fix I mentioned (adding to_pylist), it returns one example per row in each image (in your example of 2 images of shape 5x5, I get `len(dataset._data.to_pydict()["image"]) == 10 # True`)
[EDIT] I changed the reshape step in `ExtensionArray2D.to_numpy()` to
```python
numpy_arr = numpy_arr.reshape(len(self), *ExtensionArray2D._construct_shape(self.storage))
```
and it did the job: `len(dataset._data.to_pydict()["image"]) == 2 # True`
- Finally, I was able to make `to_pandas` work though, by implementing a custom array dtype in pandas with arrow conversion (I got inspiration from [here](https://gist.github.com/Eastsun/a59fb0438f65e8643cd61d8c98ec4c08) and [here](https://pandas.pydata.org/pandas-docs/version/1.0.0/development/extending.html#compatibility-with-apache-arrow))
Maybe you could add me to your repo so I can open a PR to add these changes to your branch ? | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 206 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
I just tested your code to try to understand better.
- First thing you must know is that we've switched from `dataset._data.to_pandas` to `dataset._data.to_pydict` by default when we call `dataset[0]` in #423. Right now it raises an error, but it can be fixed by adding this method to `ExtensionArray2D`:
```python
def to_pylist(self):
return self.to_numpy().tolist()
```
- Second, I noticed that `ExtensionArray2D.to_numpy()` always returns a (5, 5) shape in your example. I thought `ExtensionArray` was for possibly multiple examples and so I was expecting a shape like (1, 5, 5) for example. Did I miss something ?
Therefore when I apply the fix I mentioned (adding to_pylist), it returns one example per row in each image (in your example of 2 images of shape 5x5, I get `len(dataset._data.to_pydict()["image"]) == 10 # True`)
[EDIT] I changed the reshape step in `ExtensionArray2D.to_numpy()` to
```python
numpy_arr = numpy_arr.reshape(len(self), *ExtensionArray2D._construct_shape(self.storage))
```
and it did the job: `len(dataset._data.to_pydict()["image"]) == 2 # True`
- Finally, I was able to make `to_pandas` work though, by implementing a custom array dtype in pandas with arrow conversion (I got inspiration from [here](https://gist.github.com/Eastsun/a59fb0438f65e8643cd61d8c98ec4c08) and [here](https://pandas.pydata.org/pandas-docs/version/1.0.0/development/extending.html#compatibility-with-apache-arrow))
Maybe you could add me to your repo so I can open a PR to add these changes to your branch ?
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | > > @lhoestq Nope all should be good!
>
> Awesome :)
>
> I think it would be good to start to add some tests then.
> You already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?
>
> > Would you like me to add the entries.combine_chunks().to_batches(batch_size) code + benchmark?
>
> That would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:
>
> * write speed + read speed a dataset with `nlp.Array2D` features
> * write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value("float32")))` features
> * write speed + read speed a dataset with `nlp.Sequence(nlp.Value("float32"))` features (same data but flattened)
> It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.
>
> What do you think ?
Ya! That should be no problem at all, I'll use the timeit module and get back to you with the results sometime over the weekend. | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 188 | text: Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
> > @lhoestq Nope all should be good!
>
> Awesome :)
>
> I think it would be good to start to add some tests then.
> You already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?
>
> > Would you like me to add the entries.combine_chunks().to_batches(batch_size) code + benchmark?
>
> That would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:
>
> * write speed + read speed a dataset with `nlp.Array2D` features
> * write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value("float32")))` features
> * write speed + read speed a dataset with `nlp.Sequence(nlp.Value("float32"))` features (same data but flattened)
> It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.
>
> What do you think ?
Ya! That should be no problem at all, I'll use the timeit module and get back to you with the results sometime over the weekend.
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets | Thank you for all your help getting the pandas and row indexing for the dataset to work! For `print(dataset[0])`, I considered the workaround of doing `print(dataset["col_name"][0])` as a temporary solution, but ya, I was never able to figure out how to get it to work previously. I'll add you to my repo right now, let me know if you do not see the invite. Also lastly, it is strange how the to_batches method is not working, so I can check that out while I add some speed tests + add the multi dim test under the unit tests this weekend. | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so ive labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 99 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Thank you for all your help getting the pandas and row indexing for the dataset to work! For `print(dataset[0])`, I considered the workaround of doing `print(dataset["col_name"][0])` to be a temporary solution, but ya, I was never able to figure out how to get it to work previously. I'll add you to my repo right now; let me know if you do not see the invite. Also, lastly, it is strange that the to_batches method is not working, so I can check that out while I add some speed tests + add the multi dim test under the unit tests this weekend. |
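For readers unfamiliar with pyarrow extension types, here is a minimal sketch of how a fixed-shape 2D type could be declared. The class name, extension name, and serialization scheme are hypothetical illustrations, not this PR's actual code:
```python
import json

import pyarrow as pa


class Array2DType(pa.ExtensionType):
    """Hypothetical fixed-shape 2D tensor type (illustrative names only)."""

    def __init__(self, shape, dtype="float32"):
        self.shape = tuple(shape)
        self.dtype = dtype
        storage = pa.list_(pa.list_(pa.type_for_alias(dtype)))
        pa.ExtensionType.__init__(self, storage, "nlp.array2d")

    def __arrow_ext_serialize__(self):
        # Metadata needed to reconstruct the type after an IPC round-trip.
        return json.dumps({"shape": self.shape, "dtype": self.dtype}).encode()

    @classmethod
    def __arrow_ext_deserialize__(cls, storage_type, serialized):
        meta = json.loads(serialized.decode())
        return cls(meta["shape"], meta["dtype"])


# Registering makes the type survive serialization to files and batches.
pa.register_extension_type(Array2DType((2, 2)))
```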
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | Sorry for the bit of delay! I just added the tests, the PR into my fork, and some speed tests. It should be fairly easy to add more tests if we need. Do you think there is anything else to check out? | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 41 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Sorry for the bit of delay! I just added the tests, the PR into my fork, and some speed tests. It should be fairly easy to add more tests if we need. Do you think there is anything else to check out? |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | Cool thanks for adding the tests :)
Next step is to merge master into this branch.
Not sure I understand what you did in your last commit, but it looks like you discarded all the changes from master ^^'
We've done some changes in the features logic on master, so let me know if you need help merging it.
As soon as we've merged from master, we'll have to make sure that we have extensive tests and we'll be good to go!
About the lxmert dataset, we can probably keep it for another PR as soon as we have working 2d features. What do you think? | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 107 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Cool thanks for adding the tests :)
Next step is to merge master into this branch.
Not sure I understand what you did in your last commit, but it looks like you discarded all the changes from master ^^'
We've done some changes in the features logic on master, so let me know if you need help merging it.
As soon as we've merged from master, we'll have to make sure that we have extensive tests and we'll be good to go!
About the lxmert dataset, we can probably keep it for another PR as soon as we have working 2d features. What do you think? |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | We might want to merge this after tomorrow's release though to avoid potential side effects @lhoestq | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 16 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
We might want to merge this after tomorrow's release though to avoid potential side effects @lhoestq |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | Yep I'm sure we can have it not for tomorrow's release but for the next one ;) | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 17 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Yep I'm sure we can have it not for tomorrow's release but for the next one ;) |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | haha, when I tried to rebase, I ran into some conflicts. In that last commit, I restored the features.py from the previous commit on the branch in my fork because upon updating to master, the pandas dtype manager and pandas extension types disappeared. If you actually could help me with merging in what is needed, that would help a lot.
Other than that, ya let me go ahead and move the dataloader code out of this PR. Perhaps we could discuss in the slack channel soon about what to do with that, because we can either just support the pretraining corpus for lxmert or try to implement the full COCO and visual genome datasets (+VQA +GQA), which I'm sure people would be pretty happy about.
Also, we can talk more about tests soon when you are free.
Good luck on the release tomorrow guys! | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 142 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
haha, when I tried to rebase, I ran into some conflicts. In that last commit, I restored the features.py from the previous commit on the branch in my fork because upon updating to master, the pandas dtype manager and pandas extension types disappeared. If you actually could help me with merging in what is needed, that would help a lot.
Other than that, ya let me go ahead and move the dataloader code out of this PR. Perhaps we could discuss in the slack channel soon about what to do with that, because we can either just support the pretraining corpus for lxmert or try to implement the full COCO and visual genome datasets (+VQA +GQA), which I'm sure people would be pretty happy about.
Also, we can talk more about tests soon when you are free.
Good luck on the release tomorrow guys! |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | Not sure why github thinks there are conflicts here, as I just rebased from the current master branch.
Merging into master locally works on my side without conflicts
```
git checkout master
git reset --hard origin/master
git merge --no-ff eltoto1219/support_multi_dim_tensors_for_images
Merge made by the 'recursive' strategy.
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py | 89 +++++++++++++++++++++++++++++++++++++
datasets/lxmert_pretraining_beta/test_multi_array.py | 45 +++++++++++++++++++
datasets/lxmert_pretraining_beta/to_arrow_data.py | 371 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
src/nlp/arrow_dataset.py | 24 +++++-----
src/nlp/arrow_writer.py | 22 ++++++++--
src/nlp/features.py | 229 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
tests/test_array_2d.py | 210 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
7 files changed, 969 insertions(+), 21 deletions(-)
create mode 100644 datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py
create mode 100644 datasets/lxmert_pretraining_beta/test_multi_array.py
create mode 100644 datasets/lxmert_pretraining_beta/to_arrow_data.py
create mode 100644 tests/test_array_2d.py
``` | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 98 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Not sure why github thinks there are conflicts here, as I just rebased from the current master branch.
Merging into master locally works on my side without conflicts
```
git checkout master
git reset --hard origin/master
git merge --no-ff eltoto1219/support_multi_dim_tensors_for_images
Merge made by the 'recursive' strategy.
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py | 89 +++++++++++++++++++++++++++++++++++++
datasets/lxmert_pretraining_beta/test_multi_array.py | 45 +++++++++++++++++++
datasets/lxmert_pretraining_beta/to_arrow_data.py | 371 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
src/nlp/arrow_dataset.py | 24 +++++-----
src/nlp/arrow_writer.py | 22 ++++++++--
src/nlp/features.py | 229 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
tests/test_array_2d.py | 210 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
7 files changed, 969 insertions(+), 21 deletions(-)
create mode 100644 datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py
create mode 100644 datasets/lxmert_pretraining_beta/test_multi_array.py
create mode 100644 datasets/lxmert_pretraining_beta/to_arrow_data.py
create mode 100644 tests/test_array_2d.py
``` |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | I put everything inside one commit from the master branch, but the merge conflicts on github's side were still there for some reason.
Closing and re-opening the PR fixed the conflict check on github's side. | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 34 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
I put everything inside one commit from the master branch, but the merge conflicts on github's side were still there for some reason.
Closing and re-opening the PR fixed the conflict check on github's side. |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | Almost done! It still needs a pass on the docs/comments and maybe a few more tests.
I had to make several changes to type inference in the ArrowWriter to make it support custom types. | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 35 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Almost done! It still needs a pass on the docs/comments and maybe a few more tests.
I had to make several changes to type inference in the ArrowWriter to make it support custom types. |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | Ok this is now ready for review! Thanks for your awesome work on this @eltoto1219
Summary of the changes:
- added a new feature type `Array2D`, which can be instantiated like `Array2D("float32")` for example
- added the pyarrow extension type `Array2DExtensionType` and array `Array2DExtensionArray`, which take care of converting from and to arrow. `Array2DExtensionType`'s storage is a list of lists of any pyarrow array.
- added the pandas extension type `PandasArrayExtensionType` and array `PandasArrayExtensionArray` for conversion from and to arrow/python objects
- refactored the `ArrowWriter` write and write_batch functions to support extension types while preserving the type inference behavior.
- added a utility object `TypedSequence` that is helpful for combining extension arrays and type inference inside the writer's methods.
- added speed test for sequences writing (printed as warnings in pytest)
- breaking: set disable_nullable to False by default as pyarrow's type inference returns nullable fields
And there are plenty of new tests, mainly in `test_array2d.py` and `test_arrow_writer.py`.
Note that there are some collisions in `arrow_dataset.py` with #513, so let's be careful when we merge this one.
I know this is a big PR so feel free to ask questions | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 189 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
Ok this is now ready for review! Thanks for your awesome work on this @eltoto1219
Summary of the changes:
- added a new feature type `Array2D`, which can be instantiated like `Array2D("float32")` for example
- added the pyarrow extension type `Array2DExtensionType` and array `Array2DExtensionArray`, which take care of converting from and to arrow. `Array2DExtensionType`'s storage is a list of lists of any pyarrow array.
- added the pandas extension type `PandasArrayExtensionType` and array `PandasArrayExtensionArray` for conversion from and to arrow/python objects
- refactored the `ArrowWriter` write and write_batch functions to support extension types while preserving the type inference behavior.
- added a utility object `TypedSequence` that is helpful for combining extension arrays and type inference inside the writer's methods.
- added speed test for sequences writing (printed as warnings in pytest)
- breaking: set disable_nullable to False by default as pyarrow's type inference returns nullable fields
And there are plenty of new tests, mainly in `test_array2d.py` and `test_arrow_writer.py`.
Note that there are some collisions in `arrow_dataset.py` with #513, so let's be careful when we merge this one.
I know this is a big PR so feel free to ask questions |
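To make the summary concrete, here is a hedged usage sketch of the new feature type, following the `Array2D("float32")` signature quoted above. The column names and data are made up, and `Dataset.from_dict` accepting a `features=` argument is an assumption:
```python
import nlp

features = nlp.Features({
    "image_feats": nlp.Array2D("float32"),  # new extension-backed 2D feature
    "caption": nlp.Value("string"),
})

# A tiny dataset with one 2x2 float matrix per example.
dataset = nlp.Dataset.from_dict(
    {"image_feats": [[[0.1, 0.2], [0.3, 0.4]]], "caption": ["a 2x2 example"]},
    features=features,
)
print(dataset.features)
```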
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | I'll add Array3D, 4D.. tomorrow but it should take only a few lines. The rest won't change | nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 17 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
I'll add Array3D, 4D.. tomorrow but it should take only a few lines. The rest won't change |
https://github.com/huggingface/datasets/pull/363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | I took your comments into account and I added Array[3-5]D.
I changed the storage type to fixed-length lists. I had to update the `to_numpy` function because of that. Indeed, slicing a FixedSizeListArray returns a view of the original array, while in the previous case slicing a ListArray copies the storage.
| nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | 52 | text: Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py:
The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema is only referred to as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later)
I took your comments into account and I added Array[3-5]D.
I changed the storage type to fixed-length lists. I had to update the `to_numpy` function because of that. Indeed, slicing a FixedSizeListArray returns a view of the original array, while in the previous case slicing a ListArray copies the storage.
|
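Here is a small sketch of the slicing behavior being described, using pyarrow's `FixedSizeListArray` directly (assuming a reasonably recent pyarrow; the values are made up):
```python
import pyarrow as pa

values = pa.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], type=pa.float32())
arr = pa.FixedSizeListArray.from_arrays(values, 2)  # 3 rows of length-2 lists

# Slicing a FixedSizeListArray just moves an offset over the same flat
# values buffer (a view); per the comment above, slicing a plain ListArray
# ends up copying the storage, which is why to_numpy had to be updated.
view = arr.slice(1, 2)
print(view.to_pylist())  # [[3.0, 4.0], [5.0, 6.0]]
```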
https://github.com/huggingface/datasets/pull/358 | Starting to add some real doc | Ok this is starting to be really big so it's probably good to merge this first version of the doc and continue in another PR :)
This first version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html | Adding a lot of documentation for:
- load a dataset
- explore the dataset object
- process data with the dataset
- add a new dataset script
- share a dataset script
- full package reference
This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
Also:
- fix a bug in `train_test_split`
- update the `csv` script
- add a verbose argument to the dataset processing methods
Still missing:
- doc for the metrics
- how to directly upload a community provided dataset with the CLI
- clean up more docstrings
- add the `features` argument to `load_dataset` (should be another PR) | 37 | text: Starting to add some real doc
Adding a lot of documentation for:
- load a dataset
- explore the dataset object
- process data with the dataset
- add a new dataset script
- share a dataset script
- full package reference
This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
Also:
- fix a bug in `train_test_split`
- update the `csv` script
- add a verbose argument to the dataset processing methods
Still missing:
- doc for the metrics
- how to directly upload a community provided dataset with the CLI
- clean up more docstrings
- add the `features` argument to `load_dataset` (should be another PR)
Ok this is starting to be really big so it's probably good to merge this first version of the doc and continue in another PR :)
This first version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html |
https://github.com/huggingface/datasets/pull/357 | Add hashes to cnn_dailymail | Looks good to me :)
Could you also update the json file that goes with the dataset script by doing
```
nlp-cli test ./datasets/cnn_dailymail --save_infos --all_configs
```
It will update the features metadata and the size of the dataset with your changes. | The URL hashes are helpful for comparing results from other sources. | 42 | text: Add hashes to cnn_dailymail
The URL hashes are helpful for comparing results from other sources.
Looks good to me :)
Could you also update the json file that goes with the dataset script by doing
```
nlp-cli test ./datasets/cnn_dailymail --save_infos --all_configs
```
It will update the features metadata and the size of the dataset with your changes. |
https://github.com/huggingface/datasets/pull/354 | More faiss control | > Ok, so we're getting rid of the `FaissGpuOptions`?
We support `device=...` because it's simple, but faiss GPU options can be used in so many ways (you can set different GPU options for the different parts of your index, for example) that it's probably better to let the user create and configure their index and then use `custom_index=...` | Allow users to specify a faiss index they created themselves, as sometimes indexes can be composite, for example | 58 | text: More faiss control
Allow users to specify a faiss index they created themselves, as sometimes indexes can be composite, for example
> Ok, so we're getting rid of the `FaissGpuOptions`?
We support `device=...` because it's simple, but faiss GPU options can be used in so many ways (you can set different GPU options for the different parts of your index, for example) that it's probably better to let the user create and configure their index and then use `custom_index=...`
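A minimal sketch of that workflow, assuming the index is attached with an `add_faiss_index(..., custom_index=...)` call (the method and column names here are illustrative, not taken from this PR):
```python
import faiss
import numpy as np

# Build and configure a composite index yourself, e.g. an IVF index
# over 768-dimensional embeddings with an inner-product coarse quantizer
dim = 768
quantizer = faiss.IndexFlatIP(dim)
index = faiss.IndexIVFFlat(quantizer, dim, 100, faiss.METRIC_INNER_PRODUCT)
index.train(np.random.rand(1000, dim).astype(np.float32))  # IVF indexes must be trained

# Then hand the configured index to the dataset instead of letting the
# library build a default one (hypothetical call, for illustration):
# dataset.add_faiss_index(column="embeddings", custom_index=index)
```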
https://github.com/huggingface/datasets/pull/352 | 🐛[BugFix]fix seqeval | I think this is good but can you detail a bit the behavior before and after your fix? | Fix seqeval process labels such as 'B', 'B-ARGM-LOC' | 18 | text: 🐛[BugFix]fix seqeval
Fix seqeval process labels such as 'B', 'B-ARGM-LOC'
I think this is good but can you detail a bit the behavior before and after your fix? |
https://github.com/huggingface/datasets/pull/352 | 🐛[BugFix]fix seqeval | examples:
input: `['B', 'I', 'I', 'O', 'B', 'I']`
before: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`
after: `[('_', 0, 2), ('_', 4, 5)]`
input: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']`
before: `[('LOC', 0, 2), ('TIME', 4, 5)]`
after: `[('ARGM-LOC', 0, 2), ('ARGM-TIME', 4, 5)]`
This is my test code:
```python
from metrics.seqeval.seqeval import end_of_chunk, start_of_chunk
def before_get_entities(seq, suffix=False):
"""Gets entities from sequence.
Args:
seq (list): sequence of labels.
Returns:
list: list of (chunk_type, chunk_start, chunk_end).
"""
if any(isinstance(s, list) for s in seq):
seq = [item for sublist in seq for item in sublist + ['O']]
prev_tag = 'O'
prev_type = ''
begin_offset = 0
chunks = []
for i, chunk in enumerate(seq + ['O']):
if suffix:
tag = chunk[-1]
type_ = chunk.split('-')[0]
else:
tag = chunk[0]
type_ = chunk.split('-')[-1]
if end_of_chunk(prev_tag, tag, prev_type, type_):
chunks.append((prev_type, begin_offset, i - 1))
if start_of_chunk(prev_tag, tag, prev_type, type_):
begin_offset = i
prev_tag = tag
prev_type = type_
return chunks
def after_get_entities(seq, suffix=False):
"""Gets entities from sequence.
Args:
seq (list): sequence of labels.
Returns:
list: list of (chunk_type, chunk_start, chunk_end).
"""
if any(isinstance(s, list) for s in seq):
seq = [item for sublist in seq for item in sublist + ['O']]
prev_tag = 'O'
prev_type = ''
begin_offset = 0
chunks = []
for i, chunk in enumerate(seq + ['O']):
if suffix:
tag = chunk[-1]
type_ = chunk[:-1].rsplit('-', maxsplit=1)[0] or '_'
else:
tag = chunk[0]
type_ = chunk[1:].split('-', maxsplit=1)[-1] or '_'
if end_of_chunk(prev_tag, tag, prev_type, type_):
chunks.append((prev_type, begin_offset, i - 1))
if start_of_chunk(prev_tag, tag, prev_type, type_):
begin_offset = i
prev_tag = tag
prev_type = type_
return chunks
def main():
examples_1 = ['B', 'I', 'I', 'O', 'B', 'I']
print(before_get_entities(examples_1))
print(after_get_entities(examples_1))
examples_2 = ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']
print(before_get_entities(examples_2))
print(after_get_entities(examples_2))
if __name__ == '__main__':
main()
``` | Fix seqeval process labels such as 'B', 'B-ARGM-LOC' | 296 | text: 🐛[BugFix]fix seqeval
Fix seqeval process labels such as 'B', 'B-ARGM-LOC'
examples:
input: `['B', 'I', 'I', 'O', 'B', 'I']`
before: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`
after: `[('_', 0, 2), ('_', 4, 5)]`
input: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']`
before: `[('LOC', 0, 2), ('TIME', 4, 5)]`
after: `[('ARGM-LOC', 0, 2), ('ARGM-TIME', 4, 5)]`
This is my test code:
```python
from metrics.seqeval.seqeval import end_of_chunk, start_of_chunk
def before_get_entities(seq, suffix=False):
"""Gets entities from sequence.
Args:
seq (list): sequence of labels.
Returns:
list: list of (chunk_type, chunk_start, chunk_end).
"""
if any(isinstance(s, list) for s in seq):
seq = [item for sublist in seq for item in sublist + ['O']]
prev_tag = 'O'
prev_type = ''
begin_offset = 0
chunks = []
for i, chunk in enumerate(seq + ['O']):
if suffix:
tag = chunk[-1]
type_ = chunk.split('-')[0]
else:
tag = chunk[0]
type_ = chunk.split('-')[-1]
if end_of_chunk(prev_tag, tag, prev_type, type_):
chunks.append((prev_type, begin_offset, i - 1))
if start_of_chunk(prev_tag, tag, prev_type, type_):
begin_offset = i
prev_tag = tag
prev_type = type_
return chunks
def after_get_entities(seq, suffix=False):
"""Gets entities from sequence.
Args:
seq (list): sequence of labels.
Returns:
list: list of (chunk_type, chunk_start, chunk_end).
"""
if any(isinstance(s, list) for s in seq):
seq = [item for sublist in seq for item in sublist + ['O']]
prev_tag = 'O'
prev_type = ''
begin_offset = 0
chunks = []
for i, chunk in enumerate(seq + ['O']):
if suffix:
tag = chunk[-1]
type_ = chunk[:-1].rsplit('-', maxsplit=1)[0] or '_'
else:
tag = chunk[0]
type_ = chunk[1:].split('-', maxsplit=1)[-1] or '_'
if end_of_chunk(prev_tag, tag, prev_type, type_):
chunks.append((prev_type, begin_offset, i - 1))
if start_of_chunk(prev_tag, tag, prev_type, type_):
begin_offset = i
prev_tag = tag
prev_type = type_
return chunks
def main():
examples_1 = ['B', 'I', 'I', 'O', 'B', 'I']
print(before_get_entities(examples_1))
print(after_get_entities(examples_1))
examples_2 = ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']
print(before_get_entities(examples_2))
print(after_get_entities(examples_2))
if __name__ == '__main__':
main()
``` |
https://github.com/huggingface/datasets/pull/352 | 🐛[BugFix]fix seqeval | And we can find more examples that were not handled correctly, such as:
input: `['B', 'I', 'I-I']`
before: `[('B', 0, 0), ('I', 1, 2)]`
after: `[('_', 0, 1), ('I', 2, 2)]`
input: `['B-ARGM-TIME', 'I-ARGM-TIME', 'I-TIME']`
before: `[('TIME', 0, 2)]`
after: `[('ARGM-TIME', 0, 1), ('TIME', 2, 2)]` | Fix seqeval process labels such as 'B', 'B-ARGM-LOC' | 43 | text: 🐛[BugFix]fix seqeval
Fix seqeval process labels such as 'B', 'B-ARGM-LOC'
And we can find more examples that were not handled correctly, such as:
input: `['B', 'I', 'I-I']`
before: `[('B', 0, 0), ('I', 1, 2)]`
after: `[('_', 0, 1), ('I', 2, 2)]`
input: `['B-ARGM-TIME', 'I-ARGM-TIME', 'I-TIME']`
before: `[('TIME', 0, 2)]`
after: `[('ARGM-TIME', 0, 1), ('TIME', 2, 2)]` |
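For completeness, both cases can be reproduced with the `after_get_entities` helper from the test script earlier in this thread (expected outputs taken from the lists above):
```python
# Reuses after_get_entities as defined in the test script above
print(after_get_entities(['B', 'I', 'I-I']))
# [('_', 0, 1), ('I', 2, 2)]
print(after_get_entities(['B-ARGM-TIME', 'I-ARGM-TIME', 'I-TIME']))
# [('ARGM-TIME', 0, 1), ('TIME', 2, 2)]
```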
https://github.com/huggingface/datasets/pull/349 | Hyperpartisan news detection | Thank you so much for working on this! This is awesome!
How much would it help you if we removed the manual request?
We are naturally interested in getting some broad idea of how many people are using our dataset and who they are. But if you consider hosting the dataset yourself, I would rather remove this small barrier on our side (so that we then still get the download count from your library). | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`?
| 73 | text: Hyperpartisan news detection
Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`?
Thank you so much for working on this! This is awesome!
How much would it help you if we removed the manual request?
We are naturally interested in getting some broad idea of how many people are using our dataset and who they are. But if you consider hosting the dataset yourself, I would rather remove this small barrier on our side (so that we then still get the download count from your library).
https://github.com/huggingface/datasets/pull/349 | Hyperpartisan news detection | This is an interesting aspect indeed!
Do you want to send me an email (see my homepage) and I'll invite you on our slack channel to talk about that?
@ghomasHudson wanna reach out to me as well? I tried to find your email to invite you without success. | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`?
| 48 | text: Hyperpartisan news detection
Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`?
This is an interesting aspect indeed!
Do you want to send me an email (see my homepage) and I'll invite you on our slack channel to talk about that?
@ghomasHudson wanna reach out to me as well? I tried to find your email to invite you without success. |
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | > @pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).
But can I do the dummy data without running `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` first? 🤔 | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 33 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
> @pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).
But can I do the dummy data without running `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` first? 🤔 |
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | Hi ! I've been busy but I plan to compute the missing metadata soon !
Looking forward to being able to load a memory-mapped version of OSCAR :) | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 29 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
Hi ! I've been busy but I plan to compute the missing metadata soon !
Looking forward to being able to load a memory-mapped version of OSCAR :)
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | > Hi ! I've been busy but I plan to compute the missing metadata soon !
> Looking forward to being able to load a memory-mapped version of OSCAR :)
Amazing! Thanks! 😄 | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 34 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
> Hi ! I've been busy but I plan to compute the missing metadata soon !
> Looking forward to being able to load a memory-mapped version of OSCAR :)
Amazing! Thanks! 😄 |
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | Hi there, are there any plans to complete this issue soon? I'm planning to use this dataset on a project. Let me know if there's anything I can do to help finish this 🤗 | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 35 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
Hi there, are there any plans to complete this issue soon? I'm planning to use this dataset on a project. Let me know if there's anything I can do to help finish this 🤗
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | Yes it will be added soon :)
Recently the OSCAR data files were moved to another host. We just need to update the script and compute the dataset_infos.json (it will probably take a few days). | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 35 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
Yes it will be added soon :)
Recently the OSCAR data files were moved to another host. We just need to update the script and compute the dataset_infos.json (it will probably take a few days). |
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | @lhoestq I've seen in oscar.py that it isn't a dataset script with a manual download step. Is that correct?
Some time ago, @pjox had some trouble with his servers providing that dataset because it's really huge. Providing it via automatic download seems a little bit dangerous to me 😄 | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 52 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
@lhoestq I've seen in oscar.py that it isn't a dataset script with a manual download step. Is that correct?
Some time ago, @pjox had some trouble with his servers providing that dataset because it's really huge. Providing it via automatic download seems a little bit dangerous to me 😄
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | Now thanks to @pjox 's help OSCAR is hosted on HF's S3, which is probably more robust than the previous servers :)
Also small update on my side:
I launched the computation of the dataset_infos.json file; it will take a few days. | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 42 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
Now thanks to @pjox 's help OSCAR is hosted on HF's S3, which is probably more robust than the previous servers :)
Also small update on my side:
I launched the computation of the dataset_infos.json file; it will take a few days.
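For reference, that computation presumably goes through the same command shown earlier in this thread for other datasets (the dataset folder name is an assumption):
```
python nlp-cli test datasets/oscar --save_infos --all_configs
```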
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | I thought that you wouldn't provide the unshuffled version because of this comment in oscar.py:
`# TODO(oscar): Implement unshuffled OSCAR`
| I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 19 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
I thought that you wouldn't provide the unshuffled version because of this comment in oscar.py:
`# TODO(oscar): Implement unshuffled OSCAR`
|
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | That TODO is normal, I haven't touched the Python script in months (I haven't had the time, sorry), but I guess @lhoestq fixed the paths if he's already working on the metadata. In any case, from now on only the unshuffled versions of OSCAR will be distributed through the hf/datasets library, as that is the version most people use to train language models.
If, for any reason, you need the shuffled version, it will always be available on the [OSCAR website](https://oscar-corpus.com).
Also future versions of OSCAR will be unshuffled only. | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 93 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
That TODO is normal, I haven't touched the Python script in months (I haven't had the time, sorry), but I guess @lhoestq fixed the paths if he's already working on the metadata. In any case, from now on only the unshuffled versions of OSCAR will be distributed through the hf/datasets library, as that is the version most people use to train language models.
If, for any reason, you need the shuffled version, it will always be available on the [OSCAR website](https://oscar-corpus.com).
Also future versions of OSCAR will be unshuffled only. |
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | @lhoestq just a little detail, is the Oscar version that HF offers the same one that was available on INRIA? By that I mean, have you done any further filtering or removing of data inside it? Thanks a lot! | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 39 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
@lhoestq just a little detail, is the Oscar version that HF offers the same one that was available on INRIA? By that I mean, have you done any further filtering or removing of data inside it? Thanks a lot! |
https://github.com/huggingface/datasets/pull/348 | Add OSCAR dataset | Hello @jchwenger, this is exactly the same (unshuffled) version that's available at Inria. Sadly no further filtering is provided, but after the latest OSCAR audit (https://arxiv.org/abs/2103.12028) we're already working on future versions of OSCAR that will be "filtered" and that will be available on the OSCAR website and hopefully here as well. | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | 52 | text: Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks!
Hello @jchwenger, this is exactly the same (unshuffled) version that's available at Inria. Sadly no further filtering is provided, but after the latest OSCAR audit (https://arxiv.org/abs/2103.12028) we're already working on future versions of OSCAR that will be "filtered" and that will be available on the OSCAR website and hopefully here as well. |
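Once this lands, loading should presumably look like any other dataset; a sketch (the `unshuffled_deduplicated_en` config name follows the naming pattern of the loading script and is an assumption here):
```python
from datasets import load_dataset

# Download and memory-map the unshuffled, deduplicated English portion of OSCAR
dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train")
print(dataset[0]["text"])
```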
https://github.com/huggingface/datasets/pull/346 | Add emotion dataset | I've tried it and am getting the same error as you.
You could use the text files rather than the pickle:
```
https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt
https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt
https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt
```
Then you would get all 3 splits rather than just the train split. | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | 39 | text: Add emotion dataset
Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
```
I've tried it and am getting the same error as you.
You could use the text files rather than the pickle:
```
https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt
https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt
https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt
```
Then you would get all 3 splits rather than just the train split. |
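A sketch of how those three files could be wired into the loading script's `_split_generators` (the URL dict keys and the `?dl=1` suffix, which makes Dropbox serve the raw file, are assumptions):
```python
import nlp

_URLS = {
    "train": "https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1",
    "validation": "https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1",
    "test": "https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1",
}

def _split_generators(self, dl_manager):
    """Download the three text files and map them to the standard splits."""
    paths = dl_manager.download_and_extract(_URLS)
    return [
        nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}),
        nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": paths["validation"]}),
        nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": paths["test"]}),
    ]
```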
https://github.com/huggingface/datasets/pull/346 | Add emotion dataset | Thanks a lot @ghomasHudson - silly me for not spotting that!
I'll keep the PR open for now since I'm quite close to wrapping it up. | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | 26 | text: Add emotion dataset
Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
```
Thanks a lot @ghomasHudson - silly me for not spotting that!
I'll keep the PR open for now since I'm quite close to wrapping it up. |
https://github.com/huggingface/datasets/pull/346 | Add emotion dataset | Hi @ghomasHudson, your suggestion worked like a charm - the PR is now ready for review 😎
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | 17 | text: Add emotion dataset
Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
```
Hi @ghomasHudson, your suggestion worked like a charm - the PR is now ready for review 😎
https://github.com/huggingface/datasets/pull/346 | Add emotion dataset | Hello, I probably have a silly question, but the labels of the emotion dataset are in the form of numbers and not strings, so I cannot use the classification_report function because it mixes numbers and strings (the predictions). How can I access the label in the form of a string and not a number?
Thank you in advance. | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | 58 | text: Add emotion dataset
Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
```
Hello, I probably have a silly question, but the labels of the emotion dataset are in the form of numbers and not strings, so I cannot use the classification_report function because it mixes numbers and strings (the predictions). How can I access the label in the form of a string and not a number?
Thank you in advance. |
https://github.com/huggingface/datasets/pull/346 | Add emotion dataset | Hi @juliette-sch! Yes, I believe that having the labels as integers is now the default for many classification datasets. You can access the string label via the `ClassLabel.int2str` function ([docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=int2str#datasets.ClassLabel.int2str)), so you could add a new column to the dataset as follows:
```python
from datasets import load_dataset
emotions = load_dataset("emotion")
def label_int2str(row):
return {"label_name": emotions["train"].features["label"].int2str(row["label"])}
# adds a new column called `label_name`
emotions = emotions.map(label_int2str)
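
# As a follow-up for the classification_report use case from the question:
# integer predictions can also be kept as-is and the string names passed to
# sklearn separately (y_true / y_pred below are placeholders, not defined here)
from sklearn.metrics import classification_report
label_names = emotions["train"].features["label"].names  # ClassLabel keeps the mapping
# print(classification_report(y_true, y_pred, target_names=label_names))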
``` | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | 66 | text: Add emotion dataset
Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck on what to do when the URL for the dataset is not a ZIP file but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
```
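A note on the traceback above: a `_pickle.UnpicklingError: invalid load key, '<'` almost always means the downloaded file starts with `<`, i.e. it is an HTML page rather than the pickle itself. Dropbox share links return an HTML preview unless the `dl=1` query parameter is set, so appending it to the URL should fix the download. A minimal sketch of the idea (the `_URL` constant and the DataFrame column names are assumptions, not the script's actual code):
```python
import pickle

# Hypothetical URL constant in emotion.py; `?dl=1` asks Dropbox for the raw
# file instead of its HTML preview page.
_URL = "https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl?dl=1"

def _generate_examples(self, filepath):
    # `filepath` is the cached file from the download manager; its hash-like
    # name is irrelevant, pickle only cares about the file contents.
    with open(filepath, "rb") as f:
        data = pickle.load(f)  # a pandas.DataFrame
    for idx, row in data.iterrows():
        yield idx, {"text": row["text"], "emotions": row["emotions"]}
```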
Hi @juliette-sch! Yes, I believe that having the labels as integers is now the default for many classification datasets. You can access the string label via the `ClassLabel.int2str` function ([docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=int2str#datasets.ClassLabel.int2str)), so you could add a new column to the dataset as follows:
```python
from datasets import load_dataset
emotions = load_dataset("emotion")
def label_int2str(row):
return {"label_name": emotions["train"].features["label"].int2str(row["label"])}
# adds a new column called `label_name`
emotions = emotions.map(label_int2str)
``` |
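For completeness, `ClassLabel` also exposes the inverse mapping and the full label list, which can help when checking the conversion. A small sketch (the six label names shown are the ones commonly reported for this dataset and may differ):
```python
label_feature = emotions["train"].features["label"]
print(label_feature.names)           # e.g. ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
print(label_feature.str2int("joy"))  # e.g. 1, the inverse of int2str
```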
https://github.com/huggingface/datasets/pull/344 | Search qa | Could you rebase from master just to make sure we won't break anything for `fever` pls @mariamabarham ? | This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names:
- raw_jeopardy: raw data
- train_test_val: the split version
#336 | 18 | text: Search qa
This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names:
- raw_jeopardy: raw data
- train_test_val: the split version
#336
Could you rebase from master just to make sure we won't break anything for `fever` pls @mariamabarham ? |
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | Really cool @jarednielsen !
Do you think we can make it work with datasets with nested features like `squad` ?
I just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`. | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 52 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
Really cool @jarednielsen !
Do you think we can make it work with datasets with nested features like `squad` ?
I just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`. |
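As background for the `from_generator()` point in the PR description: a generator pulls rows one at a time, so the `tf.data` pipeline never needs the whole table in memory, unlike `from_tensor_slices()`. A minimal, self-contained sketch of the pattern (the feature names are illustrative; a real `nlp.Dataset` would be iterated the same way):
```python
import tensorflow as tf

# Stand-in rows; any iterable of dicts, including an nlp.Dataset, works alike.
rows = [
    {"input_ids": [101, 2054, 102], "label": 0},
    {"input_ids": [101, 2129, 2003, 102], "label": 1},
]

def gen():
    for row in rows:  # rows are yielded lazily, keeping memory usage flat
        yield {"input_ids": row["input_ids"], "label": row["label"]}

tf_dataset = tf.data.Dataset.from_generator(
    gen,
    output_types={"input_ids": tf.int64, "label": tf.int64},
    output_shapes={"input_ids": tf.TensorShape([None]), "label": tf.TensorShape([])},
)

for example in tf_dataset:
    print(example["label"].numpy())
```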
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | For datasets with nested features we have two aspects to take into account:
1) There can be nested dicts of features. What is done in tensorflow_datasets to make things work is to flatten the dictionaries to end up with a single dictionary. A dict like `{"column1": {"subfeature": ...}}` is converted to `{"column1/subfeature":...}`
2) There can be ragged tensors, i.e. lists of objects with non-fixed shapes. For example in squad there are often multiple possible answers per question. What is done in tensorflow_datasets to make things work is to concatenate everything and add ragged attributes (cf serialization code [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/example_serializer.py)) | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 98 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
For datasets with nested features we have two aspects to take into account:
1) There can be nested dicts of features. What is done in tensorflow_datasets to make things work is to flatten the dictionaries to end up with a single dictionary. A dict like `{"column1": {"subfeature": ...}}` is converted to `{"column1/subfeature":...}`
2) There can be ragged tensors, i.e. lists of objects with non-fixed shapes. For example in squad there are often multiple possible answers per question. What is done in tensorflow_datasets to make things work is to concatenate everything and add ragged attributes (cf serialization code [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/example_serializer.py)) |
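Point 1) can be made concrete with a few lines of Python; this is a sketch of the idea, not the helper tensorflow_datasets actually ships:
```python
def flatten_features(features, parent_key=""):
    """Turn {"column1": {"subfeature": 1}} into {"column1/subfeature": 1}."""
    flat = {}
    for key, value in features.items():
        full_key = f"{parent_key}/{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_features(value, full_key))  # recurse into nested dicts
        else:
            flat[full_key] = value
    return flat

print(flatten_features({"column1": {"subfeature": 1}, "id": 0}))
# {'column1/subfeature': 1, 'id': 0}
```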
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | I added support for nested dictionaries. A few more design decisions popped up:
_Should we serialize from NumPy arrays or from tf.Tensors?_
- The [tfds example serializer](url) works from NumPy arrays.
- Calling `dset.set_format("tensorflow")` makes `__getitem__` return a tf.Tensor. So serializing from NumPy arrays would mean calling `dset.export()` before setting the format, which is confusing.
- NumPy arrays can be serialized as their underlying datatype (int, float), while tf.Tensors must be converted to strings before serialization. This adds another step when serializing and deserializing, and removes the static-typing advantages of the TFRecord format.
I think we should export directly from the underlying NumPy arrays into TFRecords, rather than using an intermediate step of tf.Tensor.
_Should we serialize lists of dictionaries?_
- The test_format_nested() test creates a list of dictionaries: https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/tests/test_arrow_dataset.py#L278-L288
- This is difficult to serialize effectively, and I'm not aware of any dataset that has this format. SQuAD has a dictionary of lists, such as the `answers` key. Is this necessary? | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 162 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
I added support for nested dictionaries. A few more design decisions popped up:
_Should we serialize from NumPy arrays or from tf.Tensors?_
- The [tfds example serializer](url) works from NumPy arrays.
- Calling `dset.set_format("tensorflow")` makes `__getitem__` return a tf.Tensor. So serializing from NumPy arrays would mean calling `dset.export()` before setting the format, which is confusing.
- NumPy arrays can be serialized as their underlying datatype (int, float), while tf.Tensors must be converted to strings before serialization. This adds another step when serializing and deserializing, and removes the static-typing advantages of the TFRecord format.
I think we should export directly from the underlying NumPy arrays into TFRecords, rather than using an intermediate step of tf.Tensor.
_Should we serialize lists of dictionaries?_
- The test_format_nested() test creates a list of dictionaries: https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/tests/test_arrow_dataset.py#L278-L288
- This is difficult to serialize effectively, and I'm not aware of any dataset that has this format. SQuAD has a dictionary of lists, such as the `answers` key. Is this necessary? |
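To make the NumPy-first argument concrete: integer and float arrays map directly onto `tf.train.Int64List` and `tf.train.FloatList`, with bytes as the fallback, so no string round-trip is needed. A hedged sketch of such a dispatch helper (not necessarily the one used in this PR):
```python
import numpy as np
import tensorflow as tf

def numpy_to_feature(values):
    values = np.asarray(values).ravel()  # tf.train.Feature lists are 1-d
    if np.issubdtype(values.dtype, np.integer):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
    if np.issubdtype(values.dtype, np.floating):
        return tf.train.Feature(float_list=tf.train.FloatList(value=values))
    # Strings and anything else are serialized as bytes.
    return tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[str(v).encode("utf-8") for v in values])
    )

feature = numpy_to_feature(np.array([1, 2, 3]))  # -> int64_list { value: 1 2 3 }
```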
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | Thanks @thomwolf, used dset.flatten() to simplify. That handles the case of nested dictionaries, and then lists can be read into a tf.io.RaggedFeature in the case of something like squad answers. | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 30 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
Thanks @thomwolf, used dset.flatten() to simplify. That handles the case of nested dictionaries, and then lists can be read into a tf.io.RaggedFeature in the case of something like squad answers. |
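On the read side, the flattened variable-length columns can then be declared as `tf.io.RaggedFeature` when parsing the records back. A sketch, assuming TensorFlow >= 2.2, squad-style flattened column names, and a file produced by the new `export()`:
```python
import tensorflow as tf

feature_description = {
    "context": tf.io.FixedLenFeature([], tf.string),
    "answers.text": tf.io.RaggedFeature(tf.string),        # variable number of answers
    "answers.answer_start": tf.io.RaggedFeature(tf.int64),
}

records = tf.data.TFRecordDataset("squad_like.tfrecord").batch(8)
parsed = records.map(lambda batch: tf.io.parse_example(batch, feature_description))
```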
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | @jarednielsen I just checked and indeed we don't have lists of dicts, we can just focus on the squad format as a reference then :) I'll change the test to remove this format that's not supposed to happen | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 38 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
@jarednielsen I just checked and indeed we don't have lists of dicts, we can just focus on the squad format as a reference then :) I'll change the test to remove this format that's not supposed to happen |
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | Actually I realised that `flatten` also handles nested things like pyarrow's list<struct> so it's fine :D
This is so cool !
Could you also add a test with a squad-like dataset ? As soon as we have that I think we'll be good to merge @jarednielsen :)
Good job ! | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 50 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
Actually I realised that `flatten` also handles nested things like pyarrow's list<struct> so it's fine :D
This is so cool !
Could you also add a test with a squad-like dataset ? As soon as we have that I think we'll be good to merge @jarednielsen :)
Good job ! |
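A squad-like fixture for that test could look roughly like this; the `export()` signature and the flattened column names are assumptions based on this PR, and `flatten` is written here as the in-place call it was at the time:
```python
import nlp

data = {
    "id": ["0", "1"],
    "context": ["the cat sat", "the dog ran"],
    "answers": [
        {"text": ["cat"], "answer_start": [4]},
        {"text": ["dog"], "answer_start": [4]},
    ],
}
dset = nlp.Dataset.from_dict(data)
dset.flatten()            # nested "answers" becomes "answers.text", "answers.answer_start"
dset.set_format("numpy")  # export introspects the format per the PR description
dset.export("squad_like.tfrecord")
```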
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | I tried to match the format of Dataset.sort() and Dataset.shuffle() with the docstring. What difference are you referring to specifically? | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 20 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
I tried to match the format of Dataset.sort() and Dataset.shuffle() with the docstring. What difference are you referring to specifically? |
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | Oh my bad they're fine actually (I was thinking of the backticks that we don't use in the docstrings of the transformers repo for argument names) | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 26 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
Oh my bad they're fine actually (I was thinking of the backticks that we don't use in the docstrings of the transformers repo for argument names) |
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | One final thing: now that we have a brand new documentation, could you just add `export` to the list of documented methods in [docs/source/package_reference/main_classes.rst](https://github.com/huggingface/nlp/blob/master/docs/source/package_reference/main_classes.rst) (so that it will appear in the docs [here](https://huggingface.co/nlp/package_reference/main_classes.html)) ?
| Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 34 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
One final thing: now that we have a brand new documentation, could you just add `export` to the list of documented methods in [docs/source/package_reference/main_classes.rst](https://github.com/huggingface/nlp/blob/master/docs/source/package_reference/main_classes.rst) (so that it will appear in the docs [here](https://huggingface.co/nlp/package_reference/main_classes.html)) ?
|
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | Since #403 (it just got merged), we return python objects and not numpy arrays anymore (unless format="numpy" is specified).
Do you think it can break the export method ? Could you try to rebase from master to run the CI to make sure it's fine ? | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 46 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
Since #403 (it just got merged), we return python objects and not numpy arrays anymore (unless format="numpy" is specified).
Do you think it can break the export method ? Could you try to rebase from master to run the CI to make sure it's fine ? |
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | Good catch. I fixed it up so it works with the new format. By the way, when dset.format == "numpy", it now returns single items (like `0`) as a 0-dimensional NumPy array. Not sure if that is desired. | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 38 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
Good catch. I fixed it up so it works with the new format. By the way, when dset.format == "numpy", it now returns single items (like `0`) as a 0-dimensional NumPy array. Not sure if that is desired. |
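To illustrate the 0-dimensional behaviour mentioned above (these are plain NumPy facts, independent of this PR):
```python
import numpy as np

scalar = np.array(0)   # what format="numpy" now returns for a single int
print(scalar.ndim)     # 0
print(scalar.shape)    # ()
print(scalar.item())   # 0, .item() recovers the plain Python scalar
print(int(scalar))     # 0, int() works as well
```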
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | I played a little bit with the code and it works quite well :)
I found two cases for which it doesn't work though:
- if the features dict depth is > 2 (ex: wikisql), because `flatten` only flattens the first level of nesting (it can be fixed by calling `flatten` several times in a row, see [here](https://issues.apache.org/jira/browse/ARROW-4090))
- Or if there are 2d features (ex: wikisql, `table.rows` is a sequence of sequences of strings), because tf.train.Features only support 1-d lists. That's why tensorflow-datasets flattens these 2-d features to 1-d and adds ragged features that are the shapes of the arrays, so that they can be reconstructed.
I think we can ignore the 2d stuff right now (some work is being done in #363 ), but I'd like to see the `flatten` issue fixed soon
| Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 135 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
I played a little bit with the code and it works quite well :)
I found two cases for which it doesn't work though:
- if the features dict depth is > 2 (ex: wikisql), because `flatten` only flattens the first level of nesting (it can be fixed by calling `flatten` several times in a row, see [here](https://issues.apache.org/jira/browse/ARROW-4090))
- Or if there are 2d features (ex: wikisql, `table.rows` is a sequence of sequences of strings), because tf.train.Features only support 1-d lists. That's why tensorflow-datasets flattens these 2-d features to 1-d and adds ragged features that are the shapes of the arrays, so that they can be reconstructed.
I think we can ignore the 2d stuff right now (some work is being done in #363 ), but I'd like to see the `flatten` issue fixed soon
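The 2-d case can be sketched end to end: flatten the ragged column to 1-d values plus per-row lengths, serialize both, and rebuild with `tf.RaggedTensor.from_row_lengths` on the way back (the feature key names below are made up):
```python
import tensorflow as tf

rows = [["a", "b"], ["c", "d", "e"]]  # e.g. wikisql's table.rows, ragged in 2-d

# Serialize: flatten to 1-d and record where each row ends.
flat_values = [cell.encode("utf-8") for row in rows for cell in row]
row_lengths = [len(row) for row in rows]
example = tf.train.Example(features=tf.train.Features(feature={
    "rows/flat_values": tf.train.Feature(bytes_list=tf.train.BytesList(value=flat_values)),
    "rows/row_lengths": tf.train.Feature(int64_list=tf.train.Int64List(value=row_lengths)),
}))

# Deserialize: the two 1-d lists are enough to rebuild the ragged structure.
ragged = tf.RaggedTensor.from_row_lengths(flat_values, row_lengths)
print(ragged)  # <tf.RaggedTensor [[b'a', b'b'], [b'c', b'd', b'e']]>
```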
|
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | That seems like a bug in `pyarrow`, or at least in `flatten()`. Looks like it should be a separate PR. | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 20 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
That seems like a bug in `pyarrow`, or at least in `flatten()`. Looks like it should be a separate PR. |
https://github.com/huggingface/datasets/pull/339 | Add dataset.export() to TFRecords | I made `.flatten` work on our side (it calls pyarrow's flatten several times until it's really flat).
The only datasets that won't work are those with lists of lists of features, which is a rare case. Hopefully we can make this work with the multi-dimensional arrays changes we're also doing.
I think we can merge now :) cc @thomwolf | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | 59 | text: Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
I made `.flatten` work on our side (it calls pyarrow's flatten several times until it's really flat).
The only datasets that won't work are those with lists of lists of features, which is a rare case. Hopefully we can make this work with the multi-dimensional arrays changes we're also doing.
I think we can merge now :) cc @thomwolf |
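The repeated-flatten trick looks roughly like this on the pyarrow side (a sketch of the idea, not the library's exact code):
```python
import pyarrow as pa

def fully_flatten(table: pa.Table) -> pa.Table:
    # pa.Table.flatten() only removes one level of struct nesting per call,
    # so loop until no struct columns are left.
    while any(pa.types.is_struct(field.type) for field in table.schema):
        table = table.flatten()
    return table

table = pa.table({"a": [{"b": {"c": 1}}, {"b": {"c": 2}}]})
print(fully_flatten(table).column_names)  # ['a.b.c']
```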
https://github.com/huggingface/datasets/pull/335 | BioMRC Dataset presented in BioNLP 2020 ACL Workshop | I fixed the issues that you pointed out, re-ran all the tests and pushed the fixed code :-) |  | 18 | text: BioMRC Dataset presented in BioNLP 2020 ACL Workshop
I fixed the issues that you pointed out, re-ran all the tests and pushed the fixed code :-)
|
https://github.com/huggingface/datasets/pull/335 | BioMRC Dataset presented in BioNLP 2020 ACL Workshop | ```
=================================== FAILURES ===================================
___________________ AWSDatasetTest.test_load_dataset_pandas ____________________
self = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_pandas>
dataset_name = 'pandas'
def test_load_dataset(self, dataset_name):
configs = self.dataset_tester.load_all_configs(dataset_name)[:1]
> self.dataset_tester.check_load_dataset(dataset_name, configs)
tests/test_dataset_common.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:125: in check_load_dataset
dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True
../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nlp.datasets.pandas.91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926.pandas.Pandas object at 0x7f3b84f655c0>
dl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b84f3d320>
def _split_generators(self, dl_manager):
""" We handle string, list and dicts in datafiles
"""
if isinstance(self.config.data_files, (str, list, tuple)):
files = self.config.data_files
if isinstance(files, str):
files = [files]
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"files": files})]
splits = []
for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:
> if split_name in self.config.data_files:
E TypeError: argument of type 'NoneType' is not iterable
../.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py:23: TypeError
------------------------------ Captured log call -------------------------------
INFO filelock:filelock.py:274 Lock 139893169180856 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpwmbk8e8d
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO filelock:filelock.py:318 Lock 139893169180856 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893610536912 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json
INFO filelock:filelock.py:318 Lock 139893610536912 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO filelock:filelock.py:274 Lock 139893610533608 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmp00hpyxrs
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO filelock:filelock.py:318 Lock 139893610533608 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893610371224 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json
INFO filelock:filelock.py:318 Lock 139893610371224 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
WARNING nlp.builder:builder.py:215 Using custom data configuration default
INFO nlp.builder:builder.py:349 Generating dataset pandas (/tmp/tmp296h8eeg/pandas/default/0.0.0)
INFO nlp.builder:builder.py:397 Dataset not on Hf google storage. Downloading and preparing it from source
____________________ AWSDatasetTest.test_load_dataset_text _____________________
self = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>
dataset_name = 'text'
def test_load_dataset(self, dataset_name):
configs = self.dataset_tester.load_all_configs(dataset_name)[:1]
> self.dataset_tester.check_load_dataset(dataset_name, configs)
tests/test_dataset_common.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:125: in check_load_dataset
dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True
../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7f3b6a111550>
dl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b85582908>
def _split_generators(self, dl_manager):
""" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].
If str or List[str], then the dataset returns only the 'train' split.
If dict, then keys should be from the `nlp.Split` enum.
"""
if isinstance(self.config.data_files, (str, list, tuple)):
# Handle case with only one split
files = self.config.data_files
if isinstance(files, str):
files = [files]
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"files": files})]
else:
# Handle case with several splits and a dict mapping
splits = []
for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:
> if split_name in self.config.data_files:
E TypeError: argument of type 'NoneType' is not iterable
../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError
------------------------------ Captured log call -------------------------------
INFO filelock:filelock.py:274 Lock 139893159303656 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpk63omy4v
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO filelock:filelock.py:318 Lock 139893159303656 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893159171352 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json
INFO filelock:filelock.py:318 Lock 139893159171352 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO filelock:filelock.py:274 Lock 139893618479176 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpkeykru_f
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO filelock:filelock.py:318 Lock 139893618479176 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893618423848 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json
INFO filelock:filelock.py:318 Lock 139893618423848 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
WARNING nlp.builder:builder.py:215 Using custom data configuration default
INFO nlp.builder:builder.py:349 Generating dataset text (/tmp/tmpbu67mvue/text/default/0.0.0)
INFO nlp.builder:builder.py:397 Dataset not on Hf google storage. Downloading and preparing it from source
=============================== warnings summary ===============================
/home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15
/home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_tydiqa
/home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/tydiqa/42d88245bde7c0db6c0d48c822dcaa26c7299e0b40cace7e8d6a9e3628135125/tydiqa.py:85: DeprecationWarning: invalid escape sequence \G
"""
tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_mwsc
/home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/mwsc/53c0daac11b6794ff62b52a3a46c4f9da1bef68fd664a2f97b8918917aead715/mwsc.py:70: DeprecationWarning: invalid escape sequence \[
pattern = "\[.*\]"
tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_squadshifts
/home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/squadshifts/15536d7296a785325b99f6d84dfdceafa427419dd6caad110eabb5e5b4156cc2/squadshifts.py:47: DeprecationWarning: invalid escape sequence \
"""
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_pandas
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text
===== 2 failed, 934 passed, 516 skipped, 4 warnings in 1562.46s (0:26:02) ======
Exited with code exit status 1
CircleCI received exit code 1
```
I get these failed tests on CircleCI, but all the tests that I ran locally were successful. The error also doesn't seem to have any obvious connection with my code, at least as far as I can tell.
Any suggestions? Thanks! :-) | 1,069 | text: BioMRC Dataset presented in BioNLP 2020 ACL Workshop
```
=================================== FAILURES ===================================
___________________ AWSDatasetTest.test_load_dataset_pandas ____________________
self = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_pandas>
dataset_name = 'pandas'
def test_load_dataset(self, dataset_name):
configs = self.dataset_tester.load_all_configs(dataset_name)[:1]
> self.dataset_tester.check_load_dataset(dataset_name, configs)
tests/test_dataset_common.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:125: in check_load_dataset
dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True
../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nlp.datasets.pandas.91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926.pandas.Pandas object at 0x7f3b84f655c0>
dl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b84f3d320>
def _split_generators(self, dl_manager):
""" We handle string, list and dicts in datafiles
"""
if isinstance(self.config.data_files, (str, list, tuple)):
files = self.config.data_files
if isinstance(files, str):
files = [files]
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"files": files})]
splits = []
for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:
> if split_name in self.config.data_files:
E TypeError: argument of type 'NoneType' is not iterable
../.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py:23: TypeError
------------------------------ Captured log call -------------------------------
INFO filelock:filelock.py:274 Lock 139893169180856 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpwmbk8e8d
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO filelock:filelock.py:318 Lock 139893169180856 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893610536912 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json
INFO filelock:filelock.py:318 Lock 139893610536912 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO filelock:filelock.py:274 Lock 139893610533608 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmp00hpyxrs
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py
INFO filelock:filelock.py:318 Lock 139893610533608 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893610371224 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json
INFO filelock:filelock.py:318 Lock 139893610371224 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock
WARNING nlp.builder:builder.py:215 Using custom data configuration default
INFO nlp.builder:builder.py:349 Generating dataset pandas (/tmp/tmp296h8eeg/pandas/default/0.0.0)
INFO nlp.builder:builder.py:397 Dataset not on Hf google storage. Downloading and preparing it from source
____________________ AWSDatasetTest.test_load_dataset_text _____________________
self = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>
dataset_name = 'text'
def test_load_dataset(self, dataset_name):
configs = self.dataset_tester.load_all_configs(dataset_name)[:1]
> self.dataset_tester.check_load_dataset(dataset_name, configs)
tests/test_dataset_common.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:125: in check_load_dataset
dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True
../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7f3b6a111550>
dl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b85582908>
def _split_generators(self, dl_manager):
""" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].
If str or List[str], then the dataset returns only the 'train' split.
If dict, then keys should be from the `nlp.Split` enum.
"""
if isinstance(self.config.data_files, (str, list, tuple)):
# Handle case with only one split
files = self.config.data_files
if isinstance(files, str):
files = [files]
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"files": files})]
else:
# Handle case with several splits and a dict mapping
splits = []
for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:
> if split_name in self.config.data_files:
E TypeError: argument of type 'NoneType' is not iterable
../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError
------------------------------ Captured log call -------------------------------
INFO filelock:filelock.py:274 Lock 139893159303656 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpk63omy4v
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO filelock:filelock.py:318 Lock 139893159303656 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893159171352 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json
INFO filelock:filelock.py:318 Lock 139893159171352 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO filelock:filelock.py:274 Lock 139893618479176 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpkeykru_f
INFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py
INFO filelock:filelock.py:318 Lock 139893618479176 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.
INFO filelock:filelock.py:274 Lock 139893618423848 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
INFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text
INFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b
INFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py
INFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json
INFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json
INFO filelock:filelock.py:318 Lock 139893618423848 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock
WARNING nlp.builder:builder.py:215 Using custom data configuration default
INFO nlp.builder:builder.py:349 Generating dataset text (/tmp/tmpbu67mvue/text/default/0.0.0)
INFO nlp.builder:builder.py:397 Dataset not on Hf google storage. Downloading and preparing it from source
=============================== warnings summary ===============================
/home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15
/home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_tydiqa
/home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/tydiqa/42d88245bde7c0db6c0d48c822dcaa26c7299e0b40cace7e8d6a9e3628135125/tydiqa.py:85: DeprecationWarning: invalid escape sequence \G
"""
tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_mwsc
/home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/mwsc/53c0daac11b6794ff62b52a3a46c4f9da1bef68fd664a2f97b8918917aead715/mwsc.py:70: DeprecationWarning: invalid escape sequence \[
pattern = "\[.*\]"
tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_squadshifts
/home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/squadshifts/15536d7296a785325b99f6d84dfdceafa427419dd6caad110eabb5e5b4156cc2/squadshifts.py:47: DeprecationWarning: invalid escape sequence \
"""
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_pandas
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text
===== 2 failed, 934 passed, 516 skipped, 4 warnings in 1562.46s (0:26:02) ======
Exited with code exit status 1
CircleCI received exit code 1
```
I get these failed tests on CircleCI, but all the tests that I ran locally were successful. The error also doesn't seem to have any obvious connection with my code, at least as far as I can tell.
Any suggestions? Thanks! :-) |
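For context, the traceback above comes from iterating over `self.config.data_files` when it is `None`; a guard along these lines would sidestep it (an illustration only, not necessarily the fix that was applied upstream):
```python
def _split_generators(self, dl_manager):
    # Sketch: treat a missing data_files as an empty mapping before iterating.
    data_files = self.config.data_files or {}
    if isinstance(data_files, (str, list, tuple)):
        files = [data_files] if isinstance(data_files, str) else data_files
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"files": files})]
    splits = []
    for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:
        if split_name in data_files:
            splits.append(nlp.SplitGenerator(name=split_name, gen_kwargs={"files": data_files[split_name]}))
    return splits
```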
|
https://github.com/huggingface/datasets/pull/333 | fix variable name typo | Good catch :)
I think there is another occurrence that needs to be fixed in the second gist (line 4924 of the notebook file):
```python
bleu = nlp.load_metric(...)
``` | 29 | text: fix variable name typo
Good catch :)
I think there is another occurrence that needs to be fixed in the second gist (line 4924 of the notebook file):
```python
bleu = nlp.load_metric(...)
``` |
|
https://github.com/huggingface/datasets/pull/332 | Add wiki_dpr | The two configurations don't have the same sizes; I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.
One configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing. | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of URLs as input to the download_manager | 50 | text: Add wiki_dpr
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings (a rough sketch follows this list). I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of URLs as input to the download_manager
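As a concrete illustration of that non-fixed-size sequence, the feature spec could be declared roughly like this (a sketch; the column names are assumptions, and the exact spec lives in the dataset script):
```python
import nlp

# Sketch: a variable-length float sequence for the 768-dim embeddings.
features = nlp.Features(
    {
        "id": nlp.Value("string"),
        "text": nlp.Value("string"),
        "title": nlp.Value("string"),
        "embeddings": nlp.features.Sequence(nlp.Value("float32")),
    }
)
```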
The two configurations don't have the same sizes; I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.
One configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing. |
https://github.com/huggingface/datasets/pull/332 | Add wiki_dpr | It's ok to merge now imo. I'll make another PR if we find a way to get the missing embeddings | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of URLs as input to the download_manager | 20 | text: Add wiki_dpr
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of URLs as input to the download_manager
It's ok to merge now imo. I'll make another PR if we find a way to get the missing embeddings
https://github.com/huggingface/datasets/pull/323 | Add package path to sys when downloading package as github archive | Sorry for the long diff; everything after the imports comes from `black` for code quality :/ | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
This PR fixes https://github.com/huggingface/nlp/issues/305 | 16 | text: Add package path to sys when downloading package as github archive
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
This PR fixes https://github.com/huggingface/nlp/issues/305
Sorry for the long diff; everything after the imports comes from `black` for code quality :/
https://github.com/huggingface/datasets/pull/323 | Add package path to sys when downloading package as github archive | I think it's fine and I can't think of another way to make the import work anyways.
Maybe we can have the `sys.path` behavior inside `prepare_module` instead? Currently it seems to come out of nowhere in the code ^^'
We could check if external imports have an `__init__.py`, and if that is the case then we can add the directory to the `PYTHONPATH` | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
This PR fixes https://github.com/huggingface/nlp/issues/305 | 64 | text: Add package path to sys when downloading package as github archive
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
This PR fixes https://github.com/huggingface/nlp/issues/305
I think it's fine and I can't think of another way to make the import work anyways.
Maybe we can have the `sys.path` behavior inside `prepare_module` instead? Currently it seems to come out of nowhere in the code ^^'
We could check if external imports have an `__init__.py`, and if that is the case then we can add the directory to the `PYTHONPATH`
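A minimal sketch of that `__init__.py` check (function name and placement are guesses, not the library's API):
```python
import os
import sys

def maybe_add_to_path(external_import_dir: str):
    # If the downloaded external import is a real package (it ships an
    # __init__.py), put its parent directory on sys.path so that the
    # package's own imports resolve.
    if os.path.isfile(os.path.join(external_import_dir, "__init__.py")):
        parent_dir = os.path.dirname(external_import_dir)
        if parent_dir not in sys.path:
            sys.path.append(parent_dir)
```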
https://github.com/huggingface/datasets/pull/318 | Multitask | It's definitely going in the right direction! Thanks for giving it a try
I really like the API.
IMO it's fine right now if we don't have all the dataset transforms (map, filter, etc.) as it can be done before building the multitask dataset, but it will be important to have them in the end.
All the formatting methods could easily be added though.
I think there are some parts that will require some work with Apache Arrow, like slicing. I can find a way to do it using pyarrow table concatenation (I did something similar when implementing `__getitem__` with an input that is a list of indices [here](https://github.com/huggingface/nlp/pull/322/files#diff-73270df8d7f08c62a27e40806e1a5fb0R463-R469)). It is very fast and it keeps the same output format as a normal Dataset.
Also maybe we should check that not only the columns but also the schemas match?
And maybe add the `seed` of the shuffling step as an argument?
| Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound. | 156 | text: Multitask
Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound.
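In terms of usage, the description of `build_multitask()` above suggests something along these lines (a hypothetical call; the import path and exact signature are guesses, since the helper lives on this PR's branch):
```python
import nlp
# Hypothetical import path for the helper added in this PR.
from nlp.multitask import build_multitask

squad = nlp.load_dataset("squad", split="train")
imdb = nlp.load_dataset("imdb", split="train")
multitask = build_multitask(squad, imdb)  # exact signature may differ
```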
It's definitely going in the right direction! Thanks for giving it a try
I really like the API.
IMO it's fine right now if we don't have all the dataset transforms (map, filter, etc.) as it can be done before building the multitask dataset, but it will be important to have them in the end.
All the formatting methods could easily be added though.
I think there are some parts that will require some work with Apache Arrow, like slicing. I can find a way to do it using pyarrow table concatenation (I did something similar when implementing `__getitem__` with an input that is a list of indices [here](https://github.com/huggingface/nlp/pull/322/files#diff-73270df8d7f08c62a27e40806e1a5fb0R463-R469)). It is very fast and it keeps the same output format as a normal Dataset.
Also maybe we should check that not only the columns but also the schemas match?
And maybe add the `seed` of the shuffling step as an argument?
|
https://github.com/huggingface/datasets/pull/318 | Multitask | That's an interesting first draft, thanks a lot for that, and the user-facing API is really nice.
I think we should dive more into this and the questions of #217 before merging the first version though.
In particular, the typical way to do multi-tasking is usually to sample a task and then sample a batch within the selected task. I think we should probably stay closer to this traditional approach, or at least make it very easy to do, rather than get too close to the T5 approach, which is very specific to this paper.
In this regard, it seems important to find some way to address the remarks of @zphang. I'm still wondering if we should not adopt more of a sampling approach rather than an iteration approach. | Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound. | 131 | text: Multitask
Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound.
That's an interesting first draft, thanks a lot for that, and the user-facing API is really nice.
I think we should dive more into this and the questions of #217 before merging the first version though.
In particular, the typical way to do multi-tasking is usually to sample a task and then sample a batch within the selected task. I think we should probably stay closer to this traditional approach, or at least make it very easy to do, rather than get too close to the T5 approach, which is very specific to this paper.
In this regard, it seems important to find some way to address the remarks of @zphang. I'm still wondering if we should not adopt more of a sampling approach rather than an iteration approach. |
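A minimal sketch of that two-step scheme, sampling a task first and then a batch within it (all names here are illustrative):
```python
import random

def sample_task_batch(task_datasets, batch_size, weights=None, seed=None):
    # Step 1: pick a task (optionally weighted); step 2: pick a batch within it.
    rng = random.Random(seed)
    task_names = list(task_datasets)
    task = rng.choices(task_names, weights=weights, k=1)[0]
    dataset = task_datasets[task]
    indices = [rng.randrange(len(dataset)) for _ in range(batch_size)]
    return task, dataset.select(indices)
```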
https://github.com/huggingface/datasets/pull/318 | Multitask | @thomwolf Thanks! I mainly wanted to get something working quickly for my own MTL research. I agree with a lot of the points you made so I'll convert this pull request back to a draft.
For your specific point about 'batch-level' multitask mixing, it would be a pretty trivial change to add a `batch_size` parameter and ensure every `batch_size` examples are from the same task. This would certainly work, but would add a notion of 'batches' to a Dataset, which does feel like a 'Sampler-level' concept and not a Dataset one. There's also the possibility of wanting some specific task-level sampling functionality (e.g. applying `SortishSampler` to each task) which would only work with this kind of 2 step sampling approach. My first proposal in the transformers repo was actually a Sampler https://github.com/huggingface/transformers/issues/4340. I wonder whether functionality at the sampler-level has a place in the vision for the `nlp` repo?
I imagine that, following a sampling approach, you'd have to abandon maintaining the same user-facing API as a standard dataset (a shame, because replacing a single dataset seamlessly with a multitask one is a really nice user experience).
Random half-idea: you could have a class which accepts a list of any iterables (either a Dataset or a DataLoader which is already doing the batching). Not sure what interface you'd present though, hmmm.
There's definitely more discussion to have.
| Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound. | 225 | text: Multitask
Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound.
@thomwolf Thanks! I mainly wanted to get something working quickly for my own MTL research. I agree with a lot of the points you made so I'll convert this pull request back to a draft.
For your specific point about 'batch-level' multitask mixing, it would be a pretty trivial change to add a `batch_size` parameter and ensure every `batch_size` examples are from the same task. This would certainly work, but would add a notion of 'batches' to a Dataset, which does feel like a 'Sampler-level' concept and not a Dataset one. There's also the possibility of wanting some specific task-level sampling functionality (e.g. applying `SortishSampler` to each task) which would only work with this kind of 2 step sampling approach. My first proposal in the transformers repo was actually a Sampler https://github.com/huggingface/transformers/issues/4340. I wonder whether functionality at the sampler-level has a place in the vision for the `nlp` repo?
I imagine that, following a sampling approach, you'd have to abandon maintaining the same user-facing API as a standard dataset (a shame, because replacing a single dataset seamlessly with a multitask one is a really nice user experience).
Random half-idea: you could have a class which accepts a list of any iterables (either a Dataset or a DataLoader which is already doing the batching). Not sure what interface you'd present though, hmmm.
There's definitely more discussion to have.
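Sketching that "list of iterables" half-idea (the interface is invented for illustration):
```python
import random

class MixedIterables:
    """Randomly mix named iterables (Datasets, or DataLoaders doing the batching)."""

    def __init__(self, named_iterables, seed=None):
        self._iterators = {name: iter(it) for name, it in named_iterables.items()}
        self._rng = random.Random(seed)

    def __iter__(self):
        while self._iterators:
            name = self._rng.choice(list(self._iterators))
            try:
                yield name, next(self._iterators[name])
            except StopIteration:
                # Drop exhausted tasks and keep mixing the rest.
                del self._iterators[name]
```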
|