Datasets: Language tags incorrect

The lang column indicates that the following languages are in the data:
>>> ds.unique("lang")
{'train': ['en', 'es', 'de', 'ru', 'ja', 'pt-BR', 'ca', 'fr', 'pl', 'vi', 'zh', 'hu', 'ko', 'eu', 'it', 'uk-UA', 'id', 'ar', 'fi', 'tr', 'da', 'th', 'sv', 'cs'], 'validation': ['ru', 'es', 'en', 'ca', 'uk-UA', 'zh', 'it', 'th', 'fr', 'de', 'eu', 'bn', 'pt-BR', 'vi', 'ja', 'hu']}
Yet the dataset is also tagged with languages such as Dutch, which are not part of this list. Please remove the language tags that are not relevant so that people do not incorrectly stumble upon this - otherwise great - dataset.
Did you check 2023-04-12_oasst_all.messages.jsonl.gz or only the ready_for_export trees? According to https://open-assistant.io/stats there are now 6 Dutch trees ready - it is possible that at export time for OASST1 all Dutch trees were still in the growing phase.
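To see which languages actually appear in an export, one could tally the lang field over the parsed messages. A minimal sketch with made-up sample records (the field names are assumed to match the export format):

```python
from collections import Counter

# Hypothetical sample records; in the real export each line is one JSON
# object with (among other fields) a "lang" value per message.
messages = [
    {"message_id": "a", "lang": "en"},
    {"message_id": "b", "lang": "nl"},
    {"message_id": "c", "lang": "en"},
]

# Count how many messages exist per language tag.
lang_counts = Counter(m["lang"] for m in messages)
print(lang_counts.most_common())  # -> [('en', 2), ('nl', 1)]
```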
I just checked the default that is accessible through datasets:
import datasets
from pprint import pprint
oasst1 = datasets.load_dataset("OpenAssistant/oasst1")
pprint(oasst1.unique("lang"))
{'train': ['en',
'es',
'de',
'ru',
'ja',
'pt-BR',
'ca',
'fr',
'pl',
'vi',
'zh',
'hu',
'ko',
'eu',
'it',
'uk-UA',
'id',
'ar',
'fi',
'tr',
'da',
'th',
'sv',
'cs'],
'validation': ['ru',
'es',
'en',
'ca',
'uk-UA',
'zh',
'it',
'th',
'fr',
'de',
'eu',
'bn',
'pt-BR',
'vi',
'ja',
'hu']}
I would expect that the latest version is accessible in this way. If that is not correct, how should we access it instead through the datasets API?
From what I have tried, the JSON appears malformed, but maybe I am doing something wrong:

from huggingface_hub import hf_hub_download
import gzip
import json

local_path = hf_hub_download(repo_id="OpenAssistant/oasst1", filename="2023-04-12_oasst_all.messages.jsonl.gz", repo_type="dataset")
with gzip.open(local_path, "rt", encoding="utf-8") as fhin:
    data = json.load(fhin)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 1431)
The same happens when downloading the raw file directly:

import gzip
import json
import requests

response = requests.get("https://huggingface.co/datasets/OpenAssistant/oasst1/resolve/main/2023-04-12_oasst_all.messages.jsonl.gz")
decompressed_data = gzip.decompress(response.content)
data = json.loads(decompressed_data.decode("utf-8"))
And I also can't seem to decompress it locally @andreaskoepf
> gunzip -dk 2023-04-12_oasst_all.messages.jsonl.gz
gzip: 2023-04-12_oasst_all.messages.jsonl.gz: Too many levels of symbolic links
I just downloaded the file and verified that I could execute gunzip 2023-04-12_oasst_all.messages.jsonl.gz without any problems. The symbolic links error that you see is really strange. Could you verify that gzip/gunzip work on your machine? Normally one doesn't need to specify -d with gunzip, since gunzip is a script (at least on Ubuntu Linux) that calls gzip -d ....
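One hedged guess about the "Too many levels of symbolic links" error: hf_hub_download typically places files in the Hugging Face cache as symlinks to blob files, and some tools can trip over those links. If that is the cause, resolving the link before decompressing should work; a self-contained sketch that mimics the symlinked-cache layout:

```python
import gzip
import os
import tempfile

# Mimic the cache layout (assumption): a blob file plus a symlink to it.
tmpdir = tempfile.mkdtemp()
blob = os.path.join(tmpdir, "blob.gz")
with gzip.open(blob, "wt", encoding="utf-8") as f:
    f.write("hello\n")

link = os.path.join(tmpdir, "2023-04-12_oasst_all.messages.jsonl.gz")
os.symlink(blob, link)

# Follow the symlink to the actual file before handing it to gzip.
real_path = os.path.realpath(link)
with gzip.open(real_path, "rt", encoding="utf-8") as f:
    print(f.read())  # -> hello
```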
There is also a small Python package called oasst-data in the Open-Assistant git repository which can be used to read and write these files: https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data
Thanks for the responses @andreaskoepf. I was incorrectly treating the data as JSON instead of JSONL. Indeed, the "all messages" file does contain some Dutch data. Any reason why this data is not the default?
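For reference, reading a gzipped JSON Lines file means parsing each line as its own JSON document rather than calling json.load on the whole file. A minimal sketch (it writes a tiny sample file first so it is self-contained; the field names are only illustrative):

```python
import gzip
import json
import os
import tempfile

# Write a tiny gzipped JSONL sample as a stand-in for the real export.
path = os.path.join(tempfile.mkdtemp(), "sample.jsonl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps({"message_id": "a", "lang": "nl"}) + "\n")
    f.write(json.dumps({"message_id": "b", "lang": "en"}) + "\n")

# Parse line by line instead of json.load on the whole file.
with gzip.open(path, "rt", encoding="utf-8") as fhin:
    data = [json.loads(line) for line in fhin if line.strip()]

print(len(data))  # -> 2
```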