Loading methods
Methods for listing and loading datasets:
Datasets
datasets.load_dataset
< source >( path: str, name: typing.Optional[str] = None, data_dir: typing.Optional[str] = None, data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None, split: typing.Union[str, datasets.splits.Split, NoneType] = None, cache_dir: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, verification_mode: typing.Union[datasets.utils.info_utils.VerificationMode, str, NoneType] = None, keep_in_memory: typing.Optional[bool] = None, save_infos: bool = False, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, streaming: bool = False, num_proc: typing.Optional[int] = None, storage_options: typing.Optional[typing.Dict] = None, trust_remote_code: bool = None, **config_kwargs ) → Dataset or DatasetDict
Parameters
- path (str) — Path or name of the dataset.
  - if path is a dataset repository on the HF hub (list all available datasets with huggingface_hub.list_datasets) -> load the dataset from supported files in the repository (csv, json, parquet, etc.), e.g. 'username/dataset_name', a dataset repository on the HF hub containing the data files.
  - if path is a local directory -> load the dataset from supported files in the directory (csv, json, parquet, etc.), e.g. './path/to/directory/with/my/csv/data'.
  - if path is the name of a dataset builder and data_files or data_dir is specified (available builders are “json”, “csv”, “parquet”, “arrow”, “text”, “xml”, “webdataset”, “imagefolder”, “audiofolder”, “videofolder”) -> load the dataset from the files in data_files or data_dir, e.g. 'parquet'.
  It can also point to a local dataset script, but this is not recommended.
- name (str, optional) — Defining the name of the dataset configuration.
- data_dir (str, optional) — Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.
- data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
- split (Split or str) — Which split of the data to load. If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.
- cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".
- features (Features, optional) — Set the features type to use for this dataset.
- download_config (DownloadConfig, optional) — Specific download configuration parameters.
- download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
- verification_mode (VerificationMode or str, defaults to BASIC_CHECKS) — Verification mode determining the checks to run on the downloaded/processed dataset information (checksums/size/splits/…). Added in 2.9.1
- keep_in_memory (bool, defaults to None) — Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.
- save_infos (bool, defaults to False) — Save the dataset information (checksums/size/splits/…).
- revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
- token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
- streaming (bool, defaults to False) — If set to True, don’t download the data files. Instead, it streams the data progressively while iterating on the dataset. An IterableDataset or IterableDatasetDict is returned instead in this case. Note that streaming works for datasets that use data formats that support being iterated over, like txt, csv and jsonl. Json files may be downloaded completely. Also, streaming from remote zip or gzip files is supported, but other compressed formats like rar and xz are not yet supported. The tgz format doesn’t allow streaming.
- num_proc (int, optional, defaults to None) — Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default. Added in 2.7.0
- storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any. Added in 2.11.0
- trust_remote_code (bool, defaults to False) — Whether or not to allow for datasets defined on the Hub using a dataset script. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. Added in 2.16.0. Changed in 2.20.0: trust_remote_code defaults to False if not specified.
- **config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Returns
- if split is not None: the dataset requested,
- if split is None: a DatasetDict with each split.
or, if streaming=True, an IterableDataset or IterableDatasetDict:
- if split is not None: the dataset requested,
- if split is None: an ~datasets.streaming.IterableDatasetDict with each split.
Load a dataset from the Hugging Face Hub, or a local dataset.
You can find the list of datasets on the Hub or with huggingface_hub.list_datasets.
A dataset is a directory that contains some data files in generic formats (JSON, CSV, Parquet, etc.) and possibly in a generic structure (Webdataset, ImageFolder, AudioFolder, VideoFolder, etc.).
This function does the following under the hood:
Load a dataset builder:
- Find the most common data format in the dataset and pick its associated builder (JSON, CSV, Parquet, Webdataset, ImageFolder, AudioFolder, etc.)
- Find which file goes into which split (e.g. train/test) based on file and directory names or on the YAML configuration
- It is also possible to specify data_files manually, and which dataset builder to use (e.g. “parquet”).
Run the dataset builder:
In the general case:
Download the data files from the dataset if they are not already available locally or cached.
Process and cache the dataset as typed Arrow tables.
Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python generic types. They can be directly accessed from disk, loaded in RAM or even streamed over the web.
In the streaming case:
- Don’t download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it.
Return a dataset built from the requested splits in split (default: all).
It can also use a custom dataset builder if the dataset contains a dataset script, but this feature is mostly for backward compatibility. In this case the dataset script file must be named after the dataset repository or directory and end with “.py”.
Example:
Load a dataset from the Hugging Face Hub:
>>> from datasets import load_dataset
>>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='train')
# Load a subset or dataset configuration (here 'sst2')
>>> from datasets import load_dataset
>>> ds = load_dataset('nyu-mll/glue', 'sst2', split='train')
# Manual mapping of data files to splits
>>> data_files = {'train': 'train.csv', 'test': 'test.csv'}
>>> ds = load_dataset('namespace/your_dataset_name', data_files=data_files)
# Manual selection of a directory to load
>>> ds = load_dataset('namespace/your_dataset_name', data_dir='folder_name')
Load a local dataset:
# Load a CSV file
>>> from datasets import load_dataset
>>> ds = load_dataset('csv', data_files='path/to/local/my_dataset.csv')
# Load a JSON file
>>> from datasets import load_dataset
>>> ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
# Load from a local loading script (not recommended)
>>> from datasets import load_dataset
>>> ds = load_dataset('path/to/local/loading_script/loading_script.py', split='train')
Load an IterableDataset:
>>> from datasets import load_dataset
>>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='train', streaming=True)
datasets.load_from_disk
< source >( dataset_path: typing.Union[str, bytes, os.PathLike], keep_in_memory: typing.Optional[bool] = None, storage_options: typing.Optional[dict] = None ) → Dataset or DatasetDict
Parameters
- dataset_path (path-like) — Path (e.g. "dataset/train") or remote URI (e.g. "s3://my-bucket/dataset/train") of the Dataset or DatasetDict directory where the dataset/dataset-dict will be loaded from.
- keep_in_memory (bool, defaults to None) — Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.
- storage_options (dict, optional) — Key/value pairs to be passed on to the file-system backend, if any. Added in 2.9.0
Returns
- If dataset_path is a path of a dataset directory: the dataset requested.
- If dataset_path is a path of a dataset dict directory: a DatasetDict with each split.
Loads a dataset that was previously saved using save_to_disk() from a dataset directory, or from a filesystem using any implementation of fsspec.spec.AbstractFileSystem.
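Example (a minimal save/load round trip; the directory path is a placeholder):
>>> from datasets import load_dataset, load_from_disk
>>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='train')
>>> ds.save_to_disk('path/to/dataset/directory')
>>> ds = load_from_disk('path/to/dataset/directory')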
datasets.load_dataset_builder
< source >( path: str, name: typing.Optional[str] = None, data_dir: typing.Optional[str] = None, data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None, cache_dir: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, storage_options: typing.Optional[typing.Dict] = None, trust_remote_code: typing.Optional[bool] = None, _require_default_config_name = True, **config_kwargs )
Parameters
- path (str) — Path or name of the dataset.
  - if path is a dataset repository on the HF hub (list all available datasets with huggingface_hub.list_datasets) -> load the dataset builder from supported files in the repository (csv, json, parquet, etc.), e.g. 'username/dataset_name', a dataset repository on the HF hub containing the data files.
  - if path is a local directory -> load the dataset builder from supported files in the directory (csv, json, parquet, etc.), e.g. './path/to/directory/with/my/csv/data'.
  - if path is the name of a dataset builder and data_files or data_dir is specified (available builders are “json”, “csv”, “parquet”, “arrow”, “text”, “xml”, “webdataset”, “imagefolder”, “audiofolder”, “videofolder”) -> load the dataset builder from the files in data_files or data_dir, e.g. 'parquet'.
  It can also point to a local dataset script, but this is not recommended.
- name (str, optional) — Defining the name of the dataset configuration.
- data_dir (str, optional) — Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.
- data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
- cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".
- features (Features, optional) — Set the features type to use for this dataset.
- download_config (DownloadConfig, optional) — Specific download configuration parameters.
- download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
- revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
- token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
- storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any. Added in 2.11.0
- trust_remote_code (bool, defaults to False) — Whether or not to allow for datasets defined on the Hub using a dataset script. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. Added in 2.16.0. Changed in 2.20.0: trust_remote_code defaults to False if not specified.
- **config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Load a dataset builder which can be used to:
- Inspect general information that is required to build a dataset (cache directory, config, dataset info, features, data files, etc.)
- Download and prepare the dataset as Arrow files in the cache
- Get a streaming dataset without downloading or caching anything
You can find the list of datasets on the Hub or with huggingface_hub.list_datasets.
A dataset is a directory that contains some data files in generic formats (JSON, CSV, Parquet, etc.) and possibly in a generic structure (Webdataset, ImageFolder, AudioFolder, VideoFolder, etc.).
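Example (a typical inspect-then-build sketch; each call below is optional depending on whether you only want the metadata or the prepared dataset):
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder('cornell-movie-review-data/rotten_tomatoes')
>>> builder.info.features          # inspect the features without downloading anything
>>> builder.download_and_prepare() # download and cache the dataset as Arrow files
>>> ds = builder.as_dataset(split='train')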
datasets.get_dataset_config_names
< source >( path: str, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, dynamic_modules_path: typing.Optional[str] = None, data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None, **download_kwargs )
Parameters
- path (str) — path to the dataset processing script with the dataset builder. Can be either:
  - a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
  - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with huggingface_hub.list_datasets), e.g. 'rajpurkar/squad', 'nyu-mll/glue' or 'openai/webtext'
- revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default:
  - it is set to the local version of the lib.
  - it will also try to load it from the main branch if it’s not available at the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
- download_config (DownloadConfig, optional) — Specific download configuration parameters.
- download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
- dynamic_modules_path (str, defaults to ~/.cache/huggingface/modules/datasets_modules) — Optional path to the directory in which the dynamic modules are saved. It must have been initialized with init_dynamic_modules. By default the datasets are stored inside the datasets_modules module.
- data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.
- **download_kwargs (additional keyword arguments) — Optional attributes for DownloadConfig which will override the attributes in download_config if supplied, for example token.
Get the list of available config names for a particular dataset.
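Example (the returned list depends on the dataset; the comment shows a few of the real GLUE config names):
>>> from datasets import get_dataset_config_names
>>> get_dataset_config_names('nyu-mll/glue')  # e.g. ['cola', 'sst2', 'mrpc', ...]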
datasets.get_dataset_infos
< source >( path: str, data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, **config_kwargs )
Parameters
- path (str) — path to the dataset processing script with the dataset builder. Can be either:
  - a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
  - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with huggingface_hub.list_datasets), e.g. 'rajpurkar/squad', 'nyu-mll/glue' or 'openai/webtext'
- revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default:
  - it is set to the local version of the lib.
  - it will also try to load it from the main branch if it’s not available at the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
- download_config (DownloadConfig, optional) — Specific download configuration parameters.
- download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
- data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.
- token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
- **config_kwargs (additional keyword arguments) — Optional attributes for builder class which will override the attributes if supplied.
Get the meta information about a dataset, returned as a dict mapping config name to DatasetInfoDict.
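Example (a minimal sketch; the available config names depend on the dataset):
>>> from datasets import get_dataset_infos
>>> infos = get_dataset_infos('cornell-movie-review-data/rotten_tomatoes')
>>> list(infos)                          # config names
>>> next(iter(infos.values())).features  # DatasetInfo fields such as features, splits, ...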
datasets.get_dataset_split_names
< source >( path: str, config_name: typing.Optional[str] = None, data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, **config_kwargs )
Parameters
- path (str) — path to the dataset processing script with the dataset builder. Can be either:
  - a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
  - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with huggingface_hub.list_datasets), e.g. 'rajpurkar/squad', 'nyu-mll/glue' or 'openai/webtext'
- config_name (str, optional) — Defining the name of the dataset configuration.
- data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
- download_config (DownloadConfig, optional) — Specific download configuration parameters.
- download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
- revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
- token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
- **config_kwargs (additional keyword arguments) — Optional attributes for builder class which will override the attributes if supplied.
Get the list of available splits for a particular config and dataset.
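Example:
>>> from datasets import get_dataset_split_names
>>> get_dataset_split_names('cornell-movie-review-data/rotten_tomatoes')
['train', 'validation', 'test']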
From files
Configurations used to load data files. They are used when loading local files or a dataset repository:
- local files:
load_dataset("parquet", data_dir="path/to/data/dir")
- dataset repository:
load_dataset("allenai/c4")
You can pass arguments to load_dataset to configure data loading.
For example, you can specify the sep parameter to define the CsvConfig that is used to load the data:
load_dataset("csv", data_dir="path/to/data/dir", sep="\t")
Text
class datasets.packaged_modules.text.TextConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, encoding: str = 'utf-8', encoding_errors: typing.Optional[str] = None, chunksize: int = 10485760, keep_linebreaks: bool = False, sample_by: str = 'line' )
BuilderConfig for text files.
class datasets.packaged_modules.text.Text
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
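Example (a sketch with a placeholder file path; sample_by accepts 'line', 'paragraph' or 'document'):
>>> from datasets import load_dataset
>>> ds = load_dataset('text', data_files={'train': 'path/to/my_texts.txt'}, sample_by='paragraph')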
CSV
class datasets.packaged_modules.csv.CsvConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, sep: str = ',', delimiter: typing.Optional[str] = None, header: typing.Union[int, typing.List[int], str, NoneType] = 'infer', names: typing.Optional[typing.List[str]] = None, column_names: typing.Optional[typing.List[str]] = None, index_col: typing.Union[int, str, typing.List[int], typing.List[str], NoneType] = None, usecols: typing.Union[typing.List[int], typing.List[str], NoneType] = None, prefix: typing.Optional[str] = None, mangle_dupe_cols: bool = True, engine: typing.Optional[typing.Literal['c', 'python', 'pyarrow']] = None, converters: typing.Dict[typing.Union[int, str], typing.Callable[[typing.Any], typing.Any]] = None, true_values: typing.Optional[list] = None, false_values: typing.Optional[list] = None, skipinitialspace: bool = False, skiprows: typing.Union[int, typing.List[int], NoneType] = None, nrows: typing.Optional[int] = None, na_values: typing.Union[str, typing.List[str], NoneType] = None, keep_default_na: bool = True, na_filter: bool = True, verbose: bool = False, skip_blank_lines: bool = True, thousands: typing.Optional[str] = None, decimal: str = '.', lineterminator: typing.Optional[str] = None, quotechar: str = '"', quoting: int = 0, escapechar: typing.Optional[str] = None, comment: typing.Optional[str] = None, encoding: typing.Optional[str] = None, dialect: typing.Optional[str] = None, error_bad_lines: bool = True, warn_bad_lines: bool = True, skipfooter: int = 0, doublequote: bool = True, memory_map: bool = False, float_precision: typing.Optional[str] = None, chunksize: int = 10000, features: typing.Optional[datasets.features.features.Features] = None, encoding_errors: typing.Optional[str] = 'strict', on_bad_lines: typing.Literal['error', 'warn', 'skip'] = 'error', date_format: typing.Optional[str] = None )
BuilderConfig for CSV.
class datasets.packaged_modules.csv.Csv
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
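Example (a sketch for a tab-separated file; the path is a placeholder, and sep/skiprows are CsvConfig options shown above):
>>> from datasets import load_dataset
>>> ds = load_dataset('csv', data_files='path/to/my_data.tsv', sep='\t', skiprows=1)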
JSON
class datasets.packaged_modules.json.JsonConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, encoding: str = 'utf-8', encoding_errors: typing.Optional[str] = None, field: typing.Optional[str] = None, use_threads: bool = True, block_size: typing.Optional[int] = None, chunksize: int = 10485760, newlines_in_values: typing.Optional[bool] = None )
BuilderConfig for JSON.
class datasets.packaged_modules.json.Json
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
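Example (a sketch assuming a placeholder file whose records are nested under a top-level "data" key, which is what the field option is for):
>>> from datasets import load_dataset
>>> ds = load_dataset('json', data_files='path/to/my_file.json', field='data')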
XML
class datasets.packaged_modules.xml.XmlConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, encoding: str = 'utf-8', encoding_errors: typing.Optional[str] = None )
BuilderConfig for xml files.
class datasets.packaged_modules.xml.Xml
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
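Example (a sketch with a placeholder file path):
>>> from datasets import load_dataset
>>> ds = load_dataset('xml', data_files='path/to/my_file.xml', encoding='utf-8')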
Parquet
class datasets.packaged_modules.parquet.ParquetConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, batch_size: typing.Optional[int] = None, columns: typing.Optional[typing.List[str]] = None, features: typing.Optional[datasets.features.features.Features] = None, filters: typing.Union[pyarrow._compute.Expression, typing.List[tuple], typing.List[typing.List[tuple]], NoneType] = None )
BuilderConfig for Parquet.
class datasets.packaged_modules.parquet.Parquet
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
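Example (a sketch; the file path and column names are placeholders, and columns is the ParquetConfig option shown above):
>>> from datasets import load_dataset
>>> ds = load_dataset('parquet', data_files={'train': 'path/to/train.parquet'}, columns=['text', 'label'])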
Arrow
class datasets.packaged_modules.arrow.ArrowConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None )
BuilderConfig for Arrow.
class datasets.packaged_modules.arrow.Arrow
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
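Example (a sketch with a placeholder file path):
>>> from datasets import load_dataset
>>> ds = load_dataset('arrow', data_files={'train': 'path/to/train.arrow'})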
SQL
class datasets.packaged_modules.sql.SqlConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, sql: typing.Union[str, ForwardRef('sqlalchemy.sql.Selectable')] = None, con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')] = None, index_col: typing.Union[str, typing.List[str], NoneType] = None, coerce_float: bool = True, params: typing.Union[typing.List, typing.Tuple, typing.Dict, NoneType] = None, parse_dates: typing.Union[typing.List, typing.Dict, NoneType] = None, columns: typing.Optional[typing.List[str]] = None, chunksize: typing.Optional[int] = 10000, features: typing.Optional[datasets.features.features.Features] = None )
BuilderConfig for SQL.
class datasets.packaged_modules.sql.Sql
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
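This builder is typically used through Dataset.from_sql rather than load_dataset. A minimal sketch, assuming a hypothetical SQLite database path, table and columns:
>>> from datasets import Dataset
>>> ds = Dataset.from_sql('SELECT text, label FROM my_table', con='sqlite:///path/to/my.db')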
Images
class datasets.packaged_modules.imagefolder.ImageFolderConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, drop_labels: bool = None, drop_metadata: bool = None )
BuilderConfig for ImageFolder.
class datasets.packaged_modules.imagefolder.ImageFolder
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
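Example (a sketch with a placeholder directory of image files):
>>> from datasets import load_dataset
>>> ds = load_dataset('imagefolder', data_dir='path/to/images', split='train')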
Audio
class datasets.packaged_modules.audiofolder.AudioFolderConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, drop_labels: bool = None, drop_metadata: bool = None )
BuilderConfig for AudioFolder.
class datasets.packaged_modules.audiofolder.AudioFolder
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
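Example (a sketch with a placeholder directory of audio files):
>>> from datasets import load_dataset
>>> ds = load_dataset('audiofolder', data_dir='path/to/audio_files')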
Videos
class datasets.packaged_modules.videofolder.VideoFolderConfig
< source >( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, drop_labels: bool = None, drop_metadata: bool = None )
BuilderConfig for VideoFolder.
class datasets.packaged_modules.videofolder.VideoFolder
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
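Example (a sketch with a placeholder directory of video files):
>>> from datasets import load_dataset
>>> ds = load_dataset('videofolder', data_dir='path/to/videos', split='train')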
WebDataset
class datasets.packaged_modules.webdataset.WebDataset
< source >( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, **config_kwargs )
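Example (a sketch; the glob pattern over local .tar shards is a placeholder, and streaming is optional):
>>> from datasets import load_dataset
>>> ds = load_dataset('webdataset', data_files={'train': 'path/to/train-*.tar'}, split='train', streaming=True)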